CINXE.COM

Search results for: facial recognition

<!DOCTYPE html> <html lang="en" dir="ltr"> <head> <!-- Google tag (gtag.js) --> <script async src="https://www.googletagmanager.com/gtag/js?id=G-P63WKM1TM1"></script> <script> window.dataLayer = window.dataLayer || []; function gtag(){dataLayer.push(arguments);} gtag('js', new Date()); gtag('config', 'G-P63WKM1TM1'); </script> <!-- Yandex.Metrika counter --> <script type="text/javascript" > (function(m,e,t,r,i,k,a){m[i]=m[i]||function(){(m[i].a=m[i].a||[]).push(arguments)}; m[i].l=1*new Date(); for (var j = 0; j < document.scripts.length; j++) {if (document.scripts[j].src === r) { return; }} k=e.createElement(t),a=e.getElementsByTagName(t)[0],k.async=1,k.src=r,a.parentNode.insertBefore(k,a)}) (window, document, "script", "https://mc.yandex.ru/metrika/tag.js", "ym"); ym(55165297, "init", { clickmap:false, trackLinks:true, accurateTrackBounce:true, webvisor:false }); </script> <noscript><div><img src="https://mc.yandex.ru/watch/55165297" style="position:absolute; left:-9999px;" alt="" /></div></noscript> <!-- /Yandex.Metrika counter --> <!-- Matomo --> <!-- End Matomo Code --> <title>Search results for: facial recognition</title> <meta name="description" content="Search results for: facial recognition"> <meta name="keywords" content="facial recognition"> <meta name="viewport" content="width=device-width, initial-scale=1, minimum-scale=1, maximum-scale=1, user-scalable=no"> <meta charset="utf-8"> <link href="https://cdn.waset.org/favicon.ico" type="image/x-icon" rel="shortcut icon"> <link href="https://cdn.waset.org/static/plugins/bootstrap-4.2.1/css/bootstrap.min.css" rel="stylesheet"> <link href="https://cdn.waset.org/static/plugins/fontawesome/css/all.min.css" rel="stylesheet"> <link href="https://cdn.waset.org/static/css/site.css?v=150220211555" rel="stylesheet"> </head> <body> <header> <div class="container"> <nav class="navbar navbar-expand-lg navbar-light"> <a class="navbar-brand" href="https://waset.org"> <img src="https://cdn.waset.org/static/images/wasetc.png" alt="Open Science Research Excellence" title="Open Science Research Excellence" /> </a> <button class="d-block d-lg-none navbar-toggler ml-auto" type="button" data-toggle="collapse" data-target="#navbarMenu" aria-controls="navbarMenu" aria-expanded="false" aria-label="Toggle navigation"> <span class="navbar-toggler-icon"></span> </button> <div class="w-100"> <div class="d-none d-lg-flex flex-row-reverse"> <form method="get" action="https://waset.org/search" class="form-inline my-2 my-lg-0"> <input class="form-control mr-sm-2" type="search" placeholder="Search Conferences" value="facial recognition" name="q" aria-label="Search"> <button class="btn btn-light my-2 my-sm-0" type="submit"><i class="fas fa-search"></i></button> </form> </div> <div class="collapse navbar-collapse mt-1" id="navbarMenu"> <ul class="navbar-nav ml-auto align-items-center" id="mainNavMenu"> <li class="nav-item"> <a class="nav-link" href="https://waset.org/conferences" title="Conferences in 2024/2025/2026">Conferences</a> </li> <li class="nav-item"> <a class="nav-link" href="https://waset.org/disciplines" title="Disciplines">Disciplines</a> </li> <li class="nav-item"> <a class="nav-link" href="https://waset.org/committees" rel="nofollow">Committees</a> </li> <li class="nav-item dropdown"> <a class="nav-link dropdown-toggle" href="#" id="navbarDropdownPublications" role="button" data-toggle="dropdown" aria-haspopup="true" aria-expanded="false"> Publications </a> <div class="dropdown-menu" aria-labelledby="navbarDropdownPublications"> <a class="dropdown-item" href="https://publications.waset.org/abstracts">Abstracts</a> <a class="dropdown-item" href="https://publications.waset.org">Periodicals</a> <a class="dropdown-item" href="https://publications.waset.org/archive">Archive</a> </div> </li> <li class="nav-item"> <a class="nav-link" href="https://waset.org/page/support" title="Support">Support</a> </li> </ul> </div> </div> </nav> </div> </header> <main> <div class="container mt-4"> <div class="row"> <div class="col-md-9 mx-auto"> <form method="get" action="https://publications.waset.org/search"> <div id="custom-search-input"> <div class="input-group"> <i class="fas fa-search"></i> <input type="text" class="search-query" name="q" placeholder="Author, Title, Abstract, Keywords" value="facial recognition"> <input type="submit" class="btn_search" value="Search"> </div> </div> </form> </div> </div> <div class="row mt-3"> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Commenced</strong> in January 2007</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Frequency:</strong> Monthly</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Edition:</strong> International</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Paper Count:</strong> 859</div> </div> </div> </div> <h1 class="mt-3 mb-3 text-center" style="font-size:1.6rem;">Search results for: facial recognition</h1> <div class="card publication-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">859</span> Comparing Emotion Recognition from Voice and Facial Data Using Time Invariant Features</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/search?q=Vesna%20Kirandziska">Vesna Kirandziska</a>, <a href="https://publications.waset.org/search?q=Nevena%20Ackovska"> Nevena Ackovska</a>, <a href="https://publications.waset.org/search?q=Ana%20Madevska%20Bogdanova"> Ana Madevska Bogdanova</a> </p> <p class="card-text"><strong>Abstract:</strong></p> <p>The problem of emotion recognition is a challenging problem. It is still an open problem from the aspect of both intelligent systems and psychology. In this paper, both voice features and facial features are used for building an emotion recognition system. A Support Vector Machine classifiers are built by using raw data from video recordings. In this paper, the results obtained for the emotion recognition are given, and a discussion about the validity and the expressiveness of different emotions is presented. A comparison between the classifiers build from facial data only, voice data only and from the combination of both data is made here. The need for a better combination of the information from facial expression and voice data is argued.</p> <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/search?q=Emotion%20recognition" title="Emotion recognition">Emotion recognition</a>, <a href="https://publications.waset.org/search?q=facial%20recognition" title=" facial recognition"> facial recognition</a>, <a href="https://publications.waset.org/search?q=signal%20processing" title=" signal processing"> signal processing</a>, <a href="https://publications.waset.org/search?q=machine%20learning." title=" machine learning. "> machine learning. </a> </p> <a href="https://publications.waset.org/10004221/comparing-emotion-recognition-from-voice-and-facial-data-using-time-invariant-features" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/10004221/apa" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">APA</a> <a href="https://publications.waset.org/10004221/bibtex" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">BibTeX</a> <a href="https://publications.waset.org/10004221/chicago" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Chicago</a> <a href="https://publications.waset.org/10004221/endnote" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">EndNote</a> <a href="https://publications.waset.org/10004221/harvard" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Harvard</a> <a href="https://publications.waset.org/10004221/json" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">JSON</a> <a href="https://publications.waset.org/10004221/mla" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">MLA</a> <a href="https://publications.waset.org/10004221/ris" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">RIS</a> <a href="https://publications.waset.org/10004221/xml" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">XML</a> <a href="https://publications.waset.org/10004221/iso690" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">ISO 690</a> <a href="https://publications.waset.org/10004221.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">2019</span> </span> </div> </div> <div class="card publication-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">858</span> A Survey on Facial Feature Points Detection Techniques and Approaches</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/search?q=Rachid%20Ahdid">Rachid Ahdid</a>, <a href="https://publications.waset.org/search?q=Khaddouj%20Taifi"> Khaddouj Taifi</a>, <a href="https://publications.waset.org/search?q=Said%20Safi"> Said Safi</a>, <a href="https://publications.waset.org/search?q=Bouzid%20Manaut"> Bouzid Manaut</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Automatic detection of facial feature points plays an important role in applications such as facial feature tracking, human-machine interaction and face recognition. The majority of facial feature points detection methods using two-dimensional or three-dimensional data are covered in existing survey papers. In this article chosen approaches to the facial features detection have been gathered and described. This overview focuses on the class of researches exploiting facial feature points detection to represent facial surface for two-dimensional or three-dimensional face. In the conclusion, we discusses advantages and disadvantages of the presented algorithms. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/search?q=Facial%20feature%20points" title="Facial feature points">Facial feature points</a>, <a href="https://publications.waset.org/search?q=face%20recognition" title=" face recognition"> face recognition</a>, <a href="https://publications.waset.org/search?q=facial%20feature%0D%0Atracking" title=" facial feature tracking"> facial feature tracking</a>, <a href="https://publications.waset.org/search?q=two-dimensional%20data" title=" two-dimensional data"> two-dimensional data</a>, <a href="https://publications.waset.org/search?q=three-dimensional%20data." title=" three-dimensional data."> three-dimensional data.</a> </p> <a href="https://publications.waset.org/10005826/a-survey-on-facial-feature-points-detection-techniques-and-approaches" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/10005826/apa" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">APA</a> <a href="https://publications.waset.org/10005826/bibtex" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">BibTeX</a> <a href="https://publications.waset.org/10005826/chicago" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Chicago</a> <a href="https://publications.waset.org/10005826/endnote" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">EndNote</a> <a href="https://publications.waset.org/10005826/harvard" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Harvard</a> <a href="https://publications.waset.org/10005826/json" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">JSON</a> <a href="https://publications.waset.org/10005826/mla" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">MLA</a> <a href="https://publications.waset.org/10005826/ris" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">RIS</a> <a href="https://publications.waset.org/10005826/xml" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">XML</a> <a href="https://publications.waset.org/10005826/iso690" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">ISO 690</a> <a href="https://publications.waset.org/10005826.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">1681</span> </span> </div> </div> <div class="card publication-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">857</span> Deep-Learning Based Approach to Facial Emotion Recognition Through Convolutional Neural Network</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/search?q=Nouha%20Khediri">Nouha Khediri</a>, <a href="https://publications.waset.org/search?q=Mohammed%20Ben%20Ammar"> Mohammed Ben Ammar</a>, <a href="https://publications.waset.org/search?q=Monji%20Kherallah"> Monji Kherallah</a> </p> <p class="card-text"><strong>Abstract:</strong></p> <p>Recently, facial emotion recognition (FER) has become increasingly essential to understand the state of the human mind. However, accurately classifying emotion from the face is a challenging task. In this paper, we present a facial emotion recognition approach named CV-FER benefiting from deep learning, especially CNN and VGG16. First, the data are pre-processed with data cleaning and data rotation. Then, we augment the data and proceed to our FER model, which contains five convolutions layers and five pooling layers. Finally, a softmax classifier is used in the output layer to recognize emotions. Based on the above contents, this paper reviews the works of facial emotion recognition based on deep learning. Experiments show that our model outperforms the other methods using the same FER2013 database and yields a recognition rate of 92%. We also put forward some suggestions for future work. </p> <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/search?q=CNN" title="CNN">CNN</a>, <a href="https://publications.waset.org/search?q=deep-learning" title=" deep-learning"> deep-learning</a>, <a href="https://publications.waset.org/search?q=facial%20emotion%20recognition" title=" facial emotion recognition"> facial emotion recognition</a>, <a href="https://publications.waset.org/search?q=machine%20learning." title=" machine learning."> machine learning.</a> </p> <a href="https://publications.waset.org/10012968/deep-learning-based-approach-to-facial-emotion-recognition-through-convolutional-neural-network" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/10012968/apa" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">APA</a> <a href="https://publications.waset.org/10012968/bibtex" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">BibTeX</a> <a href="https://publications.waset.org/10012968/chicago" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Chicago</a> <a href="https://publications.waset.org/10012968/endnote" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">EndNote</a> <a href="https://publications.waset.org/10012968/harvard" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Harvard</a> <a href="https://publications.waset.org/10012968/json" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">JSON</a> <a href="https://publications.waset.org/10012968/mla" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">MLA</a> <a href="https://publications.waset.org/10012968/ris" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">RIS</a> <a href="https://publications.waset.org/10012968/xml" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">XML</a> <a href="https://publications.waset.org/10012968/iso690" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">ISO 690</a> <a href="https://publications.waset.org/10012968.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">710</span> </span> </div> </div> <div class="card publication-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">856</span> Facial Recognition on the Basis of Facial Fragments</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/search?q=Tetyana%20Baydyk">Tetyana Baydyk</a>, <a href="https://publications.waset.org/search?q=Ernst%20Kussul"> Ernst Kussul</a>, <a href="https://publications.waset.org/search?q=Sandra%20Bonilla%20Meza"> Sandra Bonilla Meza</a> </p> <p class="card-text"><strong>Abstract:</strong></p> <p>There are many articles that attempt to establish the role of different facial fragments in face recognition. Various approaches are used to estimate this role. Frequently, authors calculate the entropy corresponding to the fragment. This approach can only give approximate estimation. In this paper, we propose to use a more direct measure of the importance of different fragments for face recognition. We propose to select a recognition method and a face database and experimentally investigate the recognition rate using different fragments of faces. We present two such experiments in the paper. We selected the PCNC neural classifier as a method for face recognition and parts of the LFW (Labeled Faces in the Wild<em>) </em>face database as training and testing sets. The recognition rate of the best experiment is comparable with the recognition rate obtained using the whole face.</p> <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/search?q=Face%20recognition" title="Face recognition">Face recognition</a>, <a href="https://publications.waset.org/search?q=Labeled%20Faces%20in%20the%20Wild%20%28LFW%29%20database" title=" Labeled Faces in the Wild (LFW) database"> Labeled Faces in the Wild (LFW) database</a>, <a href="https://publications.waset.org/search?q=Random%20Local%20Descriptor%20%28RLD%29" title=" Random Local Descriptor (RLD)"> Random Local Descriptor (RLD)</a>, <a href="https://publications.waset.org/search?q=random%20features." title=" random features."> random features.</a> </p> <a href="https://publications.waset.org/10007234/facial-recognition-on-the-basis-of-facial-fragments" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/10007234/apa" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">APA</a> <a href="https://publications.waset.org/10007234/bibtex" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">BibTeX</a> <a href="https://publications.waset.org/10007234/chicago" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Chicago</a> <a href="https://publications.waset.org/10007234/endnote" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">EndNote</a> <a href="https://publications.waset.org/10007234/harvard" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Harvard</a> <a href="https://publications.waset.org/10007234/json" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">JSON</a> <a href="https://publications.waset.org/10007234/mla" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">MLA</a> <a href="https://publications.waset.org/10007234/ris" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">RIS</a> <a href="https://publications.waset.org/10007234/xml" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">XML</a> <a href="https://publications.waset.org/10007234/iso690" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">ISO 690</a> <a href="https://publications.waset.org/10007234.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">1014</span> </span> </div> </div> <div class="card publication-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">855</span> Robust Face Recognition using AAM and Gabor Features</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/search?q=Sanghoon%20Kim">Sanghoon Kim</a>, <a href="https://publications.waset.org/search?q=Sun-Tae%20Chung"> Sun-Tae Chung</a>, <a href="https://publications.waset.org/search?q=Souhwan%20Jung"> Souhwan Jung</a>, <a href="https://publications.waset.org/search?q=Seoungseon%20Jeon"> Seoungseon Jeon</a>, <a href="https://publications.waset.org/search?q=Jaemin%20Kim"> Jaemin Kim</a>, <a href="https://publications.waset.org/search?q=Seongwon%20Cho"> Seongwon Cho</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In this paper, we propose a face recognition algorithm using AAM and Gabor features. Gabor feature vectors which are well known to be robust with respect to small variations of shape, scaling, rotation, distortion, illumination and poses in images are popularly employed for feature vectors for many object detection and recognition algorithms. EBGM, which is prominent among face recognition algorithms employing Gabor feature vectors, requires localization of facial feature points where Gabor feature vectors are extracted. However, localization method employed in EBGM is based on Gabor jet similarity and is sensitive to initial values. Wrong localization of facial feature points affects face recognition rate. AAM is known to be successfully applied to localization of facial feature points. In this paper, we devise a facial feature point localization method which first roughly estimate facial feature points using AAM and refine facial feature points using Gabor jet similarity-based facial feature localization method with initial points set by the rough facial feature points obtained from AAM, and propose a face recognition algorithm using the devised localization method for facial feature localization and Gabor feature vectors. It is observed through experiments that such a cascaded localization method based on both AAM and Gabor jet similarity is more robust than the localization method based on only Gabor jet similarity. Also, it is shown that the proposed face recognition algorithm using this devised localization method and Gabor feature vectors performs better than the conventional face recognition algorithm using Gabor jet similarity-based localization method and Gabor feature vectors like EBGM. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/search?q=Face%20Recognition" title="Face Recognition">Face Recognition</a>, <a href="https://publications.waset.org/search?q=AAM" title=" AAM"> AAM</a>, <a href="https://publications.waset.org/search?q=Gabor%20features" title=" Gabor features"> Gabor features</a>, <a href="https://publications.waset.org/search?q=EBGM." title=" EBGM."> EBGM.</a> </p> <a href="https://publications.waset.org/3583/robust-face-recognition-using-aam-and-gabor-features" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/3583/apa" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">APA</a> <a href="https://publications.waset.org/3583/bibtex" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">BibTeX</a> <a href="https://publications.waset.org/3583/chicago" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Chicago</a> <a href="https://publications.waset.org/3583/endnote" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">EndNote</a> <a href="https://publications.waset.org/3583/harvard" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Harvard</a> <a href="https://publications.waset.org/3583/json" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">JSON</a> <a href="https://publications.waset.org/3583/mla" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">MLA</a> <a href="https://publications.waset.org/3583/ris" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">RIS</a> <a href="https://publications.waset.org/3583/xml" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">XML</a> <a href="https://publications.waset.org/3583/iso690" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">ISO 690</a> <a href="https://publications.waset.org/3583.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">2208</span> </span> </div> </div> <div class="card publication-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">854</span> Facial Expression Phoenix (FePh): An Annotated Sequenced Dataset for Facial and Emotion-Specified Expressions in Sign Language</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/search?q=Marie%20Alaghband">Marie Alaghband</a>, <a href="https://publications.waset.org/search?q=Niloofar%20Yousefi"> Niloofar Yousefi</a>, <a href="https://publications.waset.org/search?q=Ivan%20Garibay"> Ivan Garibay</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Facial expressions are important parts of both gesture and sign language recognition systems. Despite the recent advances in both fields, annotated facial expression datasets in the context of sign language are still scarce resources. In this manuscript, we introduce an annotated sequenced facial expression dataset in the context of sign language, comprising over 3000 facial images extracted from the daily news and weather forecast of the public tv-station PHOENIX. Unlike the majority of currently existing facial expression datasets, FePh provides sequenced semi-blurry facial images with different head poses, orientations, and movements. In addition, in the majority of images, identities are mouthing the words, which makes the data more challenging. To annotate this dataset we consider primary, secondary, and tertiary dyads of seven basic emotions of &quot;sad&quot;, &quot;surprise&quot;, &quot;fear&quot;, &quot;angry&quot;, &quot;neutral&quot;, &quot;disgust&quot;, and &quot;happy&quot;. We also considered the &quot;None&quot; class if the image&rsquo;s facial expression could not be described by any of the aforementioned emotions. Although we provide FePh as a facial expression dataset of signers in sign language, it has a wider application in gesture recognition and Human Computer Interaction (HCI) systems. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/search?q=Annotated%20Facial%20Expression%20Dataset" title="Annotated Facial Expression Dataset">Annotated Facial Expression Dataset</a>, <a href="https://publications.waset.org/search?q=Sign%20Language%0D%0ARecognition" title=" Sign Language Recognition"> Sign Language Recognition</a>, <a href="https://publications.waset.org/search?q=Gesture%20Recognition" title=" Gesture Recognition"> Gesture Recognition</a>, <a href="https://publications.waset.org/search?q=Sequenced%20Facial%20Expression%0D%0ADataset." title=" Sequenced Facial Expression Dataset."> Sequenced Facial Expression Dataset.</a> </p> <a href="https://publications.waset.org/10011933/facial-expression-phoenix-feph-an-annotated-sequenced-dataset-for-facial-and-emotion-specified-expressions-in-sign-language" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/10011933/apa" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">APA</a> <a href="https://publications.waset.org/10011933/bibtex" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">BibTeX</a> <a href="https://publications.waset.org/10011933/chicago" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Chicago</a> <a href="https://publications.waset.org/10011933/endnote" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">EndNote</a> <a href="https://publications.waset.org/10011933/harvard" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Harvard</a> <a href="https://publications.waset.org/10011933/json" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">JSON</a> <a href="https://publications.waset.org/10011933/mla" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">MLA</a> <a href="https://publications.waset.org/10011933/ris" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">RIS</a> <a href="https://publications.waset.org/10011933/xml" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">XML</a> <a href="https://publications.waset.org/10011933/iso690" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">ISO 690</a> <a href="https://publications.waset.org/10011933.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">724</span> </span> </div> </div> <div class="card publication-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">853</span> Improving the Performance of Deep Learning in Facial Emotion Recognition with Image Sharpening</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/search?q=Ksheeraj%20Sai%20Vepuri">Ksheeraj Sai Vepuri</a>, <a href="https://publications.waset.org/search?q=Nada%20Attar"> Nada Attar</a> </p> <p class="card-text"><strong>Abstract:</strong></p> We as humans use words with accompanying visual and facial cues to communicate effectively. Classifying facial emotion using computer vision methodologies has been an active research area in the computer vision field. In this paper, we propose a simple method for facial expression recognition that enhances accuracy. We tested our method on the FER-2013 dataset that contains static images. Instead of using Histogram equalization to preprocess the dataset, we used Unsharp Mask to emphasize texture and details and sharpened the edges. We also used ImageDataGenerator from Keras library for data augmentation. Then we used Convolutional Neural Networks (CNN) model to classify the images into 7 different facial expressions, yielding an accuracy of 69.46% on the test set. Our results show that using image preprocessing such as the sharpening technique for a CNN model can improve the performance, even when the CNN model is relatively simple. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/search?q=Facial%20expression%20recognition" title="Facial expression recognition">Facial expression recognition</a>, <a href="https://publications.waset.org/search?q=image%20pre-processing" title=" image pre-processing"> image pre-processing</a>, <a href="https://publications.waset.org/search?q=deep%20learning" title=" deep learning"> deep learning</a>, <a href="https://publications.waset.org/search?q=CNN." title=" CNN. "> CNN. </a> </p> <a href="https://publications.waset.org/10011940/improving-the-performance-of-deep-learning-in-facial-emotion-recognition-with-image-sharpening" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/10011940/apa" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">APA</a> <a href="https://publications.waset.org/10011940/bibtex" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">BibTeX</a> <a href="https://publications.waset.org/10011940/chicago" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Chicago</a> <a href="https://publications.waset.org/10011940/endnote" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">EndNote</a> <a href="https://publications.waset.org/10011940/harvard" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Harvard</a> <a href="https://publications.waset.org/10011940/json" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">JSON</a> <a href="https://publications.waset.org/10011940/mla" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">MLA</a> <a href="https://publications.waset.org/10011940/ris" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">RIS</a> <a href="https://publications.waset.org/10011940/xml" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">XML</a> <a href="https://publications.waset.org/10011940/iso690" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">ISO 690</a> <a href="https://publications.waset.org/10011940.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">544</span> </span> </div> </div> <div class="card publication-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">852</span> RBF Based Face Recognition and Expression Analysis</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/search?q=Praseeda%20Lekshmi.V">Praseeda Lekshmi.V</a>, <a href="https://publications.waset.org/search?q=Dr.M.Sasikumar"> Dr.M.Sasikumar</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Facial recognition and expression analysis is rapidly becoming an area of intense interest in computer science and humancomputer interaction design communities. The most expressive way humans display emotions is through facial expressions. In this paper skin and non-skin pixels were separated. Face regions were extracted from the detected skin regions. Facial expressions are analyzed from facial images by applying Gabor wavelet transform (GWT) and Discrete Cosine Transform (DCT) on face images. Radial Basis Function (RBF) Network is used to identify the person and to classify the facial expressions. Our method reliably works even with faces, which carry heavy expressions. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/search?q=Face%20Recognition" title="Face Recognition">Face Recognition</a>, <a href="https://publications.waset.org/search?q=Radial%20Basis%20Function" title=" Radial Basis Function"> Radial Basis Function</a>, <a href="https://publications.waset.org/search?q=Gabor%20Wavelet%20Transform" title="Gabor Wavelet Transform">Gabor Wavelet Transform</a>, <a href="https://publications.waset.org/search?q=Discrete%20Cosine%20Transform" title=" Discrete Cosine Transform"> Discrete Cosine Transform</a> </p> <a href="https://publications.waset.org/9404/rbf-based-face-recognition-and-expression-analysis" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/9404/apa" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">APA</a> <a href="https://publications.waset.org/9404/bibtex" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">BibTeX</a> <a href="https://publications.waset.org/9404/chicago" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Chicago</a> <a href="https://publications.waset.org/9404/endnote" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">EndNote</a> <a href="https://publications.waset.org/9404/harvard" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Harvard</a> <a href="https://publications.waset.org/9404/json" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">JSON</a> <a href="https://publications.waset.org/9404/mla" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">MLA</a> <a href="https://publications.waset.org/9404/ris" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">RIS</a> <a href="https://publications.waset.org/9404/xml" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">XML</a> <a href="https://publications.waset.org/9404/iso690" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">ISO 690</a> <a href="https://publications.waset.org/9404.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">1596</span> </span> </div> </div> <div class="card publication-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">851</span> An Efficient Algorithm for Motion Detection Based Facial Expression Recognition using Optical Flow</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/search?q=Ahmad%20R.%20Naghsh-Nilchi">Ahmad R. Naghsh-Nilchi</a>, <a href="https://publications.waset.org/search?q=Mohammad%20Roshanzamir"> Mohammad Roshanzamir</a> </p> <p class="card-text"><strong>Abstract:</strong></p> One of the popular methods for recognition of facial expressions such as happiness, sadness and surprise is based on deformation of facial features. Motion vectors which show these deformations can be specified by the optical flow. In this method, for detecting emotions, the resulted set of motion vectors are compared with standard deformation template that caused by facial expressions. In this paper, a new method is introduced to compute the quantity of likeness in order to make decision based on the importance of obtained vectors from an optical flow approach. For finding the vectors, one of the efficient optical flow method developed by Gautama and VanHulle[17] is used. The suggested method has been examined over Cohn-Kanade AU-Coded Facial Expression Database, one of the most comprehensive collections of test images available. The experimental results show that our method could correctly recognize the facial expressions in 94% of case studies. The results also show that only a few number of image frames (three frames) are sufficient to detect facial expressions with rate of success of about 83.3%. This is a significant improvement over the available methods. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/search?q=Facial%20expression" title="Facial expression">Facial expression</a>, <a href="https://publications.waset.org/search?q=Facial%20features" title=" Facial features"> Facial features</a>, <a href="https://publications.waset.org/search?q=Optical%20flow" title=" Optical flow"> Optical flow</a>, <a href="https://publications.waset.org/search?q=Motion%20vectors." title="Motion vectors.">Motion vectors.</a> </p> <a href="https://publications.waset.org/6512/an-efficient-algorithm-for-motion-detection-based-facial-expression-recognition-using-optical-flow" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/6512/apa" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">APA</a> <a href="https://publications.waset.org/6512/bibtex" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">BibTeX</a> <a href="https://publications.waset.org/6512/chicago" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Chicago</a> <a href="https://publications.waset.org/6512/endnote" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">EndNote</a> <a href="https://publications.waset.org/6512/harvard" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Harvard</a> <a href="https://publications.waset.org/6512/json" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">JSON</a> <a href="https://publications.waset.org/6512/mla" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">MLA</a> <a href="https://publications.waset.org/6512/ris" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">RIS</a> <a href="https://publications.waset.org/6512/xml" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">XML</a> <a href="https://publications.waset.org/6512/iso690" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">ISO 690</a> <a href="https://publications.waset.org/6512.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">2376</span> </span> </div> </div> <div class="card publication-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">850</span> Human Facial Expression Recognition using MANFIS Model</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/search?q=V.%20Gomathi">V. Gomathi</a>, <a href="https://publications.waset.org/search?q=Dr.%20K.%20Ramar"> Dr. K. Ramar</a>, <a href="https://publications.waset.org/search?q=A.%20Santhiyaku%20Jeevakumar"> A. Santhiyaku Jeevakumar</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Facial expression analysis plays a significant role for human computer interaction. Automatic analysis of human facial expression is still a challenging problem with many applications. In this paper, we propose neuro-fuzzy based automatic facial expression recognition system to recognize the human facial expressions like happy, fear, sad, angry, disgust and surprise. Initially facial image is segmented into three regions from which the uniform Local Binary Pattern (LBP) texture features distributions are extracted and represented as a histogram descriptor. The facial expressions are recognized using Multiple Adaptive Neuro Fuzzy Inference System (MANFIS). The proposed system designed and tested with JAFFE face database. The proposed model reports 94.29% of classification accuracy. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/search?q=Adaptive%20neuro-fuzzy%20inference%20system" title="Adaptive neuro-fuzzy inference system">Adaptive neuro-fuzzy inference system</a>, <a href="https://publications.waset.org/search?q=Facialexpression" title=" Facialexpression"> Facialexpression</a>, <a href="https://publications.waset.org/search?q=Local%20binary%20pattern" title=" Local binary pattern"> Local binary pattern</a>, <a href="https://publications.waset.org/search?q=Uniform%20Histogram" title=" Uniform Histogram"> Uniform Histogram</a> </p> <a href="https://publications.waset.org/13988/human-facial-expression-recognition-using-manfis-model" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/13988/apa" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">APA</a> <a href="https://publications.waset.org/13988/bibtex" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">BibTeX</a> <a href="https://publications.waset.org/13988/chicago" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Chicago</a> <a href="https://publications.waset.org/13988/endnote" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">EndNote</a> <a href="https://publications.waset.org/13988/harvard" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Harvard</a> <a href="https://publications.waset.org/13988/json" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">JSON</a> <a href="https://publications.waset.org/13988/mla" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">MLA</a> <a href="https://publications.waset.org/13988/ris" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">RIS</a> <a href="https://publications.waset.org/13988/xml" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">XML</a> <a href="https://publications.waset.org/13988/iso690" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">ISO 690</a> <a href="https://publications.waset.org/13988.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">2103</span> </span> </div> </div> <div class="card publication-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">849</span> Facial Expressions Animation and Lip Tracking Using Facial Characteristic Points and Deformable Model</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/search?q=Hadi%20Seyedarabi">Hadi Seyedarabi</a>, <a href="https://publications.waset.org/search?q=Ali%20Aghagolzadeh"> Ali Aghagolzadeh</a>, <a href="https://publications.waset.org/search?q=Sohrab%20Khanmohammadi"> Sohrab Khanmohammadi</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Face and facial expressions play essential roles in interpersonal communication. Most of the current works on the facial expression recognition attempt to recognize a small set of the prototypic expressions such as happy, surprise, anger, sad, disgust and fear. However the most of the human emotions are communicated by changes in one or two of discrete features. In this paper, we develop a facial expressions synthesis system, based on the facial characteristic points (FCP's) tracking in the frontal image sequences. Selected FCP's are automatically tracked using a crosscorrelation based optical flow. The proposed synthesis system uses a simple deformable facial features model with a few set of control points that can be tracked in original facial image sequences. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/search?q=Deformable%20face%20model" title="Deformable face model">Deformable face model</a>, <a href="https://publications.waset.org/search?q=facial%20animation" title=" facial animation"> facial animation</a>, <a href="https://publications.waset.org/search?q=facialcharacteristic%20points" title=" facialcharacteristic points"> facialcharacteristic points</a>, <a href="https://publications.waset.org/search?q=optical%20flow." title=" optical flow."> optical flow.</a> </p> <a href="https://publications.waset.org/14909/facial-expressions-animation-and-lip-tracking-using-facial-characteristic-points-and-deformable-model" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/14909/apa" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">APA</a> <a href="https://publications.waset.org/14909/bibtex" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">BibTeX</a> <a href="https://publications.waset.org/14909/chicago" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Chicago</a> <a href="https://publications.waset.org/14909/endnote" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">EndNote</a> <a href="https://publications.waset.org/14909/harvard" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Harvard</a> <a href="https://publications.waset.org/14909/json" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">JSON</a> <a href="https://publications.waset.org/14909/mla" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">MLA</a> <a href="https://publications.waset.org/14909/ris" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">RIS</a> <a href="https://publications.waset.org/14909/xml" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">XML</a> <a href="https://publications.waset.org/14909/iso690" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">ISO 690</a> <a href="https://publications.waset.org/14909.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">1633</span> </span> </div> </div> <div class="card publication-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">848</span> Face Texture Reconstruction for Illumination Variant Face Recognition</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/search?q=Pengfei%20Xiong">Pengfei Xiong</a>, <a href="https://publications.waset.org/search?q=Lei%20Huang"> Lei Huang</a>, <a href="https://publications.waset.org/search?q=Changping%20Liu"> Changping Liu</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In illumination variant face recognition, existing methods extracting face albedo as light normalized image may lead to loss of extensive facial details, with light template discarded. To improve that, a novel approach for realistic facial texture reconstruction by combining original image and albedo image is proposed. First, light subspaces of different identities are established from the given reference face images; then by projecting the original and albedo image into each light subspace respectively, texture reference images with corresponding lighting are reconstructed and two texture subspaces are formed. According to the projections in texture subspaces, facial texture with normal light can be synthesized. Due to the combination of original image, facial details can be preserved with face albedo. In addition, image partition is applied to improve the synthesization performance. Experiments on Yale B and CMUPIE databases demonstrate that this algorithm outperforms the others both in image representation and in face recognition. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/search?q=texture%20reconstruction" title="texture reconstruction">texture reconstruction</a>, <a href="https://publications.waset.org/search?q=illumination" title=" illumination"> illumination</a>, <a href="https://publications.waset.org/search?q=face%20recognition" title=" face recognition"> face recognition</a>, <a href="https://publications.waset.org/search?q=subspaces" title=" subspaces"> subspaces</a> </p> <a href="https://publications.waset.org/11739/face-texture-reconstruction-for-illumination-variant-face-recognition" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/11739/apa" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">APA</a> <a href="https://publications.waset.org/11739/bibtex" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">BibTeX</a> <a href="https://publications.waset.org/11739/chicago" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Chicago</a> <a href="https://publications.waset.org/11739/endnote" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">EndNote</a> <a href="https://publications.waset.org/11739/harvard" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Harvard</a> <a href="https://publications.waset.org/11739/json" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">JSON</a> <a href="https://publications.waset.org/11739/mla" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">MLA</a> <a href="https://publications.waset.org/11739/ris" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">RIS</a> <a href="https://publications.waset.org/11739/xml" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">XML</a> <a href="https://publications.waset.org/11739/iso690" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">ISO 690</a> <a href="https://publications.waset.org/11739.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">1482</span> </span> </div> </div> <div class="card publication-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">847</span> Face Recognition Using Discrete Orthogonal Hahn Moments</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/search?q=Fatima%20Akhmedova">Fatima Akhmedova</a>, <a href="https://publications.waset.org/search?q=Simon%20Liao"> Simon Liao</a> </p> <p class="card-text"><strong>Abstract:</strong></p> One of the most critical decision points in the design of a face recognition system is the choice of an appropriate face representation. Effective feature descriptors are expected to convey sufficient, invariant and non-redundant facial information. In this work we propose a set of Hahn moments as a new approach for feature description. Hahn moments have been widely used in image analysis due to their invariance, nonredundancy and the ability to extract features either globally and locally. To assess the applicability of Hahn moments to Face Recognition we conduct two experiments on the Olivetti Research Laboratory (ORL) database and University of Notre-Dame (UND) X1 biometric collection. Fusion of the global features along with the features from local facial regions are used as an input for the conventional k-NN classifier. The method reaches an accuracy of 93% of correctly recognized subjects for the ORL database and 94% for the UND database. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/search?q=Face%20Recognition" title="Face Recognition">Face Recognition</a>, <a href="https://publications.waset.org/search?q=Hahn%20moments" title=" Hahn moments"> Hahn moments</a>, <a href="https://publications.waset.org/search?q=Recognition-by-parts" title=" Recognition-by-parts"> Recognition-by-parts</a>, <a href="https://publications.waset.org/search?q=Time-lapse." title=" Time-lapse."> Time-lapse.</a> </p> <a href="https://publications.waset.org/10002256/face-recognition-using-discrete-orthogonal-hahn-moments" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/10002256/apa" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">APA</a> <a href="https://publications.waset.org/10002256/bibtex" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">BibTeX</a> <a href="https://publications.waset.org/10002256/chicago" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Chicago</a> <a href="https://publications.waset.org/10002256/endnote" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">EndNote</a> <a href="https://publications.waset.org/10002256/harvard" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Harvard</a> <a href="https://publications.waset.org/10002256/json" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">JSON</a> <a href="https://publications.waset.org/10002256/mla" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">MLA</a> <a href="https://publications.waset.org/10002256/ris" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">RIS</a> <a href="https://publications.waset.org/10002256/xml" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">XML</a> <a href="https://publications.waset.org/10002256/iso690" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">ISO 690</a> <a href="https://publications.waset.org/10002256.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">1777</span> </span> </div> </div> <div class="card publication-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">846</span> Methods of Geodesic Distance in Two-Dimensional Face Recognition</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/search?q=Rachid%20Ahdid">Rachid Ahdid</a>, <a href="https://publications.waset.org/search?q=Said%20Safi"> Said Safi</a>, <a href="https://publications.waset.org/search?q=Bouzid%20Manaut"> Bouzid Manaut</a> </p> <p class="card-text"><strong>Abstract:</strong></p> <p>In this paper, we present a comparative study of three methods of 2D face recognition system such as: Iso-Geodesic Curves (IGC), Geodesic Distance (GD) and Geodesic-Intensity Histogram (GIH). These approaches are based on computing of geodesic distance between points of facial surface and between facial curves. In this study we represented the image at gray level as a 2D surface in a 3D space, with the third coordinate proportional to the intensity values of pixels. In the classifying step, we use: Neural Networks (NN), K-Nearest Neighbor (KNN) and Support Vector Machines (SVM). The images used in our experiments are from two wellknown databases of face images ORL and YaleB. ORL data base was used to evaluate the performance of methods under conditions where the pose and sample size are varied, and the database YaleB was used to examine the performance of the systems when the facial expressions and lighting are varied.</p> <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/search?q=2D%20face%20recognition" title="2D face recognition">2D face recognition</a>, <a href="https://publications.waset.org/search?q=Geodesic%20distance" title=" Geodesic distance"> Geodesic distance</a>, <a href="https://publications.waset.org/search?q=Iso-Geodesic%20Curves" title=" Iso-Geodesic Curves"> Iso-Geodesic Curves</a>, <a href="https://publications.waset.org/search?q=Geodesic-Intensity%20Histogram" title=" Geodesic-Intensity Histogram"> Geodesic-Intensity Histogram</a>, <a href="https://publications.waset.org/search?q=facial%20surface" title=" facial surface"> facial surface</a>, <a href="https://publications.waset.org/search?q=Neural%20Networks" title=" Neural Networks"> Neural Networks</a>, <a href="https://publications.waset.org/search?q=K-Nearest%20Neighbor" title=" K-Nearest Neighbor"> K-Nearest Neighbor</a>, <a href="https://publications.waset.org/search?q=Support%20Vector%20Machines." title=" Support Vector Machines."> Support Vector Machines.</a> </p> <a href="https://publications.waset.org/10002616/methods-of-geodesic-distance-in-two-dimensional-face-recognition" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/10002616/apa" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">APA</a> <a href="https://publications.waset.org/10002616/bibtex" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">BibTeX</a> <a href="https://publications.waset.org/10002616/chicago" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Chicago</a> <a href="https://publications.waset.org/10002616/endnote" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">EndNote</a> <a href="https://publications.waset.org/10002616/harvard" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Harvard</a> <a href="https://publications.waset.org/10002616/json" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">JSON</a> <a href="https://publications.waset.org/10002616/mla" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">MLA</a> <a href="https://publications.waset.org/10002616/ris" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">RIS</a> <a href="https://publications.waset.org/10002616/xml" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">XML</a> <a href="https://publications.waset.org/10002616/iso690" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">ISO 690</a> <a href="https://publications.waset.org/10002616.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">1815</span> </span> </div> </div> <div class="card publication-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">845</span> An Improved Face Recognition Algorithm Using Histogram-Based Features in Spatial and Frequency Domains</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/search?q=Qiu%20Chen">Qiu Chen</a>, <a href="https://publications.waset.org/search?q=Koji%20Kotani"> Koji Kotani</a>, <a href="https://publications.waset.org/search?q=Feifei%20Lee"> Feifei Lee</a>, <a href="https://publications.waset.org/search?q=Tadahiro%20Ohmi"> Tadahiro Ohmi</a> </p> <p class="card-text"><strong>Abstract:</strong></p> <p>In this paper, we propose an improved face recognition algorithm using histogram-based features in spatial and frequency domains. For adding spatial information of the face to improve recognition performance, a region-division (RD) method is utilized. The facial area is firstly divided into several regions, then feature vectors of each facial part are generated by Binary Vector Quantization (BVQ) histogram using DCT coefficients in low frequency domains, as well as Local Binary Pattern (LBP) histogram in spatial domain. Recognition results with different regions are first obtained separately and then fused by weighted averaging. Publicly available ORL database is used for the evaluation of our proposed algorithm, which is consisted of 40 subjects with 10 images per subject containing variations in lighting, posing, and expressions. It is demonstrated that face recognition using RD method can achieve much higher recognition rate.</p> <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/search?q=Face%20recognition" title="Face recognition">Face recognition</a>, <a href="https://publications.waset.org/search?q=Binary%20vector%20quantization%20%28BVQ%29" title=" Binary vector quantization (BVQ)"> Binary vector quantization (BVQ)</a>, <a href="https://publications.waset.org/search?q=Local%20Binary%20Patterns%20%28LBP%29" title=" Local Binary Patterns (LBP)"> Local Binary Patterns (LBP)</a>, <a href="https://publications.waset.org/search?q=DCT%20coefficients." title=" DCT coefficients."> DCT coefficients.</a> </p> <a href="https://publications.waset.org/10003903/an-improved-face-recognition-algorithm-using-histogram-based-features-in-spatial-and-frequency-domains" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/10003903/apa" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">APA</a> <a href="https://publications.waset.org/10003903/bibtex" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">BibTeX</a> <a href="https://publications.waset.org/10003903/chicago" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Chicago</a> <a href="https://publications.waset.org/10003903/endnote" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">EndNote</a> <a href="https://publications.waset.org/10003903/harvard" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Harvard</a> <a href="https://publications.waset.org/10003903/json" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">JSON</a> <a href="https://publications.waset.org/10003903/mla" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">MLA</a> <a href="https://publications.waset.org/10003903/ris" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">RIS</a> <a href="https://publications.waset.org/10003903/xml" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">XML</a> <a href="https://publications.waset.org/10003903/iso690" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">ISO 690</a> <a href="https://publications.waset.org/10003903.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">1621</span> </span> </div> </div> <div class="card publication-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">844</span> Walsh-Hadamard Transform for Facial Feature Extraction in Face Recognition</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/search?q=M.%20Hassan">M. Hassan</a>, <a href="https://publications.waset.org/search?q=I.%20Osman"> I. Osman</a>, <a href="https://publications.waset.org/search?q=M.%20Yahia"> M. Yahia</a> </p> <p class="card-text"><strong>Abstract:</strong></p> <p>This Paper proposes a new facial feature extraction approach, Wash-Hadamard Transform (WHT). This approach is based on correlation between local pixels of the face image. Its primary advantage is the simplicity of its computation. The paper compares the proposed approach, WHT, which was traditionally used in data compression with two other known approaches: the Principal Component Analysis (PCA) and the Discrete Cosine Transform (DCT) using the face database of Olivetti Research Laboratory (ORL). In spite of its simple computation, the proposed algorithm (WHT) gave very close results to those obtained by the PCA and DCT. This paper initiates the research into WHT and the family of frequency transforms and examines their suitability for feature extraction in face recognition applications.</p> <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/search?q=Face%20Recognition" title="Face Recognition">Face Recognition</a>, <a href="https://publications.waset.org/search?q=Facial%20Feature%20Extraction" title=" Facial Feature Extraction"> Facial Feature Extraction</a>, <a href="https://publications.waset.org/search?q=Principal%20Component%20Analysis" title=" Principal Component Analysis"> Principal Component Analysis</a>, <a href="https://publications.waset.org/search?q=and%20Discrete%20Cosine%20Transform" title=" and Discrete Cosine Transform"> and Discrete Cosine Transform</a>, <a href="https://publications.waset.org/search?q=Wash-Hadamard%20Transform." title=" Wash-Hadamard Transform."> Wash-Hadamard Transform.</a> </p> <a href="https://publications.waset.org/2475/walsh-hadamard-transform-for-facial-feature-extraction-in-face-recognition" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/2475/apa" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">APA</a> <a href="https://publications.waset.org/2475/bibtex" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">BibTeX</a> <a href="https://publications.waset.org/2475/chicago" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Chicago</a> <a href="https://publications.waset.org/2475/endnote" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">EndNote</a> <a href="https://publications.waset.org/2475/harvard" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Harvard</a> <a href="https://publications.waset.org/2475/json" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">JSON</a> <a href="https://publications.waset.org/2475/mla" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">MLA</a> <a href="https://publications.waset.org/2475/ris" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">RIS</a> <a href="https://publications.waset.org/2475/xml" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">XML</a> <a href="https://publications.waset.org/2475/iso690" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">ISO 690</a> <a href="https://publications.waset.org/2475.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">2571</span> </span> </div> </div> <div class="card publication-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">843</span> Improved Feature Extraction Technique for Handling Occlusion in Automatic Facial Expression Recognition</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/search?q=Khadijat%20T.%20Bamigbade">Khadijat T. Bamigbade</a>, <a href="https://publications.waset.org/search?q=Olufade%20F.%20W.%20Onifade"> Olufade F. W. Onifade</a> </p> <p class="card-text"><strong>Abstract:</strong></p> <p>The field of automatic facial expression analysis has been an active research area in the last two decades. Its vast applicability in various domains has drawn so much attention into developing techniques and dataset that mirror real life scenarios. Many techniques such as Local Binary Patterns and its variants (CLBP, LBP-TOP) and lately, deep learning techniques, have been used for facial expression recognition. However, the problem of occlusion has not been sufficiently handled, making their results not applicable in real life situations. This paper develops a simple, yet highly efficient method tagged Local Binary Pattern-Histogram of Gradient (LBP-HOG) with occlusion detection in face image, using a multi-class SVM for Action Unit and in turn expression recognition. Our method was evaluated on three publicly available datasets which are JAFFE, CK, SFEW. Experimental results showed that our approach performed considerably well when compared with state-of-the-art algorithms and gave insight to occlusion detection as a key step to handling expression in wild.</p> <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/search?q=Automatic%20facial%20expression%20analysis" title="Automatic facial expression analysis">Automatic facial expression analysis</a>, <a href="https://publications.waset.org/search?q=local%20binary%20pattern" title=" local binary pattern"> local binary pattern</a>, <a href="https://publications.waset.org/search?q=LBP-HOG" title=" LBP-HOG"> LBP-HOG</a>, <a href="https://publications.waset.org/search?q=occlusion%20detection." title=" occlusion detection."> occlusion detection.</a> </p> <a href="https://publications.waset.org/10010270/improved-feature-extraction-technique-for-handling-occlusion-in-automatic-facial-expression-recognition" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/10010270/apa" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">APA</a> <a href="https://publications.waset.org/10010270/bibtex" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">BibTeX</a> <a href="https://publications.waset.org/10010270/chicago" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Chicago</a> <a href="https://publications.waset.org/10010270/endnote" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">EndNote</a> <a href="https://publications.waset.org/10010270/harvard" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Harvard</a> <a href="https://publications.waset.org/10010270/json" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">JSON</a> <a href="https://publications.waset.org/10010270/mla" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">MLA</a> <a href="https://publications.waset.org/10010270/ris" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">RIS</a> <a href="https://publications.waset.org/10010270/xml" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">XML</a> <a href="https://publications.waset.org/10010270/iso690" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">ISO 690</a> <a href="https://publications.waset.org/10010270.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">783</span> </span> </div> </div> <div class="card publication-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">842</span> Face Localization and Recognition in Varied Expressions and Illumination</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/search?q=Hui-Yu%20Huang">Hui-Yu Huang</a>, <a href="https://publications.waset.org/search?q=Shih-Hang%20Hsu"> Shih-Hang Hsu</a> </p> <p class="card-text"><strong>Abstract:</strong></p> <p>In this paper, we propose a robust scheme to work face alignment and recognition under various influences. For face representation, illumination influence and variable expressions are the important factors, especially the accuracy of facial localization and face recognition. In order to solve those of factors, we propose a robust approach to overcome these problems. This approach consists of two phases. One phase is preprocessed for face images by means of the proposed illumination normalization method. The location of facial features can fit more efficient and fast based on the proposed image blending. On the other hand, based on template matching, we further improve the active shape models (called as IASM) to locate the face shape more precise which can gain the recognized rate in the next phase. The other phase is to process feature extraction by using principal component analysis and face recognition by using support vector machine classifiers. The results show that this proposed method can obtain good facial localization and face recognition with varied illumination and local distortion.</p> <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/search?q=Gabor%20filter" title="Gabor filter">Gabor filter</a>, <a href="https://publications.waset.org/search?q=improved%20active%20shape%20model%20%28IASM%29" title=" improved active shape model (IASM)"> improved active shape model (IASM)</a>, <a href="https://publications.waset.org/search?q=principal%20component%20analysis%20%28PCA%29" title=" principal component analysis (PCA)"> principal component analysis (PCA)</a>, <a href="https://publications.waset.org/search?q=face%20alignment" title=" face alignment"> face alignment</a>, <a href="https://publications.waset.org/search?q=face%20recognition" title=" face recognition"> face recognition</a>, <a href="https://publications.waset.org/search?q=support%20vector%20machine%20%28SVM%29" title=" support vector machine (SVM)"> support vector machine (SVM)</a> </p> <a href="https://publications.waset.org/5662/face-localization-and-recognition-in-varied-expressions-and-illumination" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/5662/apa" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">APA</a> <a href="https://publications.waset.org/5662/bibtex" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">BibTeX</a> <a href="https://publications.waset.org/5662/chicago" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Chicago</a> <a href="https://publications.waset.org/5662/endnote" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">EndNote</a> <a href="https://publications.waset.org/5662/harvard" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Harvard</a> <a href="https://publications.waset.org/5662/json" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">JSON</a> <a href="https://publications.waset.org/5662/mla" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">MLA</a> <a href="https://publications.waset.org/5662/ris" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">RIS</a> <a href="https://publications.waset.org/5662/xml" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">XML</a> <a href="https://publications.waset.org/5662/iso690" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">ISO 690</a> <a href="https://publications.waset.org/5662.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">1491</span> </span> </div> </div> <div class="card publication-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">841</span> Facial Emotion Recognition with Convolutional Neural Network Based Architecture</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/search?q=Koray%20U.%20Erbas">Koray U. Erbas</a> </p> <p class="card-text"><strong>Abstract:</strong></p> <p>Neural networks are appealing for many applications since they are able to learn complex non-linear relationships between input and output data. As the number of neurons and layers in a neural network increase, it is possible to represent more complex relationships with automatically extracted features. Nowadays Deep Neural Networks (DNNs) are widely used in Computer Vision problems such as; classification, object detection, segmentation image editing etc. In this work, Facial Emotion Recognition task is performed by proposed Convolutional Neural Network (CNN)-based DNN architecture using FER2013 Dataset. Moreover, the effects of different hyperparameters (activation function, kernel size, initializer, batch size and network size) are investigated and ablation study results for Pooling Layer, Dropout and Batch Normalization are presented.</p> <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/search?q=Convolutional%20Neural%20Network" title="Convolutional Neural Network">Convolutional Neural Network</a>, <a href="https://publications.waset.org/search?q=Deep%20Learning" title=" Deep Learning"> Deep Learning</a>, <a href="https://publications.waset.org/search?q=Deep%20Learning%20Based%20FER" title=" Deep Learning Based FER"> Deep Learning Based FER</a>, <a href="https://publications.waset.org/search?q=Facial%20Emotion%20Recognition." title=" Facial Emotion Recognition."> Facial Emotion Recognition.</a> </p> <a href="https://publications.waset.org/10011791/facial-emotion-recognition-with-convolutional-neural-network-based-architecture" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/10011791/apa" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">APA</a> <a href="https://publications.waset.org/10011791/bibtex" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">BibTeX</a> <a href="https://publications.waset.org/10011791/chicago" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Chicago</a> <a href="https://publications.waset.org/10011791/endnote" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">EndNote</a> <a href="https://publications.waset.org/10011791/harvard" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Harvard</a> <a href="https://publications.waset.org/10011791/json" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">JSON</a> <a href="https://publications.waset.org/10011791/mla" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">MLA</a> <a href="https://publications.waset.org/10011791/ris" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">RIS</a> <a href="https://publications.waset.org/10011791/xml" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">XML</a> <a href="https://publications.waset.org/10011791/iso690" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">ISO 690</a> <a href="https://publications.waset.org/10011791.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">1374</span> </span> </div> </div> <div class="card publication-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">840</span> Make Up Flash: Web Application for the Improvement of Physical Appearance in Images Based on Recognition Methods</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/search?q=Stefania%20Arguelles%20Reyes">Stefania Arguelles Reyes</a>, <a href="https://publications.waset.org/search?q=Octavio%20Jos%C3%A9%20Salcedo%20Parra"> Octavio Jos茅 Salcedo Parra</a>, <a href="https://publications.waset.org/search?q=Alberto%20Acosta%20L%C3%B3pez"> Alberto Acosta L贸pez</a> </p> <p class="card-text"><strong>Abstract:</strong></p> <p>This paper presents a web application for the improvement of images through recognition. The web application is based on the analysis of picture-based recognition methods that allow an improvement on the physical appearance of people posting in social networks. The basis relies on the study of tools that can correct or improve some features of the face, with the help of a wide collection of user images taken as reference to build a facial profile. Automatic facial profiling can be achieved with a deeper study of the Object Detection Library. It was possible to improve the initial images with the help of MATLAB and its filtering functions. The user can have a direct interaction with the program and manually adjust his preferences.</p> <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/search?q=Application" title="Application">Application</a>, <a href="https://publications.waset.org/search?q=MATLAB" title=" MATLAB"> MATLAB</a>, <a href="https://publications.waset.org/search?q=make%20up" title=" make up"> make up</a>, <a href="https://publications.waset.org/search?q=model" title=" model"> model</a>, <a href="https://publications.waset.org/search?q=recognition." title=" recognition."> recognition.</a> </p> <a href="https://publications.waset.org/10010639/make-up-flash-web-application-for-the-improvement-of-physical-appearance-in-images-based-on-recognition-methods" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/10010639/apa" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">APA</a> <a href="https://publications.waset.org/10010639/bibtex" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">BibTeX</a> <a href="https://publications.waset.org/10010639/chicago" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Chicago</a> <a href="https://publications.waset.org/10010639/endnote" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">EndNote</a> <a href="https://publications.waset.org/10010639/harvard" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Harvard</a> <a href="https://publications.waset.org/10010639/json" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">JSON</a> <a href="https://publications.waset.org/10010639/mla" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">MLA</a> <a href="https://publications.waset.org/10010639/ris" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">RIS</a> <a href="https://publications.waset.org/10010639/xml" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">XML</a> <a href="https://publications.waset.org/10010639/iso690" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">ISO 690</a> <a href="https://publications.waset.org/10010639.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">571</span> </span> </div> </div> <div class="card publication-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">839</span> Facial Expressions Recognition from Complex Background using Face Context and Adaptively Weighted sub-Pattern PCA</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/search?q=Md.%20Zahangir%20Alom">Md. Zahangir Alom</a>, <a href="https://publications.waset.org/search?q=Mei-Lan%20Piao"> Mei-Lan Piao</a>, <a href="https://publications.waset.org/search?q=Md.%20Ashraful%20Alam"> Md. Ashraful Alam</a>, <a href="https://publications.waset.org/search?q=Nam%20Kim"> Nam Kim</a>, <a href="https://publications.waset.org/search?q=Jae-Hyeung%20Park"> Jae-Hyeung Park</a> </p> <p class="card-text"><strong>Abstract:</strong></p> <p>A new approach for facial expressions recognition based on face context and adaptively weighted sub-pattern PCA (Aw-SpPCA) has been presented in this paper. The facial region and others part of the body have been segmented from the complex environment based on skin color model. An algorithm has been proposed to accurate detection of face region from the segmented image based on constant ratio of height and width of face (&delta;= 1.618). The paper also discusses on new concept to detect the eye and mouth position. The desired part of the face has been cropped to analysis the expression of a person. Unlike PCA based on a whole image pattern, Aw-SpPCA operates directly on its sub patterns partitioned from an original whole pattern and separately extracts features from them. Aw-SpPCA can adaptively compute the contributions of each part and a classification task in order to enhance the robustness to both expression and illumination variations. Experiments on single standard face with five types of facial expression database shows that the proposed method is competitive.</p> <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/search?q=Aw-SpPC" title="Aw-SpPC">Aw-SpPC</a>, <a href="https://publications.waset.org/search?q=Expressoin%20Recognition" title=" Expressoin Recognition"> Expressoin Recognition</a>, <a href="https://publications.waset.org/search?q=Face%20context" title=" Face context"> Face context</a>, <a href="https://publications.waset.org/search?q=Face%20Detection" title=" Face Detection"> Face Detection</a>, <a href="https://publications.waset.org/search?q=PCA" title=" PCA"> PCA</a> </p> <a href="https://publications.waset.org/7541/facial-expressions-recognition-from-complex-background-using-face-context-and-adaptively-weighted-sub-pattern-pca" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/7541/apa" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">APA</a> <a href="https://publications.waset.org/7541/bibtex" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">BibTeX</a> <a href="https://publications.waset.org/7541/chicago" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Chicago</a> <a href="https://publications.waset.org/7541/endnote" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">EndNote</a> <a href="https://publications.waset.org/7541/harvard" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Harvard</a> <a href="https://publications.waset.org/7541/json" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">JSON</a> <a href="https://publications.waset.org/7541/mla" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">MLA</a> <a href="https://publications.waset.org/7541/ris" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">RIS</a> <a href="https://publications.waset.org/7541/xml" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">XML</a> <a href="https://publications.waset.org/7541/iso690" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">ISO 690</a> <a href="https://publications.waset.org/7541.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">1721</span> </span> </div> </div> <div class="card publication-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">838</span> A Neuron Model of Facial Recognition and Detection of an Authorized Entity Using Machine Learning System</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/search?q=J.%20K.%20Adedeji">J. K. Adedeji</a>, <a href="https://publications.waset.org/search?q=M.%20O.%20Oyekanmi"> M. O. Oyekanmi</a> </p> <p class="card-text"><strong>Abstract:</strong></p> <p>This paper has critically examined the use of Machine Learning procedures in curbing unauthorized access into valuable areas of an organization. The use of passwords, pin codes, user&rsquo;s identification in recent times has been partially successful in curbing crimes involving identities, hence the need for the design of a system which incorporates biometric characteristics such as DNA and pattern recognition of variations in facial expressions. The facial model used is the OpenCV library which is based on the use of certain physiological features, the Raspberry Pi 3 module is used to compile the OpenCV library, which extracts and stores the detected faces into the datasets directory through the use of camera. The model is trained with 50 epoch run in the database and recognized by the Local Binary Pattern Histogram (LBPH) recognizer contained in the OpenCV. The training algorithm used by the neural network is back propagation coded using python algorithmic language with 200 epoch runs to identify specific resemblance in the exclusive OR (XOR) output neurons. The research however confirmed that physiological parameters are better effective measures to curb crimes relating to identities.</p> <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/search?q=Biometric%20characters" title="Biometric characters">Biometric characters</a>, <a href="https://publications.waset.org/search?q=facial%20recognition" title=" facial recognition"> facial recognition</a>, <a href="https://publications.waset.org/search?q=neural%20network" title=" neural network"> neural network</a>, <a href="https://publications.waset.org/search?q=OpenCV." title=" OpenCV."> OpenCV.</a> </p> <a href="https://publications.waset.org/10009440/a-neuron-model-of-facial-recognition-and-detection-of-an-authorized-entity-using-machine-learning-system" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/10009440/apa" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">APA</a> <a href="https://publications.waset.org/10009440/bibtex" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">BibTeX</a> <a href="https://publications.waset.org/10009440/chicago" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Chicago</a> <a href="https://publications.waset.org/10009440/endnote" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">EndNote</a> <a href="https://publications.waset.org/10009440/harvard" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Harvard</a> <a href="https://publications.waset.org/10009440/json" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">JSON</a> <a href="https://publications.waset.org/10009440/mla" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">MLA</a> <a href="https://publications.waset.org/10009440/ris" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">RIS</a> <a href="https://publications.waset.org/10009440/xml" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">XML</a> <a href="https://publications.waset.org/10009440/iso690" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">ISO 690</a> <a href="https://publications.waset.org/10009440.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">695</span> </span> </div> </div> <div class="card publication-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">837</span> Fusion Classifier for Open-Set Face Recognition with Pose Variations</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/search?q=Gee-Sern%20Jison%20Hsu">Gee-Sern Jison Hsu</a> </p> <p class="card-text"><strong>Abstract:</strong></p> <p>A fusion classifier composed of two modules, one made by a hidden Markov model (HMM) and the other by a support vector machine (SVM), is proposed to recognize faces with pose variations in open-set recognition settings. The HMM module captures the evolution of facial features across a subject-s face using the subject-s facial images only, without referencing to the faces of others. Because of the captured evolutionary process of facial features, the HMM module retains certain robustness against pose variations, yielding low false rejection rates (FRR) for recognizing faces across poses. This is, however, on the price of poor false acceptance rates (FAR) when recognizing other faces because it is built upon withinclass samples only. The SVM module in the proposed model is developed following a special design able to substantially diminish the FAR and further lower down the FRR. The proposed fusion classifier has been evaluated in performance using the CMU PIE database, and proven effective for open-set face recognition with pose variations. Experiments have also shown that it outperforms the face classifier made by HMM or SVM alone.</p> <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/search?q=Face%20recognition" title="Face recognition">Face recognition</a>, <a href="https://publications.waset.org/search?q=open-set%20identification" title=" open-set identification"> open-set identification</a>, <a href="https://publications.waset.org/search?q=hidden%20Markov%20model" title=" hidden Markov model"> hidden Markov model</a>, <a href="https://publications.waset.org/search?q=support%20vector%20machines." title=" support vector machines."> support vector machines.</a> </p> <a href="https://publications.waset.org/5636/fusion-classifier-for-open-set-face-recognition-with-pose-variations" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/5636/apa" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">APA</a> <a href="https://publications.waset.org/5636/bibtex" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">BibTeX</a> <a href="https://publications.waset.org/5636/chicago" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Chicago</a> <a href="https://publications.waset.org/5636/endnote" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">EndNote</a> <a href="https://publications.waset.org/5636/harvard" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Harvard</a> <a href="https://publications.waset.org/5636/json" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">JSON</a> <a href="https://publications.waset.org/5636/mla" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">MLA</a> <a href="https://publications.waset.org/5636/ris" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">RIS</a> <a href="https://publications.waset.org/5636/xml" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">XML</a> <a href="https://publications.waset.org/5636/iso690" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">ISO 690</a> <a href="https://publications.waset.org/5636.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">1692</span> </span> </div> </div> <div class="card publication-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">836</span> A Structural Support Vector Machine Approach for Biometric Recognition</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/search?q=Vishal%20Awasthi">Vishal Awasthi</a>, <a href="https://publications.waset.org/search?q=Atul%20Kumar%20Agnihotri"> Atul Kumar Agnihotri</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Face is a non-intrusive strong biometrics for identification of original and dummy facial by different artificial means. Face recognition is extremely important in the contexts of computer vision, psychology, surveillance, pattern recognition, neural network, content based video processing. The availability of a widespread face database is crucial to test the performance of these face recognition algorithms. The openly available face databases include face images with a wide range of poses, illumination, gestures and face occlusions but there is no dummy face database accessible in public domain. This paper presents a face detection algorithm based on the image segmentation in terms of distance from a fixed point and template matching methods. This proposed work is having the most appropriate number of nodal points resulting in most appropriate outcomes in terms of face recognition and detection. The time taken to identify and extract distinctive facial features is improved in the range of 90 to 110 sec. with the increment of efficiency by 3%. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/search?q=Face%20recognition" title="Face recognition">Face recognition</a>, <a href="https://publications.waset.org/search?q=Principal%20Component%20Analysis" title=" Principal Component Analysis"> Principal Component Analysis</a>, <a href="https://publications.waset.org/search?q=PCA" title=" PCA"> PCA</a>, <a href="https://publications.waset.org/search?q=Linear%20Discriminant%20Analysis" title=" Linear Discriminant Analysis"> Linear Discriminant Analysis</a>, <a href="https://publications.waset.org/search?q=LDA" title=" LDA"> LDA</a>, <a href="https://publications.waset.org/search?q=Improved%20Support%0D%0AVector%20Machine" title=" Improved Support Vector Machine"> Improved Support Vector Machine</a>, <a href="https://publications.waset.org/search?q=iSVM" title=" iSVM"> iSVM</a>, <a href="https://publications.waset.org/search?q=elastic%20bunch%20mapping%20technique." title=" elastic bunch mapping technique."> elastic bunch mapping technique.</a> </p> <a href="https://publications.waset.org/10011989/a-structural-support-vector-machine-approach-for-biometric-recognition" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/10011989/apa" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">APA</a> <a href="https://publications.waset.org/10011989/bibtex" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">BibTeX</a> <a href="https://publications.waset.org/10011989/chicago" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Chicago</a> <a href="https://publications.waset.org/10011989/endnote" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">EndNote</a> <a href="https://publications.waset.org/10011989/harvard" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Harvard</a> <a href="https://publications.waset.org/10011989/json" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">JSON</a> <a href="https://publications.waset.org/10011989/mla" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">MLA</a> <a href="https://publications.waset.org/10011989/ris" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">RIS</a> <a href="https://publications.waset.org/10011989/xml" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">XML</a> <a href="https://publications.waset.org/10011989/iso690" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">ISO 690</a> <a href="https://publications.waset.org/10011989.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">494</span> </span> </div> </div> <div class="card publication-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">835</span> Curvelet Features with Mouth and Face Edge Ratios for Facial Expression Identification</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/search?q=S.%20Kherchaoui">S. Kherchaoui</a>, <a href="https://publications.waset.org/search?q=A.%20Houacine"> A. Houacine</a> </p> <p class="card-text"><strong>Abstract:</strong></p> <p>This paper presents a facial expression recognition system. It performs identification and classification of the seven basic expressions; happy, surprise, fear, disgust, sadness, anger, and neutral states. It consists of three main parts. The first one is the detection of a face and the corresponding facial features to extract the most expressive portion of the face, followed by a normalization of the region of interest. Then calculus of curvelet coefficients is performed with dimensionality reduction through principal component analysis. The resulting coefficients are combined with two ratios; mouth ratio and face edge ratio to constitute the whole feature vector. The third step is the classification of the emotional state using the SVM method in the feature space.</p> <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/search?q=Facial%20expression%20identification" title="Facial expression identification">Facial expression identification</a>, <a href="https://publications.waset.org/search?q=curvelet%20coefficients" title=" curvelet coefficients"> curvelet coefficients</a>, <a href="https://publications.waset.org/search?q=support%20vector%20machine%20%28SVM%29." title=" support vector machine (SVM)."> support vector machine (SVM).</a> </p> <a href="https://publications.waset.org/9998501/curvelet-features-with-mouth-and-face-edge-ratios-for-facial-expression-identification" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/9998501/apa" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">APA</a> <a href="https://publications.waset.org/9998501/bibtex" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">BibTeX</a> <a href="https://publications.waset.org/9998501/chicago" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Chicago</a> <a href="https://publications.waset.org/9998501/endnote" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">EndNote</a> <a href="https://publications.waset.org/9998501/harvard" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Harvard</a> <a href="https://publications.waset.org/9998501/json" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">JSON</a> <a href="https://publications.waset.org/9998501/mla" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">MLA</a> <a href="https://publications.waset.org/9998501/ris" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">RIS</a> <a href="https://publications.waset.org/9998501/xml" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">XML</a> <a href="https://publications.waset.org/9998501/iso690" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">ISO 690</a> <a href="https://publications.waset.org/9998501.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">1842</span> </span> </div> </div> <div class="card publication-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">834</span> Biometric Methods and Implementation of Algorithms</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/search?q=Parvinder%20S.%20Sandhu">Parvinder S. Sandhu</a>, <a href="https://publications.waset.org/search?q=Iqbaldeep%20Kaur"> Iqbaldeep Kaur</a>, <a href="https://publications.waset.org/search?q=Amit%20Verma"> Amit Verma</a>, <a href="https://publications.waset.org/search?q=Samriti%20Jindal"> Samriti Jindal</a>, <a href="https://publications.waset.org/search?q=Shailendra%20Singh"> Shailendra Singh</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Biometric measures of one kind or another have been used to identify people since ancient times, with handwritten signatures, facial features, and fingerprints being the traditional methods. Of late, Systems have been built that automate the task of recognition, using these methods and newer ones, such as hand geometry, voiceprints and iris patterns. These systems have different strengths and weaknesses. This work is a two-section composition. In the starting section, we present an analytical and comparative study of common biometric techniques. The performance of each of them has been viewed and then tabularized as a result. The latter section involves the actual implementation of the techniques under consideration that has been done using a state of the art tool called, MATLAB. This tool aids to effectively portray the corresponding results and effects. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/search?q=Matlab" title="Matlab">Matlab</a>, <a href="https://publications.waset.org/search?q=Recognition" title=" Recognition"> Recognition</a>, <a href="https://publications.waset.org/search?q=Facial%20Vectors" title=" Facial Vectors"> Facial Vectors</a>, <a href="https://publications.waset.org/search?q=Functions." title=" Functions."> Functions.</a> </p> <a href="https://publications.waset.org/10117/biometric-methods-and-implementation-of-algorithms" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/10117/apa" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">APA</a> <a href="https://publications.waset.org/10117/bibtex" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">BibTeX</a> <a href="https://publications.waset.org/10117/chicago" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Chicago</a> <a href="https://publications.waset.org/10117/endnote" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">EndNote</a> <a href="https://publications.waset.org/10117/harvard" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Harvard</a> <a href="https://publications.waset.org/10117/json" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">JSON</a> <a href="https://publications.waset.org/10117/mla" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">MLA</a> <a href="https://publications.waset.org/10117/ris" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">RIS</a> <a href="https://publications.waset.org/10117/xml" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">XML</a> <a href="https://publications.waset.org/10117/iso690" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">ISO 690</a> <a href="https://publications.waset.org/10117.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">3192</span> </span> </div> </div> <div class="card publication-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">833</span> Automatic Facial Skin Segmentation Using Possibilistic C-Means Algorithm for Evaluation of Facial Surgeries</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/search?q=Elham%20Alaee">Elham Alaee</a>, <a href="https://publications.waset.org/search?q=Mousa%20Shamsi"> Mousa Shamsi</a>, <a href="https://publications.waset.org/search?q=Hossein%20Ahmadi"> Hossein Ahmadi</a>, <a href="https://publications.waset.org/search?q=Soroosh%20Nazem"> Soroosh Nazem</a>, <a href="https://publications.waset.org/search?q=Mohammadhossein%20Sedaaghi"> Mohammadhossein Sedaaghi</a> </p> <p class="card-text"><strong>Abstract:</strong></p> <p>Human face has a fundamental role in the appearance of individuals. So the importance of facial surgeries is undeniable. Thus, there is a need for the appropriate and accurate facial skin segmentation in order to extract different features. Since Fuzzy CMeans (FCM) clustering algorithm doesn&rsquo;t work appropriately for noisy images and outliers, in this paper we exploit Possibilistic CMeans (PCM) algorithm in order to segment the facial skin. For this purpose, first, we convert facial images from RGB to YCbCr color space. To evaluate performance of the proposed algorithm, the database of Sahand University of Technology, Tabriz, Iran was used. In order to have a better understanding from the proposed algorithm; FCM and Expectation-Maximization (EM) algorithms are also used for facial skin segmentation. The proposed method shows better results than the other segmentation methods. Results include misclassification error (0.032) and the region&rsquo;s area error (0.045) for the proposed algorithm.</p> <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/search?q=Facial%20image" title="Facial image">Facial image</a>, <a href="https://publications.waset.org/search?q=segmentation" title=" segmentation"> segmentation</a>, <a href="https://publications.waset.org/search?q=PCM" title=" PCM"> PCM</a>, <a href="https://publications.waset.org/search?q=FCM" title=" FCM"> FCM</a>, <a href="https://publications.waset.org/search?q=skin%20error" title=" skin error"> skin error</a>, <a href="https://publications.waset.org/search?q=facial%20surgery." title=" facial surgery."> facial surgery.</a> </p> <a href="https://publications.waset.org/9998526/automatic-facial-skin-segmentation-using-possibilistic-c-means-algorithm-for-evaluation-of-facial-surgeries" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/9998526/apa" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">APA</a> <a href="https://publications.waset.org/9998526/bibtex" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">BibTeX</a> <a href="https://publications.waset.org/9998526/chicago" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Chicago</a> <a href="https://publications.waset.org/9998526/endnote" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">EndNote</a> <a href="https://publications.waset.org/9998526/harvard" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Harvard</a> <a href="https://publications.waset.org/9998526/json" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">JSON</a> <a href="https://publications.waset.org/9998526/mla" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">MLA</a> <a href="https://publications.waset.org/9998526/ris" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">RIS</a> <a href="https://publications.waset.org/9998526/xml" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">XML</a> <a href="https://publications.waset.org/9998526/iso690" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">ISO 690</a> <a href="https://publications.waset.org/9998526.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">1990</span> </span> </div> </div> <div class="card publication-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">832</span> Multimodal Database of Emotional Speech, Video and Gestures</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/search?q=Tomasz%20Sapi%C5%84ski">Tomasz Sapi艅ski</a>, <a href="https://publications.waset.org/search?q=Dorota%20Kami%C5%84ska"> Dorota Kami艅ska</a>, <a href="https://publications.waset.org/search?q=Adam%20Pelikant"> Adam Pelikant</a>, <a href="https://publications.waset.org/search?q=Egils%20Avots"> Egils Avots</a>, <a href="https://publications.waset.org/search?q=Cagri%20Ozcinar"> Cagri Ozcinar</a>, <a href="https://publications.waset.org/search?q=Gholamreza%20Anbarjafari"> Gholamreza Anbarjafari</a> </p> <p class="card-text"><strong>Abstract:</strong></p> People express emotions through different modalities. Integration of verbal and non-verbal communication channels creates a system in which the message is easier to understand. Expanding the focus to several expression forms can facilitate research on emotion recognition as well as human-machine interaction. In this article, the authors present a Polish emotional database composed of three modalities: facial expressions, body movement and gestures, and speech. The corpora contains recordings registered in studio conditions, acted out by 16 professional actors (8 male and 8 female). The data is labeled with six basic emotions categories, according to Ekman&rsquo;s emotion categories. To check the quality of performance, all recordings are evaluated by experts and volunteers. The database is available to academic community and might be useful in the study on audio-visual emotion recognition. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/search?q=Body%20movement" title="Body movement">Body movement</a>, <a href="https://publications.waset.org/search?q=emotion%20recognition" title=" emotion recognition"> emotion recognition</a>, <a href="https://publications.waset.org/search?q=emotional%0D%0Acorpus" title=" emotional corpus"> emotional corpus</a>, <a href="https://publications.waset.org/search?q=facial%20expressions" title=" facial expressions"> facial expressions</a>, <a href="https://publications.waset.org/search?q=gestures" title=" gestures"> gestures</a>, <a href="https://publications.waset.org/search?q=multimodal%20database" title=" multimodal database"> multimodal database</a>, <a href="https://publications.waset.org/search?q=speech." title=" speech."> speech.</a> </p> <a href="https://publications.waset.org/10009589/multimodal-database-of-emotional-speech-video-and-gestures" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/10009589/apa" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">APA</a> <a href="https://publications.waset.org/10009589/bibtex" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">BibTeX</a> <a href="https://publications.waset.org/10009589/chicago" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Chicago</a> <a href="https://publications.waset.org/10009589/endnote" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">EndNote</a> <a href="https://publications.waset.org/10009589/harvard" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Harvard</a> <a href="https://publications.waset.org/10009589/json" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">JSON</a> <a href="https://publications.waset.org/10009589/mla" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">MLA</a> <a href="https://publications.waset.org/10009589/ris" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">RIS</a> <a href="https://publications.waset.org/10009589/xml" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">XML</a> <a href="https://publications.waset.org/10009589/iso690" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">ISO 690</a> <a href="https://publications.waset.org/10009589.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">1126</span> </span> </div> </div> <div class="card publication-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">831</span> SVM-based Multiview Face Recognition by Generalization of Discriminant Analysis</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/search?q=Dakshina%20Ranjan%20Kisku">Dakshina Ranjan Kisku</a>, <a href="https://publications.waset.org/search?q=Hunny%20Mehrotra"> Hunny Mehrotra</a>, <a href="https://publications.waset.org/search?q=Jamuna%20Kanta%20Sing"> Jamuna Kanta Sing</a>, <a href="https://publications.waset.org/search?q=Phalguni%20Gupta"> Phalguni Gupta</a> </p> <p class="card-text"><strong>Abstract:</strong></p> <p>Identity verification of authentic persons by their multiview faces is a real valued problem in machine vision. Multiview faces are having difficulties due to non-linear representation in the feature space. This paper illustrates the usability of the generalization of LDA in the form of canonical covariate for face recognition to multiview faces. In the proposed work, the Gabor filter bank is used to extract facial features that characterized by spatial frequency, spatial locality and orientation. Gabor face representation captures substantial amount of variations of the face instances that often occurs due to illumination, pose and facial expression changes. Convolution of Gabor filter bank to face images of rotated profile views produce Gabor faces with high dimensional features vectors. Canonical covariate is then used to Gabor faces to reduce the high dimensional feature spaces into low dimensional subspaces. Finally, support vector machines are trained with canonical sub-spaces that contain reduced set of features and perform recognition task. The proposed system is evaluated with UMIST face database. The experiment results demonstrate the efficiency and robustness of the proposed system with high recognition rates.</p> <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/search?q=Biometrics" title="Biometrics">Biometrics</a>, <a href="https://publications.waset.org/search?q=Multiview%20face%20Recognition" title=" Multiview face Recognition"> Multiview face Recognition</a>, <a href="https://publications.waset.org/search?q=Gaborwavelets" title=" Gaborwavelets"> Gaborwavelets</a>, <a href="https://publications.waset.org/search?q=LDA" title=" LDA"> LDA</a>, <a href="https://publications.waset.org/search?q=SVM." title=" SVM."> SVM.</a> </p> <a href="https://publications.waset.org/6660/svm-based-multiview-face-recognition-by-generalization-of-discriminant-analysis" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/6660/apa" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">APA</a> <a href="https://publications.waset.org/6660/bibtex" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">BibTeX</a> <a href="https://publications.waset.org/6660/chicago" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Chicago</a> <a href="https://publications.waset.org/6660/endnote" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">EndNote</a> <a href="https://publications.waset.org/6660/harvard" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Harvard</a> <a href="https://publications.waset.org/6660/json" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">JSON</a> <a href="https://publications.waset.org/6660/mla" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">MLA</a> <a href="https://publications.waset.org/6660/ris" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">RIS</a> <a href="https://publications.waset.org/6660/xml" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">XML</a> <a href="https://publications.waset.org/6660/iso690" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">ISO 690</a> <a href="https://publications.waset.org/6660.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">1504</span> </span> </div> </div> <div class="card publication-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">830</span> 3D Face Recognition Using Modified PCA Methods</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/search?q=Omid%20Gervei">Omid Gervei</a>, <a href="https://publications.waset.org/search?q=Ahmad%20Ayatollahi"> Ahmad Ayatollahi</a>, <a href="https://publications.waset.org/search?q=Navid%20Gervei"> Navid Gervei</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In this paper we present an approach for 3D face recognition based on extracting principal components of range images by utilizing modified PCA methods namely 2DPCA and bidirectional 2DPCA also known as (2D) 2 PCA.A preprocessing stage was implemented on the images to smooth them using median and Gaussian filtering. In the normalization stage we locate the nose tip to lay it at the center of images then crop each image to a standard size of 100*100. In the face recognition stage we extract the principal component of each image using both 2DPCA and (2D) 2 PCA. Finally, we use Euclidean distance to measure the minimum distance between a given test image to the training images in the database. We also compare the result of using both methods. The best result achieved by experiments on a public face database shows that 83.3 percent is the rate of face recognition for a random facial expression. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/search?q=3D%20face%20recognition" title="3D face recognition">3D face recognition</a>, <a href="https://publications.waset.org/search?q=2DPCA" title=" 2DPCA"> 2DPCA</a>, <a href="https://publications.waset.org/search?q=%282D%29%202%20PCA" title=" (2D) 2 PCA"> (2D) 2 PCA</a>, <a href="https://publications.waset.org/search?q=Rangeimage" title=" Rangeimage"> Rangeimage</a> </p> <a href="https://publications.waset.org/5789/3d-face-recognition-using-modified-pca-methods" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/5789/apa" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">APA</a> <a href="https://publications.waset.org/5789/bibtex" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">BibTeX</a> <a href="https://publications.waset.org/5789/chicago" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Chicago</a> <a href="https://publications.waset.org/5789/endnote" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">EndNote</a> <a href="https://publications.waset.org/5789/harvard" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Harvard</a> <a href="https://publications.waset.org/5789/json" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">JSON</a> <a href="https://publications.waset.org/5789/mla" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">MLA</a> <a href="https://publications.waset.org/5789/ris" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">RIS</a> <a href="https://publications.waset.org/5789/xml" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">XML</a> <a href="https://publications.waset.org/5789/iso690" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">ISO 690</a> <a href="https://publications.waset.org/5789.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">3066</span> </span> </div> </div> <ul class="pagination"> <li class="page-item disabled"><span class="page-link">&lsaquo;</span></li> <li class="page-item active"><span class="page-link">1</span></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/search?q=facial%20recognition&amp;page=2">2</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/search?q=facial%20recognition&amp;page=3">3</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/search?q=facial%20recognition&amp;page=4">4</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/search?q=facial%20recognition&amp;page=5">5</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/search?q=facial%20recognition&amp;page=6">6</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/search?q=facial%20recognition&amp;page=7">7</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/search?q=facial%20recognition&amp;page=8">8</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/search?q=facial%20recognition&amp;page=9">9</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/search?q=facial%20recognition&amp;page=10">10</a></li> <li class="page-item disabled"><span class="page-link">...</span></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/search?q=facial%20recognition&amp;page=28">28</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/search?q=facial%20recognition&amp;page=29">29</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/search?q=facial%20recognition&amp;page=2" rel="next">&rsaquo;</a></li> </ul> </div> </main> <footer> <div id="infolinks" class="pt-3 pb-2"> <div class="container"> <div style="background-color:#f5f5f5;" class="p-3"> <div class="row"> <div class="col-md-2"> <ul class="list-unstyled"> About <li><a href="https://waset.org/page/support">About Us</a></li> <li><a href="https://waset.org/page/support#legal-information">Legal</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/WASET-16th-foundational-anniversary.pdf">WASET celebrates its 16th foundational anniversary</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Account <li><a href="https://waset.org/profile">My Account</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Explore <li><a href="https://waset.org/disciplines">Disciplines</a></li> <li><a href="https://waset.org/conferences">Conferences</a></li> <li><a href="https://waset.org/conference-programs">Conference Program</a></li> <li><a href="https://waset.org/committees">Committees</a></li> <li><a href="https://publications.waset.org">Publications</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Research <li><a href="https://publications.waset.org/abstracts">Abstracts</a></li> <li><a href="https://publications.waset.org">Periodicals</a></li> <li><a href="https://publications.waset.org/archive">Archive</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Open Science <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Science-Philosophy.pdf">Open Science Philosophy</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Science-Award.pdf">Open Science Award</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Society-Open-Science-and-Open-Innovation.pdf">Open Innovation</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Postdoctoral-Fellowship-Award.pdf">Postdoctoral Fellowship Award</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Scholarly-Research-Review.pdf">Scholarly Research Review</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Support <li><a href="https://waset.org/page/support">Support</a></li> <li><a href="https://waset.org/profile/messages/create">Contact Us</a></li> <li><a href="https://waset.org/profile/messages/create">Report Abuse</a></li> </ul> </div> </div> </div> </div> </div> <div class="container text-center"> <hr style="margin-top:0;margin-bottom:.3rem;"> <a href="https://creativecommons.org/licenses/by/4.0/" target="_blank" class="text-muted small">Creative Commons Attribution 4.0 International License</a> <div id="copy" class="mt-2">&copy; 2024 World Academy of Science, Engineering and Technology</div> </div> </footer> <a href="javascript:" id="return-to-top"><i class="fas fa-arrow-up"></i></a> <div class="modal" id="modal-template"> <div class="modal-dialog"> <div class="modal-content"> <div class="row m-0 mt-1"> <div class="col-md-12"> <button type="button" class="close" data-dismiss="modal" aria-label="Close"><span aria-hidden="true">&times;</span></button> </div> </div> <div class="modal-body"></div> </div> </div> </div> <script src="https://cdn.waset.org/static/plugins/jquery-3.3.1.min.js"></script> <script src="https://cdn.waset.org/static/plugins/bootstrap-4.2.1/js/bootstrap.bundle.min.js"></script> <script src="https://cdn.waset.org/static/js/site.js?v=150220211556"></script> <script> jQuery(document).ready(function() { /*jQuery.get("https://publications.waset.org/xhr/user-menu", function (response) { jQuery('#mainNavMenu').append(response); });*/ jQuery.get({ url: "https://publications.waset.org/xhr/user-menu", cache: false }).then(function(response){ jQuery('#mainNavMenu').append(response); }); }); </script> </body> </html>

Pages: 1 2 3 4 5 6 7 8 9 10