CINXE.COM
Search results for: Facial Features Extraction
<!DOCTYPE html> <html lang="en" dir="ltr"> <head> <!-- Google tag (gtag.js) --> <script async src="https://www.googletagmanager.com/gtag/js?id=G-P63WKM1TM1"></script> <script> window.dataLayer = window.dataLayer || []; function gtag(){dataLayer.push(arguments);} gtag('js', new Date()); gtag('config', 'G-P63WKM1TM1'); </script> <!-- Yandex.Metrika counter --> <script type="text/javascript" > (function(m,e,t,r,i,k,a){m[i]=m[i]||function(){(m[i].a=m[i].a||[]).push(arguments)}; m[i].l=1*new Date(); for (var j = 0; j < document.scripts.length; j++) {if (document.scripts[j].src === r) { return; }} k=e.createElement(t),a=e.getElementsByTagName(t)[0],k.async=1,k.src=r,a.parentNode.insertBefore(k,a)}) (window, document, "script", "https://mc.yandex.ru/metrika/tag.js", "ym"); ym(55165297, "init", { clickmap:false, trackLinks:true, accurateTrackBounce:true, webvisor:false }); </script> <noscript><div><img src="https://mc.yandex.ru/watch/55165297" style="position:absolute; left:-9999px;" alt="" /></div></noscript> <!-- /Yandex.Metrika counter --> <!-- Matomo --> <!-- End Matomo Code --> <title>Search results for: Facial Features Extraction</title> <meta name="description" content="Search results for: Facial Features Extraction"> <meta name="keywords" content="Facial Features Extraction"> <meta name="viewport" content="width=device-width, initial-scale=1, minimum-scale=1, maximum-scale=1, user-scalable=no"> <meta charset="utf-8"> <link href="https://cdn.waset.org/favicon.ico" type="image/x-icon" rel="shortcut icon"> <link href="https://cdn.waset.org/static/plugins/bootstrap-4.2.1/css/bootstrap.min.css" rel="stylesheet"> <link href="https://cdn.waset.org/static/plugins/fontawesome/css/all.min.css" rel="stylesheet"> <link href="https://cdn.waset.org/static/css/site.css?v=150220211555" rel="stylesheet"> </head> <body> <header> <div class="container"> <nav class="navbar navbar-expand-lg navbar-light"> <a class="navbar-brand" href="https://waset.org"> <img src="https://cdn.waset.org/static/images/wasetc.png" alt="Open Science Research Excellence" title="Open Science Research Excellence" /> </a> <button class="d-block d-lg-none navbar-toggler ml-auto" type="button" data-toggle="collapse" data-target="#navbarMenu" aria-controls="navbarMenu" aria-expanded="false" aria-label="Toggle navigation"> <span class="navbar-toggler-icon"></span> </button> <div class="w-100"> <div class="d-none d-lg-flex flex-row-reverse"> <form method="get" action="https://waset.org/search" class="form-inline my-2 my-lg-0"> <input class="form-control mr-sm-2" type="search" placeholder="Search Conferences" value="Facial Features Extraction" name="q" aria-label="Search"> <button class="btn btn-light my-2 my-sm-0" type="submit"><i class="fas fa-search"></i></button> </form> </div> <div class="collapse navbar-collapse mt-1" id="navbarMenu"> <ul class="navbar-nav ml-auto align-items-center" id="mainNavMenu"> <li class="nav-item"> <a class="nav-link" href="https://waset.org/conferences" title="Conferences in 2024/2025/2026">Conferences</a> </li> <li class="nav-item"> <a class="nav-link" href="https://waset.org/disciplines" title="Disciplines">Disciplines</a> </li> <li class="nav-item"> <a class="nav-link" href="https://waset.org/committees" rel="nofollow">Committees</a> </li> <li class="nav-item dropdown"> <a class="nav-link dropdown-toggle" href="#" id="navbarDropdownPublications" role="button" data-toggle="dropdown" aria-haspopup="true" aria-expanded="false"> Publications </a> <div class="dropdown-menu" aria-labelledby="navbarDropdownPublications"> <a class="dropdown-item" href="https://publications.waset.org/abstracts">Abstracts</a> <a class="dropdown-item" href="https://publications.waset.org">Periodicals</a> <a class="dropdown-item" href="https://publications.waset.org/archive">Archive</a> </div> </li> <li class="nav-item"> <a class="nav-link" href="https://waset.org/page/support" title="Support">Support</a> </li> </ul> </div> </div> </nav> </div> </header> <main> <div class="container mt-4"> <div class="row"> <div class="col-md-9 mx-auto"> <form method="get" action="https://publications.waset.org/search"> <div id="custom-search-input"> <div class="input-group"> <i class="fas fa-search"></i> <input type="text" class="search-query" name="q" placeholder="Author, Title, Abstract, Keywords" value="Facial Features Extraction"> <input type="submit" class="btn_search" value="Search"> </div> </div> </form> </div> </div> <div class="row mt-3"> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Commenced</strong> in January 2007</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Frequency:</strong> Monthly</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Edition:</strong> International</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Paper Count:</strong> 2201</div> </div> </div> </div> <h1 class="mt-3 mb-3 text-center" style="font-size:1.6rem;">Search results for: Facial Features Extraction</h1> <div class="card publication-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2201</span> Fast Facial Feature Extraction and Matching with Artificial Face Models</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/search?q=Y.%20H.%20Tsai">Y. H. Tsai</a>, <a href="https://publications.waset.org/search?q=Y.%20W.%20Chen"> Y. W. Chen</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Facial features are frequently used to represent local properties of a human face image in computer vision applications. In this paper, we present a fast algorithm that can extract the facial features online such that they can give a satisfying representation of a face image. It includes one step for a coarse detection of each facial feature by AdaBoost and another one to increase the accuracy of the found points by Active Shape Models (ASM) in the regions of interest. The resulted facial features are evaluated by matching with artificial face models in the applications of physiognomy. The distance measure between the features and those in the fate models from the database is carried out by means of the Hausdorff distance. In the experiment, the proposed method shows the efficient performance in facial feature extractions and online system of physiognomy. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/search?q=Facial%20feature%20extraction" title="Facial feature extraction">Facial feature extraction</a>, <a href="https://publications.waset.org/search?q=AdaBoost" title=" AdaBoost"> AdaBoost</a>, <a href="https://publications.waset.org/search?q=Active%20shapemodel" title=" Active shapemodel"> Active shapemodel</a>, <a href="https://publications.waset.org/search?q=Hausdorff%20distance" title=" Hausdorff distance"> Hausdorff distance</a> </p> <a href="https://publications.waset.org/5776/fast-facial-feature-extraction-and-matching-with-artificial-face-models" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/5776/apa" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">APA</a> <a href="https://publications.waset.org/5776/bibtex" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">BibTeX</a> <a href="https://publications.waset.org/5776/chicago" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Chicago</a> <a href="https://publications.waset.org/5776/endnote" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">EndNote</a> <a href="https://publications.waset.org/5776/harvard" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Harvard</a> <a href="https://publications.waset.org/5776/json" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">JSON</a> <a href="https://publications.waset.org/5776/mla" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">MLA</a> <a href="https://publications.waset.org/5776/ris" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">RIS</a> <a href="https://publications.waset.org/5776/xml" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">XML</a> <a href="https://publications.waset.org/5776/iso690" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">ISO 690</a> <a href="https://publications.waset.org/5776.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">1812</span> </span> </div> </div> <div class="card publication-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2200</span> Scale-Space Volume Descriptors for Automatic 3D Facial Feature Extraction</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/search?q=Daniel%20Chen">Daniel Chen</a>, <a href="https://publications.waset.org/search?q=George%20Mamic"> George Mamic</a>, <a href="https://publications.waset.org/search?q=Clinton%20Fookes"> Clinton Fookes</a>, <a href="https://publications.waset.org/search?q=Sridha%20Sridharan"> Sridha Sridharan</a> </p> <p class="card-text"><strong>Abstract:</strong></p> <p>An automatic method for the extraction of feature points for face based applications is proposed. The system is based upon volumetric feature descriptors, which in this paper has been extended to incorporate scale space. The method is robust to noise and has the ability to extract local and holistic features simultaneously from faces stored in a database. Extracted features are stable over a range of faces, with results indicating that in terms of intra-ID variability, the technique has the ability to outperform manual landmarking.</p> <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/search?q=Scale%20space%20volume%20descriptor" title="Scale space volume descriptor">Scale space volume descriptor</a>, <a href="https://publications.waset.org/search?q=feature%20extraction" title=" feature extraction"> feature extraction</a>, <a href="https://publications.waset.org/search?q=3D%20facial%20landmarking" title=" 3D facial landmarking"> 3D facial landmarking</a> </p> <a href="https://publications.waset.org/1712/scale-space-volume-descriptors-for-automatic-3d-facial-feature-extraction" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/1712/apa" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">APA</a> <a href="https://publications.waset.org/1712/bibtex" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">BibTeX</a> <a href="https://publications.waset.org/1712/chicago" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Chicago</a> <a href="https://publications.waset.org/1712/endnote" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">EndNote</a> <a href="https://publications.waset.org/1712/harvard" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Harvard</a> <a href="https://publications.waset.org/1712/json" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">JSON</a> <a href="https://publications.waset.org/1712/mla" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">MLA</a> <a href="https://publications.waset.org/1712/ris" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">RIS</a> <a href="https://publications.waset.org/1712/xml" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">XML</a> <a href="https://publications.waset.org/1712/iso690" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">ISO 690</a> <a href="https://publications.waset.org/1712.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">1508</span> </span> </div> </div> <div class="card publication-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2199</span> Optimized Facial Features-based Age Classification</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/search?q=Md.%20Zahangir%20Alom">Md. Zahangir Alom</a>, <a href="https://publications.waset.org/search?q=Mei-Lan%20Piao"> Mei-Lan Piao</a>, <a href="https://publications.waset.org/search?q=Md.%20Shariful%20Islam"> Md. Shariful Islam</a>, <a href="https://publications.waset.org/search?q=Nam%20Kim"> Nam Kim</a>, <a href="https://publications.waset.org/search?q=Jae-Hyeung%20Park"> Jae-Hyeung Park</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The evaluation and measurement of human body dimensions are achieved by physical anthropometry. This research was conducted in view of the importance of anthropometric indices of the face in forensic medicine, surgery, and medical imaging. The main goal of this research is to optimization of facial feature point by establishing a mathematical relationship among facial features and used optimize feature points for age classification. Since selected facial feature points are located to the area of mouth, nose, eyes and eyebrow on facial images, all desire facial feature points are extracted accurately. According this proposes method; sixteen Euclidean distances are calculated from the eighteen selected facial feature points vertically as well as horizontally. The mathematical relationships among horizontal and vertical distances are established. Moreover, it is also discovered that distances of the facial feature follows a constant ratio due to age progression. The distances between the specified features points increase with respect the age progression of a human from his or her childhood but the ratio of the distances does not change (d = 1 .618 ) . Finally, according to the proposed mathematical relationship four independent feature distances related to eight feature points are selected from sixteen distances and eighteen feature point-s respectively. These four feature distances are used for classification of age using Support Vector Machine (SVM)-Sequential Minimal Optimization (SMO) algorithm and shown around 96 % accuracy. Experiment result shows the proposed system is effective and accurate for age classification. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/search?q=3D%20Face%20Model" title="3D Face Model">3D Face Model</a>, <a href="https://publications.waset.org/search?q=Face%20Anthropometrics" title=" Face Anthropometrics"> Face Anthropometrics</a>, <a href="https://publications.waset.org/search?q=Facial%20Features%0AExtraction" title=" Facial Features Extraction"> Facial Features Extraction</a>, <a href="https://publications.waset.org/search?q=Feature%20distances" title=" Feature distances"> Feature distances</a>, <a href="https://publications.waset.org/search?q=SVM-SMO" title=" SVM-SMO"> SVM-SMO</a> </p> <a href="https://publications.waset.org/6689/optimized-facial-features-based-age-classification" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/6689/apa" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">APA</a> <a href="https://publications.waset.org/6689/bibtex" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">BibTeX</a> <a href="https://publications.waset.org/6689/chicago" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Chicago</a> <a href="https://publications.waset.org/6689/endnote" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">EndNote</a> <a href="https://publications.waset.org/6689/harvard" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Harvard</a> <a href="https://publications.waset.org/6689/json" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">JSON</a> <a href="https://publications.waset.org/6689/mla" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">MLA</a> <a href="https://publications.waset.org/6689/ris" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">RIS</a> <a href="https://publications.waset.org/6689/xml" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">XML</a> <a href="https://publications.waset.org/6689/iso690" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">ISO 690</a> <a href="https://publications.waset.org/6689.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">2047</span> </span> </div> </div> <div class="card publication-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2198</span> Facial Expressions Animation and Lip Tracking Using Facial Characteristic Points and Deformable Model</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/search?q=Hadi%20Seyedarabi">Hadi Seyedarabi</a>, <a href="https://publications.waset.org/search?q=Ali%20Aghagolzadeh"> Ali Aghagolzadeh</a>, <a href="https://publications.waset.org/search?q=Sohrab%20Khanmohammadi"> Sohrab Khanmohammadi</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Face and facial expressions play essential roles in interpersonal communication. Most of the current works on the facial expression recognition attempt to recognize a small set of the prototypic expressions such as happy, surprise, anger, sad, disgust and fear. However the most of the human emotions are communicated by changes in one or two of discrete features. In this paper, we develop a facial expressions synthesis system, based on the facial characteristic points (FCP's) tracking in the frontal image sequences. Selected FCP's are automatically tracked using a crosscorrelation based optical flow. The proposed synthesis system uses a simple deformable facial features model with a few set of control points that can be tracked in original facial image sequences. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/search?q=Deformable%20face%20model" title="Deformable face model">Deformable face model</a>, <a href="https://publications.waset.org/search?q=facial%20animation" title=" facial animation"> facial animation</a>, <a href="https://publications.waset.org/search?q=facialcharacteristic%20points" title=" facialcharacteristic points"> facialcharacteristic points</a>, <a href="https://publications.waset.org/search?q=optical%20flow." title=" optical flow."> optical flow.</a> </p> <a href="https://publications.waset.org/14909/facial-expressions-animation-and-lip-tracking-using-facial-characteristic-points-and-deformable-model" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/14909/apa" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">APA</a> <a href="https://publications.waset.org/14909/bibtex" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">BibTeX</a> <a href="https://publications.waset.org/14909/chicago" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Chicago</a> <a href="https://publications.waset.org/14909/endnote" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">EndNote</a> <a href="https://publications.waset.org/14909/harvard" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Harvard</a> <a href="https://publications.waset.org/14909/json" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">JSON</a> <a href="https://publications.waset.org/14909/mla" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">MLA</a> <a href="https://publications.waset.org/14909/ris" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">RIS</a> <a href="https://publications.waset.org/14909/xml" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">XML</a> <a href="https://publications.waset.org/14909/iso690" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">ISO 690</a> <a href="https://publications.waset.org/14909.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">1633</span> </span> </div> </div> <div class="card publication-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2197</span> Comparing Emotion Recognition from Voice and Facial Data Using Time Invariant Features</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/search?q=Vesna%20Kirandziska">Vesna Kirandziska</a>, <a href="https://publications.waset.org/search?q=Nevena%20Ackovska"> Nevena Ackovska</a>, <a href="https://publications.waset.org/search?q=Ana%20Madevska%20Bogdanova"> Ana Madevska Bogdanova</a> </p> <p class="card-text"><strong>Abstract:</strong></p> <p>The problem of emotion recognition is a challenging problem. It is still an open problem from the aspect of both intelligent systems and psychology. In this paper, both voice features and facial features are used for building an emotion recognition system. A Support Vector Machine classifiers are built by using raw data from video recordings. In this paper, the results obtained for the emotion recognition are given, and a discussion about the validity and the expressiveness of different emotions is presented. A comparison between the classifiers build from facial data only, voice data only and from the combination of both data is made here. The need for a better combination of the information from facial expression and voice data is argued.</p> <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/search?q=Emotion%20recognition" title="Emotion recognition">Emotion recognition</a>, <a href="https://publications.waset.org/search?q=facial%20recognition" title=" facial recognition"> facial recognition</a>, <a href="https://publications.waset.org/search?q=signal%20processing" title=" signal processing"> signal processing</a>, <a href="https://publications.waset.org/search?q=machine%20learning." title=" machine learning. "> machine learning. </a> </p> <a href="https://publications.waset.org/10004221/comparing-emotion-recognition-from-voice-and-facial-data-using-time-invariant-features" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/10004221/apa" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">APA</a> <a href="https://publications.waset.org/10004221/bibtex" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">BibTeX</a> <a href="https://publications.waset.org/10004221/chicago" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Chicago</a> <a href="https://publications.waset.org/10004221/endnote" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">EndNote</a> <a href="https://publications.waset.org/10004221/harvard" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Harvard</a> <a href="https://publications.waset.org/10004221/json" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">JSON</a> <a href="https://publications.waset.org/10004221/mla" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">MLA</a> <a href="https://publications.waset.org/10004221/ris" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">RIS</a> <a href="https://publications.waset.org/10004221/xml" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">XML</a> <a href="https://publications.waset.org/10004221/iso690" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">ISO 690</a> <a href="https://publications.waset.org/10004221.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">2018</span> </span> </div> </div> <div class="card publication-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2196</span> A Survey on Facial Feature Points Detection Techniques and Approaches</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/search?q=Rachid%20Ahdid">Rachid Ahdid</a>, <a href="https://publications.waset.org/search?q=Khaddouj%20Taifi"> Khaddouj Taifi</a>, <a href="https://publications.waset.org/search?q=Said%20Safi"> Said Safi</a>, <a href="https://publications.waset.org/search?q=Bouzid%20Manaut"> Bouzid Manaut</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Automatic detection of facial feature points plays an important role in applications such as facial feature tracking, human-machine interaction and face recognition. The majority of facial feature points detection methods using two-dimensional or three-dimensional data are covered in existing survey papers. In this article chosen approaches to the facial features detection have been gathered and described. This overview focuses on the class of researches exploiting facial feature points detection to represent facial surface for two-dimensional or three-dimensional face. In the conclusion, we discusses advantages and disadvantages of the presented algorithms. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/search?q=Facial%20feature%20points" title="Facial feature points">Facial feature points</a>, <a href="https://publications.waset.org/search?q=face%20recognition" title=" face recognition"> face recognition</a>, <a href="https://publications.waset.org/search?q=facial%20feature%0D%0Atracking" title=" facial feature tracking"> facial feature tracking</a>, <a href="https://publications.waset.org/search?q=two-dimensional%20data" title=" two-dimensional data"> two-dimensional data</a>, <a href="https://publications.waset.org/search?q=three-dimensional%20data." title=" three-dimensional data."> three-dimensional data.</a> </p> <a href="https://publications.waset.org/10005826/a-survey-on-facial-feature-points-detection-techniques-and-approaches" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/10005826/apa" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">APA</a> <a href="https://publications.waset.org/10005826/bibtex" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">BibTeX</a> <a href="https://publications.waset.org/10005826/chicago" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Chicago</a> <a href="https://publications.waset.org/10005826/endnote" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">EndNote</a> <a href="https://publications.waset.org/10005826/harvard" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Harvard</a> <a href="https://publications.waset.org/10005826/json" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">JSON</a> <a href="https://publications.waset.org/10005826/mla" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">MLA</a> <a href="https://publications.waset.org/10005826/ris" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">RIS</a> <a href="https://publications.waset.org/10005826/xml" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">XML</a> <a href="https://publications.waset.org/10005826/iso690" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">ISO 690</a> <a href="https://publications.waset.org/10005826.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">1681</span> </span> </div> </div> <div class="card publication-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2195</span> An Efficient Algorithm for Motion Detection Based Facial Expression Recognition using Optical Flow</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/search?q=Ahmad%20R.%20Naghsh-Nilchi">Ahmad R. Naghsh-Nilchi</a>, <a href="https://publications.waset.org/search?q=Mohammad%20Roshanzamir"> Mohammad Roshanzamir</a> </p> <p class="card-text"><strong>Abstract:</strong></p> One of the popular methods for recognition of facial expressions such as happiness, sadness and surprise is based on deformation of facial features. Motion vectors which show these deformations can be specified by the optical flow. In this method, for detecting emotions, the resulted set of motion vectors are compared with standard deformation template that caused by facial expressions. In this paper, a new method is introduced to compute the quantity of likeness in order to make decision based on the importance of obtained vectors from an optical flow approach. For finding the vectors, one of the efficient optical flow method developed by Gautama and VanHulle[17] is used. The suggested method has been examined over Cohn-Kanade AU-Coded Facial Expression Database, one of the most comprehensive collections of test images available. The experimental results show that our method could correctly recognize the facial expressions in 94% of case studies. The results also show that only a few number of image frames (three frames) are sufficient to detect facial expressions with rate of success of about 83.3%. This is a significant improvement over the available methods. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/search?q=Facial%20expression" title="Facial expression">Facial expression</a>, <a href="https://publications.waset.org/search?q=Facial%20features" title=" Facial features"> Facial features</a>, <a href="https://publications.waset.org/search?q=Optical%20flow" title=" Optical flow"> Optical flow</a>, <a href="https://publications.waset.org/search?q=Motion%20vectors." title="Motion vectors.">Motion vectors.</a> </p> <a href="https://publications.waset.org/6512/an-efficient-algorithm-for-motion-detection-based-facial-expression-recognition-using-optical-flow" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/6512/apa" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">APA</a> <a href="https://publications.waset.org/6512/bibtex" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">BibTeX</a> <a href="https://publications.waset.org/6512/chicago" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Chicago</a> <a href="https://publications.waset.org/6512/endnote" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">EndNote</a> <a href="https://publications.waset.org/6512/harvard" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Harvard</a> <a href="https://publications.waset.org/6512/json" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">JSON</a> <a href="https://publications.waset.org/6512/mla" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">MLA</a> <a href="https://publications.waset.org/6512/ris" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">RIS</a> <a href="https://publications.waset.org/6512/xml" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">XML</a> <a href="https://publications.waset.org/6512/iso690" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">ISO 690</a> <a href="https://publications.waset.org/6512.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">2376</span> </span> </div> </div> <div class="card publication-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2194</span> Reliable Face Alignment Using Two-Stage AAM</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/search?q=Sunho%20Ki">Sunho Ki</a>, <a href="https://publications.waset.org/search?q=Daehwan%20Kim"> Daehwan Kim</a>, <a href="https://publications.waset.org/search?q=Seongwon%20Cho"> Seongwon Cho</a>, <a href="https://publications.waset.org/search?q=Sun-Tae%20Chung"> Sun-Tae Chung</a>, <a href="https://publications.waset.org/search?q=Jaemin%20Kim"> Jaemin Kim</a>, <a href="https://publications.waset.org/search?q=Yun-Kwang%20Hong"> Yun-Kwang Hong</a>, <a href="https://publications.waset.org/search?q=Chang%20Joon%20Park"> Chang Joon Park</a>, <a href="https://publications.waset.org/search?q=Dongmin%20Kwon"> Dongmin Kwon</a>, <a href="https://publications.waset.org/search?q=Minhee%20Kang"> Minhee Kang</a>, <a href="https://publications.waset.org/search?q=Yusung%20Kim"> Yusung Kim</a>, <a href="https://publications.waset.org/search?q=Younghan%20Yoon"> Younghan Yoon</a> </p> <p class="card-text"><strong>Abstract:</strong></p> AAM (active appearance model) has been successfully applied to face and facial feature localization. However, its performance is sensitive to initial parameter values. In this paper, we propose a two-stage AAM for robust face alignment, which first fits an inner face-AAM model to the inner facial feature points of the face and then localizes the whole face and facial features by optimizing the whole face-AAM model parameters. Experiments show that the proposed face alignment method using two-stage AAM is more reliable to the background and the head pose than the standard AAM-based face alignment method. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/search?q=AAM" title="AAM">AAM</a>, <a href="https://publications.waset.org/search?q=Face%20Alignment" title=" Face Alignment"> Face Alignment</a>, <a href="https://publications.waset.org/search?q=Feature%20Extraction" title=" Feature Extraction"> Feature Extraction</a>, <a href="https://publications.waset.org/search?q=PCA" title=" PCA"> PCA</a> </p> <a href="https://publications.waset.org/14248/reliable-face-alignment-using-two-stage-aam" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/14248/apa" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">APA</a> <a href="https://publications.waset.org/14248/bibtex" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">BibTeX</a> <a href="https://publications.waset.org/14248/chicago" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Chicago</a> <a href="https://publications.waset.org/14248/endnote" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">EndNote</a> <a href="https://publications.waset.org/14248/harvard" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Harvard</a> <a href="https://publications.waset.org/14248/json" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">JSON</a> <a href="https://publications.waset.org/14248/mla" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">MLA</a> <a href="https://publications.waset.org/14248/ris" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">RIS</a> <a href="https://publications.waset.org/14248/xml" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">XML</a> <a href="https://publications.waset.org/14248/iso690" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">ISO 690</a> <a href="https://publications.waset.org/14248.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">1477</span> </span> </div> </div> <div class="card publication-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2193</span> Walsh-Hadamard Transform for Facial Feature Extraction in Face Recognition</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/search?q=M.%20Hassan">M. Hassan</a>, <a href="https://publications.waset.org/search?q=I.%20Osman"> I. Osman</a>, <a href="https://publications.waset.org/search?q=M.%20Yahia"> M. Yahia</a> </p> <p class="card-text"><strong>Abstract:</strong></p> <p>This Paper proposes a new facial feature extraction approach, Wash-Hadamard Transform (WHT). This approach is based on correlation between local pixels of the face image. Its primary advantage is the simplicity of its computation. The paper compares the proposed approach, WHT, which was traditionally used in data compression with two other known approaches: the Principal Component Analysis (PCA) and the Discrete Cosine Transform (DCT) using the face database of Olivetti Research Laboratory (ORL). In spite of its simple computation, the proposed algorithm (WHT) gave very close results to those obtained by the PCA and DCT. This paper initiates the research into WHT and the family of frequency transforms and examines their suitability for feature extraction in face recognition applications.</p> <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/search?q=Face%20Recognition" title="Face Recognition">Face Recognition</a>, <a href="https://publications.waset.org/search?q=Facial%20Feature%20Extraction" title=" Facial Feature Extraction"> Facial Feature Extraction</a>, <a href="https://publications.waset.org/search?q=Principal%20Component%20Analysis" title=" Principal Component Analysis"> Principal Component Analysis</a>, <a href="https://publications.waset.org/search?q=and%20Discrete%20Cosine%20Transform" title=" and Discrete Cosine Transform"> and Discrete Cosine Transform</a>, <a href="https://publications.waset.org/search?q=Wash-Hadamard%20Transform." title=" Wash-Hadamard Transform."> Wash-Hadamard Transform.</a> </p> <a href="https://publications.waset.org/2475/walsh-hadamard-transform-for-facial-feature-extraction-in-face-recognition" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/2475/apa" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">APA</a> <a href="https://publications.waset.org/2475/bibtex" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">BibTeX</a> <a href="https://publications.waset.org/2475/chicago" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Chicago</a> <a href="https://publications.waset.org/2475/endnote" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">EndNote</a> <a href="https://publications.waset.org/2475/harvard" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Harvard</a> <a href="https://publications.waset.org/2475/json" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">JSON</a> <a href="https://publications.waset.org/2475/mla" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">MLA</a> <a href="https://publications.waset.org/2475/ris" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">RIS</a> <a href="https://publications.waset.org/2475/xml" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">XML</a> <a href="https://publications.waset.org/2475/iso690" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">ISO 690</a> <a href="https://publications.waset.org/2475.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">2571</span> </span> </div> </div> <div class="card publication-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2192</span> Automatic Facial Skin Segmentation Using Possibilistic C-Means Algorithm for Evaluation of Facial Surgeries</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/search?q=Elham%20Alaee">Elham Alaee</a>, <a href="https://publications.waset.org/search?q=Mousa%20Shamsi"> Mousa Shamsi</a>, <a href="https://publications.waset.org/search?q=Hossein%20Ahmadi"> Hossein Ahmadi</a>, <a href="https://publications.waset.org/search?q=Soroosh%20Nazem"> Soroosh Nazem</a>, <a href="https://publications.waset.org/search?q=Mohammadhossein%20Sedaaghi"> Mohammadhossein Sedaaghi</a> </p> <p class="card-text"><strong>Abstract:</strong></p> <p>Human face has a fundamental role in the appearance of individuals. So the importance of facial surgeries is undeniable. Thus, there is a need for the appropriate and accurate facial skin segmentation in order to extract different features. Since Fuzzy CMeans (FCM) clustering algorithm doesn’t work appropriately for noisy images and outliers, in this paper we exploit Possibilistic CMeans (PCM) algorithm in order to segment the facial skin. For this purpose, first, we convert facial images from RGB to YCbCr color space. To evaluate performance of the proposed algorithm, the database of Sahand University of Technology, Tabriz, Iran was used. In order to have a better understanding from the proposed algorithm; FCM and Expectation-Maximization (EM) algorithms are also used for facial skin segmentation. The proposed method shows better results than the other segmentation methods. Results include misclassification error (0.032) and the region’s area error (0.045) for the proposed algorithm.</p> <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/search?q=Facial%20image" title="Facial image">Facial image</a>, <a href="https://publications.waset.org/search?q=segmentation" title=" segmentation"> segmentation</a>, <a href="https://publications.waset.org/search?q=PCM" title=" PCM"> PCM</a>, <a href="https://publications.waset.org/search?q=FCM" title=" FCM"> FCM</a>, <a href="https://publications.waset.org/search?q=skin%20error" title=" skin error"> skin error</a>, <a href="https://publications.waset.org/search?q=facial%20surgery." title=" facial surgery."> facial surgery.</a> </p> <a href="https://publications.waset.org/9998526/automatic-facial-skin-segmentation-using-possibilistic-c-means-algorithm-for-evaluation-of-facial-surgeries" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/9998526/apa" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">APA</a> <a href="https://publications.waset.org/9998526/bibtex" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">BibTeX</a> <a href="https://publications.waset.org/9998526/chicago" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Chicago</a> <a href="https://publications.waset.org/9998526/endnote" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">EndNote</a> <a href="https://publications.waset.org/9998526/harvard" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Harvard</a> <a href="https://publications.waset.org/9998526/json" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">JSON</a> <a href="https://publications.waset.org/9998526/mla" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">MLA</a> <a href="https://publications.waset.org/9998526/ris" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">RIS</a> <a href="https://publications.waset.org/9998526/xml" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">XML</a> <a href="https://publications.waset.org/9998526/iso690" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">ISO 690</a> <a href="https://publications.waset.org/9998526.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">1990</span> </span> </div> </div> <div class="card publication-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2191</span> Robust Face Recognition using AAM and Gabor Features</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/search?q=Sanghoon%20Kim">Sanghoon Kim</a>, <a href="https://publications.waset.org/search?q=Sun-Tae%20Chung"> Sun-Tae Chung</a>, <a href="https://publications.waset.org/search?q=Souhwan%20Jung"> Souhwan Jung</a>, <a href="https://publications.waset.org/search?q=Seoungseon%20Jeon"> Seoungseon Jeon</a>, <a href="https://publications.waset.org/search?q=Jaemin%20Kim"> Jaemin Kim</a>, <a href="https://publications.waset.org/search?q=Seongwon%20Cho"> Seongwon Cho</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In this paper, we propose a face recognition algorithm using AAM and Gabor features. Gabor feature vectors which are well known to be robust with respect to small variations of shape, scaling, rotation, distortion, illumination and poses in images are popularly employed for feature vectors for many object detection and recognition algorithms. EBGM, which is prominent among face recognition algorithms employing Gabor feature vectors, requires localization of facial feature points where Gabor feature vectors are extracted. However, localization method employed in EBGM is based on Gabor jet similarity and is sensitive to initial values. Wrong localization of facial feature points affects face recognition rate. AAM is known to be successfully applied to localization of facial feature points. In this paper, we devise a facial feature point localization method which first roughly estimate facial feature points using AAM and refine facial feature points using Gabor jet similarity-based facial feature localization method with initial points set by the rough facial feature points obtained from AAM, and propose a face recognition algorithm using the devised localization method for facial feature localization and Gabor feature vectors. It is observed through experiments that such a cascaded localization method based on both AAM and Gabor jet similarity is more robust than the localization method based on only Gabor jet similarity. Also, it is shown that the proposed face recognition algorithm using this devised localization method and Gabor feature vectors performs better than the conventional face recognition algorithm using Gabor jet similarity-based localization method and Gabor feature vectors like EBGM. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/search?q=Face%20Recognition" title="Face Recognition">Face Recognition</a>, <a href="https://publications.waset.org/search?q=AAM" title=" AAM"> AAM</a>, <a href="https://publications.waset.org/search?q=Gabor%20features" title=" Gabor features"> Gabor features</a>, <a href="https://publications.waset.org/search?q=EBGM." title=" EBGM."> EBGM.</a> </p> <a href="https://publications.waset.org/3583/robust-face-recognition-using-aam-and-gabor-features" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/3583/apa" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">APA</a> <a href="https://publications.waset.org/3583/bibtex" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">BibTeX</a> <a href="https://publications.waset.org/3583/chicago" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Chicago</a> <a href="https://publications.waset.org/3583/endnote" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">EndNote</a> <a href="https://publications.waset.org/3583/harvard" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Harvard</a> <a href="https://publications.waset.org/3583/json" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">JSON</a> <a href="https://publications.waset.org/3583/mla" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">MLA</a> <a href="https://publications.waset.org/3583/ris" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">RIS</a> <a href="https://publications.waset.org/3583/xml" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">XML</a> <a href="https://publications.waset.org/3583/iso690" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">ISO 690</a> <a href="https://publications.waset.org/3583.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">2206</span> </span> </div> </div> <div class="card publication-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2190</span> Curvelet Features with Mouth and Face Edge Ratios for Facial Expression Identification</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/search?q=S.%20Kherchaoui">S. Kherchaoui</a>, <a href="https://publications.waset.org/search?q=A.%20Houacine"> A. Houacine</a> </p> <p class="card-text"><strong>Abstract:</strong></p> <p>This paper presents a facial expression recognition system. It performs identification and classification of the seven basic expressions; happy, surprise, fear, disgust, sadness, anger, and neutral states. It consists of three main parts. The first one is the detection of a face and the corresponding facial features to extract the most expressive portion of the face, followed by a normalization of the region of interest. Then calculus of curvelet coefficients is performed with dimensionality reduction through principal component analysis. The resulting coefficients are combined with two ratios; mouth ratio and face edge ratio to constitute the whole feature vector. The third step is the classification of the emotional state using the SVM method in the feature space.</p> <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/search?q=Facial%20expression%20identification" title="Facial expression identification">Facial expression identification</a>, <a href="https://publications.waset.org/search?q=curvelet%20coefficients" title=" curvelet coefficients"> curvelet coefficients</a>, <a href="https://publications.waset.org/search?q=support%20vector%20machine%20%28SVM%29." title=" support vector machine (SVM)."> support vector machine (SVM).</a> </p> <a href="https://publications.waset.org/9998501/curvelet-features-with-mouth-and-face-edge-ratios-for-facial-expression-identification" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/9998501/apa" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">APA</a> <a href="https://publications.waset.org/9998501/bibtex" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">BibTeX</a> <a href="https://publications.waset.org/9998501/chicago" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Chicago</a> <a href="https://publications.waset.org/9998501/endnote" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">EndNote</a> <a href="https://publications.waset.org/9998501/harvard" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Harvard</a> <a href="https://publications.waset.org/9998501/json" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">JSON</a> <a href="https://publications.waset.org/9998501/mla" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">MLA</a> <a href="https://publications.waset.org/9998501/ris" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">RIS</a> <a href="https://publications.waset.org/9998501/xml" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">XML</a> <a href="https://publications.waset.org/9998501/iso690" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">ISO 690</a> <a href="https://publications.waset.org/9998501.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">1842</span> </span> </div> </div> <div class="card publication-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2189</span> Unequal Error Protection of Facial Features for Personal ID Images Coding</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/search?q=T.%20Hirner">T. Hirner</a>, <a href="https://publications.waset.org/search?q=J.%20Polec"> J. Polec</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This paper presents an approach for an unequal error protection of facial features of personal ID images coding. We consider unequal error protection (UEP) strategies for the efficient progressive transmission of embedded image codes over noisy channels. This new method is based on the progressive image compression embedded zerotree wavelet (EZW) algorithm and UEP technique with defined region of interest (ROI). In this case is ROI equal facial features within personal ID image. ROI technique is important in applications with different parts of importance. In ROI coding, a chosen ROI is encoded with higher quality than the background (BG). Unequal error protection of image is provided by different coding techniques and encoding LL band separately. In our proposed method, image is divided into two parts (ROI, BG) that consist of more important bytes (MIB) and less important bytes (LIB). The proposed unequal error protection of image transmission has shown to be more appropriate to low bit rate applications, producing better quality output for ROI of the compresses image. The experimental results verify effectiveness of the design. The results of our method demonstrate the comparison of the UEP of image transmission with defined ROI with facial features and the equal error protection (EEP) over additive white gaussian noise (AWGN) channel. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/search?q=Embedded%20zerotree%20wavelet%20%28EZW%29" title="Embedded zerotree wavelet (EZW)">Embedded zerotree wavelet (EZW)</a>, <a href="https://publications.waset.org/search?q=equal%20error%20protection%20%28EEP%29" title=" equal error protection (EEP)"> equal error protection (EEP)</a>, <a href="https://publications.waset.org/search?q=facial%20features" title=" facial features"> facial features</a>, <a href="https://publications.waset.org/search?q=personal%20ID%20images" title=" personal ID images"> personal ID images</a>, <a href="https://publications.waset.org/search?q=region%20of%20interest%20%28ROI%29" title=" region of interest (ROI)"> region of interest (ROI)</a>, <a href="https://publications.waset.org/search?q=unequal%20error%20protection%20%28UEP%29" title=" unequal error protection (UEP)"> unequal error protection (UEP)</a> </p> <a href="https://publications.waset.org/3990/unequal-error-protection-of-facial-features-for-personal-id-images-coding" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/3990/apa" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">APA</a> <a href="https://publications.waset.org/3990/bibtex" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">BibTeX</a> <a href="https://publications.waset.org/3990/chicago" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Chicago</a> <a href="https://publications.waset.org/3990/endnote" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">EndNote</a> <a href="https://publications.waset.org/3990/harvard" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Harvard</a> <a href="https://publications.waset.org/3990/json" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">JSON</a> <a href="https://publications.waset.org/3990/mla" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">MLA</a> <a href="https://publications.waset.org/3990/ris" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">RIS</a> <a href="https://publications.waset.org/3990/xml" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">XML</a> <a href="https://publications.waset.org/3990/iso690" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">ISO 690</a> <a href="https://publications.waset.org/3990.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">1490</span> </span> </div> </div> <div class="card publication-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2188</span> Human Facial Expression Recognition using MANFIS Model</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/search?q=V.%20Gomathi">V. Gomathi</a>, <a href="https://publications.waset.org/search?q=Dr.%20K.%20Ramar"> Dr. K. Ramar</a>, <a href="https://publications.waset.org/search?q=A.%20Santhiyaku%20Jeevakumar"> A. Santhiyaku Jeevakumar</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Facial expression analysis plays a significant role for human computer interaction. Automatic analysis of human facial expression is still a challenging problem with many applications. In this paper, we propose neuro-fuzzy based automatic facial expression recognition system to recognize the human facial expressions like happy, fear, sad, angry, disgust and surprise. Initially facial image is segmented into three regions from which the uniform Local Binary Pattern (LBP) texture features distributions are extracted and represented as a histogram descriptor. The facial expressions are recognized using Multiple Adaptive Neuro Fuzzy Inference System (MANFIS). The proposed system designed and tested with JAFFE face database. The proposed model reports 94.29% of classification accuracy. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/search?q=Adaptive%20neuro-fuzzy%20inference%20system" title="Adaptive neuro-fuzzy inference system">Adaptive neuro-fuzzy inference system</a>, <a href="https://publications.waset.org/search?q=Facialexpression" title=" Facialexpression"> Facialexpression</a>, <a href="https://publications.waset.org/search?q=Local%20binary%20pattern" title=" Local binary pattern"> Local binary pattern</a>, <a href="https://publications.waset.org/search?q=Uniform%20Histogram" title=" Uniform Histogram"> Uniform Histogram</a> </p> <a href="https://publications.waset.org/13988/human-facial-expression-recognition-using-manfis-model" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/13988/apa" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">APA</a> <a href="https://publications.waset.org/13988/bibtex" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">BibTeX</a> <a href="https://publications.waset.org/13988/chicago" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Chicago</a> <a href="https://publications.waset.org/13988/endnote" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">EndNote</a> <a href="https://publications.waset.org/13988/harvard" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Harvard</a> <a href="https://publications.waset.org/13988/json" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">JSON</a> <a href="https://publications.waset.org/13988/mla" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">MLA</a> <a href="https://publications.waset.org/13988/ris" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">RIS</a> <a href="https://publications.waset.org/13988/xml" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">XML</a> <a href="https://publications.waset.org/13988/iso690" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">ISO 690</a> <a href="https://publications.waset.org/13988.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">2103</span> </span> </div> </div> <div class="card publication-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2187</span> Road Extraction Using Stationary Wavelet Transform</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/search?q=Somkait%20Udomhunsakul"> Somkait Udomhunsakul</a> </p> <p class="card-text"><strong>Abstract:</strong></p> <p>In this paper, a novel road extraction method using Stationary Wavelet Transform is proposed. To detect road features from color aerial satellite imagery, Mexican hat Wavelet filters are used by applying the Stationary Wavelet Transform in a multiresolution, multi-scale, sense and forming the products of Wavelet coefficients at a different scales to locate and identify road features at a few scales. In addition, the shifting of road features locations is considered through multiple scales for robust road extraction in the asymmetry road feature profiles. From the experimental results, the proposed method leads to a useful technique to form the basis of road feature extraction. Also, the method is general and can be applied to other features in imagery.</p> <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/search?q=Road%20extraction" title=" Road extraction"> Road extraction</a>, <a href="https://publications.waset.org/search?q=Multiresolution" title=" Multiresolution"> Multiresolution</a>, <a href="https://publications.waset.org/search?q=Stationary%20Wavelet%20Transform" title=" Stationary Wavelet Transform"> Stationary Wavelet Transform</a>, <a href="https://publications.waset.org/search?q=Multi-scale%20analysis" title=" Multi-scale analysis"> Multi-scale analysis</a> </p> <a href="https://publications.waset.org/16132/road-extraction-using-stationary-wavelet-transform" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/16132/apa" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">APA</a> <a href="https://publications.waset.org/16132/bibtex" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">BibTeX</a> <a href="https://publications.waset.org/16132/chicago" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Chicago</a> <a href="https://publications.waset.org/16132/endnote" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">EndNote</a> <a href="https://publications.waset.org/16132/harvard" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Harvard</a> <a href="https://publications.waset.org/16132/json" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">JSON</a> <a href="https://publications.waset.org/16132/mla" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">MLA</a> <a href="https://publications.waset.org/16132/ris" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">RIS</a> <a href="https://publications.waset.org/16132/xml" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">XML</a> <a href="https://publications.waset.org/16132/iso690" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">ISO 690</a> <a href="https://publications.waset.org/16132.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">1878</span> </span> </div> </div> <div class="card publication-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2186</span> An Automatic Feature Extraction Technique for 2D Punch Shapes</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/search?q=Awais%20Ahmad%20Khan">Awais Ahmad Khan</a>, <a href="https://publications.waset.org/search?q=Emad%20Abouel%20Nasr"> Emad Abouel Nasr</a>, <a href="https://publications.waset.org/search?q=H.%20M.%20A.%20Hussein"> H. M. A. Hussein</a>, <a href="https://publications.waset.org/search?q=Abdulrahman%20Al-Ahmari"> Abdulrahman Al-Ahmari</a> </p> <p class="card-text"><strong>Abstract:</strong></p> <p>Sheet-metal parts have been widely applied in electronics, communication and mechanical industries in recent decades; but the advancement in sheet-metal part design and manufacturing is still behind in comparison with the increasing importance of sheet-metal parts in modern industry. This paper presents a methodology for automatic extraction of some common 2D internal sheet metal features. The features used in this study are taken from Unipunch ™ catalogue. The extraction process starts with the data extraction from STEP file using an object oriented approach and with the application of suitable algorithms and rules, all features contained in the catalogue are automatically extracted. Since the extracted features include geometry and engineering information, they will be effective for downstream application such as feature rebuilding and process planning.</p> <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/search?q=Feature%20Extraction" title="Feature Extraction">Feature Extraction</a>, <a href="https://publications.waset.org/search?q=Internal%20Features" title=" Internal Features"> Internal Features</a>, <a href="https://publications.waset.org/search?q=Punch%20Shapes" title=" Punch Shapes"> Punch Shapes</a>, <a href="https://publications.waset.org/search?q=Sheet%20metal" title=" Sheet metal"> Sheet metal</a>, <a href="https://publications.waset.org/search?q=STEP." title=" STEP."> STEP.</a> </p> <a href="https://publications.waset.org/10004369/an-automatic-feature-extraction-technique-for-2d-punch-shapes" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/10004369/apa" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">APA</a> <a href="https://publications.waset.org/10004369/bibtex" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">BibTeX</a> <a href="https://publications.waset.org/10004369/chicago" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Chicago</a> <a href="https://publications.waset.org/10004369/endnote" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">EndNote</a> <a href="https://publications.waset.org/10004369/harvard" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Harvard</a> <a href="https://publications.waset.org/10004369/json" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">JSON</a> <a href="https://publications.waset.org/10004369/mla" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">MLA</a> <a href="https://publications.waset.org/10004369/ris" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">RIS</a> <a href="https://publications.waset.org/10004369/xml" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">XML</a> <a href="https://publications.waset.org/10004369/iso690" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">ISO 690</a> <a href="https://publications.waset.org/10004369.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">2092</span> </span> </div> </div> <div class="card publication-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2185</span> Myanmar Character Recognition Using Eight Direction Chain Code Frequency Features </h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/search?q=Kyi%20Pyar%20Zaw">Kyi Pyar Zaw</a>, <a href="https://publications.waset.org/search?q=Zin%20Mar%20Kyu"> Zin Mar Kyu </a> </p> <p class="card-text"><strong>Abstract:</strong></p> <p>Character recognition is the process of converting a text image file into editable and searchable text file. Feature Extraction is the heart of any character recognition system. The character recognition rate may be low or high depending on the extracted features. In the proposed paper, 25 features for one character are used in character recognition. Basically, there are three steps of character recognition such as character segmentation, feature extraction and classification. In segmentation step, horizontal cropping method is used for line segmentation and vertical cropping method is used for character segmentation. In the Feature extraction step, features are extracted in two ways. The first way is that the 8 features are extracted from the entire input character using eight direction chain code frequency extraction. The second way is that the input character is divided into 16 blocks. For each block, although 8 feature values are obtained through eight-direction chain code frequency extraction method, we define the sum of these 8 feature values as a feature for one block. Therefore, 16 features are extracted from that 16 blocks in the second way. We use the number of holes feature to cluster the similar characters. We can recognize the almost Myanmar common characters with various font sizes by using these features. All these 25 features are used in both training part and testing part. In the classification step, the characters are classified by matching the all features of input character with already trained features of characters.</p> <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/search?q=Chain%20code%20frequency" title="Chain code frequency">Chain code frequency</a>, <a href="https://publications.waset.org/search?q=character%20recognition" title=" character recognition"> character recognition</a>, <a href="https://publications.waset.org/search?q=feature%20extraction" title=" feature extraction"> feature extraction</a>, <a href="https://publications.waset.org/search?q=features%20matching" title=" features matching"> features matching</a>, <a href="https://publications.waset.org/search?q=segmentation." title=" segmentation."> segmentation.</a> </p> <a href="https://publications.waset.org/10009080/myanmar-character-recognition-using-eight-direction-chain-code-frequency-features" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/10009080/apa" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">APA</a> <a href="https://publications.waset.org/10009080/bibtex" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">BibTeX</a> <a href="https://publications.waset.org/10009080/chicago" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Chicago</a> <a href="https://publications.waset.org/10009080/endnote" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">EndNote</a> <a href="https://publications.waset.org/10009080/harvard" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Harvard</a> <a href="https://publications.waset.org/10009080/json" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">JSON</a> <a href="https://publications.waset.org/10009080/mla" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">MLA</a> <a href="https://publications.waset.org/10009080/ris" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">RIS</a> <a href="https://publications.waset.org/10009080/xml" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">XML</a> <a href="https://publications.waset.org/10009080/iso690" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">ISO 690</a> <a href="https://publications.waset.org/10009080.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">753</span> </span> </div> </div> <div class="card publication-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2184</span> Study of Features for Hand-printed Recognition</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/search?q=Satish%20Kumar">Satish Kumar</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The feature extraction method(s) used to recognize hand-printed characters play an important role in ICR applications. In order to achieve high recognition rate for a recognition system, the choice of a feature that suits for the given script is certainly an important task. Even if a new feature required to be designed for a given script, it is essential to know the recognition ability of the existing features for that script. Devanagari script is being used in various Indian languages besides Hindi the mother tongue of majority of Indians. This research examines a variety of feature extraction approaches, which have been used in various ICR/OCR applications, in context to Devanagari hand-printed script. The study is conducted theoretically and experimentally on more that 10 feature extraction methods. The various feature extraction methods have been evaluated on Devanagari hand-printed database comprising more than 25000 characters belonging to 43 alphabets. The recognition ability of the features have been evaluated using three classifiers i.e. k-NN, MLP and SVM. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/search?q=Features" title="Features">Features</a>, <a href="https://publications.waset.org/search?q=Hand-printed" title=" Hand-printed"> Hand-printed</a>, <a href="https://publications.waset.org/search?q=Devanagari" title=" Devanagari"> Devanagari</a>, <a href="https://publications.waset.org/search?q=Classifier" title=" Classifier"> Classifier</a>, <a href="https://publications.waset.org/search?q=Database" title=" Database"> Database</a> </p> <a href="https://publications.waset.org/8497/study-of-features-for-hand-printed-recognition" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/8497/apa" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">APA</a> <a href="https://publications.waset.org/8497/bibtex" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">BibTeX</a> <a href="https://publications.waset.org/8497/chicago" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Chicago</a> <a href="https://publications.waset.org/8497/endnote" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">EndNote</a> <a href="https://publications.waset.org/8497/harvard" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Harvard</a> <a href="https://publications.waset.org/8497/json" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">JSON</a> <a href="https://publications.waset.org/8497/mla" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">MLA</a> <a href="https://publications.waset.org/8497/ris" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">RIS</a> <a href="https://publications.waset.org/8497/xml" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">XML</a> <a href="https://publications.waset.org/8497/iso690" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">ISO 690</a> <a href="https://publications.waset.org/8497.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">1729</span> </span> </div> </div> <div class="card publication-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2183</span> Face Recognition Using Discrete Orthogonal Hahn Moments</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/search?q=Fatima%20Akhmedova">Fatima Akhmedova</a>, <a href="https://publications.waset.org/search?q=Simon%20Liao"> Simon Liao</a> </p> <p class="card-text"><strong>Abstract:</strong></p> One of the most critical decision points in the design of a face recognition system is the choice of an appropriate face representation. Effective feature descriptors are expected to convey sufficient, invariant and non-redundant facial information. In this work we propose a set of Hahn moments as a new approach for feature description. Hahn moments have been widely used in image analysis due to their invariance, nonredundancy and the ability to extract features either globally and locally. To assess the applicability of Hahn moments to Face Recognition we conduct two experiments on the Olivetti Research Laboratory (ORL) database and University of Notre-Dame (UND) X1 biometric collection. Fusion of the global features along with the features from local facial regions are used as an input for the conventional k-NN classifier. The method reaches an accuracy of 93% of correctly recognized subjects for the ORL database and 94% for the UND database. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/search?q=Face%20Recognition" title="Face Recognition">Face Recognition</a>, <a href="https://publications.waset.org/search?q=Hahn%20moments" title=" Hahn moments"> Hahn moments</a>, <a href="https://publications.waset.org/search?q=Recognition-by-parts" title=" Recognition-by-parts"> Recognition-by-parts</a>, <a href="https://publications.waset.org/search?q=Time-lapse." title=" Time-lapse."> Time-lapse.</a> </p> <a href="https://publications.waset.org/10002256/face-recognition-using-discrete-orthogonal-hahn-moments" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/10002256/apa" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">APA</a> <a href="https://publications.waset.org/10002256/bibtex" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">BibTeX</a> <a href="https://publications.waset.org/10002256/chicago" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Chicago</a> <a href="https://publications.waset.org/10002256/endnote" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">EndNote</a> <a href="https://publications.waset.org/10002256/harvard" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Harvard</a> <a href="https://publications.waset.org/10002256/json" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">JSON</a> <a href="https://publications.waset.org/10002256/mla" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">MLA</a> <a href="https://publications.waset.org/10002256/ris" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">RIS</a> <a href="https://publications.waset.org/10002256/xml" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">XML</a> <a href="https://publications.waset.org/10002256/iso690" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">ISO 690</a> <a href="https://publications.waset.org/10002256.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">1777</span> </span> </div> </div> <div class="card publication-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2182</span> Enhancing capabilities of Texture Extraction for Color Image Retrieval</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/search?q=Pranam%20Janney">Pranam Janney</a>, <a href="https://publications.waset.org/search?q=Sridhar%20G"> Sridhar G</a>, <a href="https://publications.waset.org/search?q=Sridhar%20V."> Sridhar V.</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Content-Based Image Retrieval has been a major area of research in recent years. Efficient image retrieval with high precision would require an approach which combines usage of both the color and texture features of the image. In this paper we propose a method for enhancing the capabilities of texture based feature extraction and further demonstrate the use of these enhanced texture features in Texture-Based Color Image Retrieval. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/search?q=Image%20retrieval" title="Image retrieval">Image retrieval</a>, <a href="https://publications.waset.org/search?q=texture%20feature%20extraction" title=" texture feature extraction"> texture feature extraction</a>, <a href="https://publications.waset.org/search?q=color%0Aextraction" title=" color extraction"> color extraction</a> </p> <a href="https://publications.waset.org/12008/enhancing-capabilities-of-texture-extraction-for-color-image-retrieval" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/12008/apa" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">APA</a> <a href="https://publications.waset.org/12008/bibtex" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">BibTeX</a> <a href="https://publications.waset.org/12008/chicago" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Chicago</a> <a href="https://publications.waset.org/12008/endnote" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">EndNote</a> <a href="https://publications.waset.org/12008/harvard" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Harvard</a> <a href="https://publications.waset.org/12008/json" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">JSON</a> <a href="https://publications.waset.org/12008/mla" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">MLA</a> <a href="https://publications.waset.org/12008/ris" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">RIS</a> <a href="https://publications.waset.org/12008/xml" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">XML</a> <a href="https://publications.waset.org/12008/iso690" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">ISO 690</a> <a href="https://publications.waset.org/12008.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">1622</span> </span> </div> </div> <div class="card publication-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2181</span> Automatic Extraction of Features and Opinion-Oriented Sentences from Customer Reviews</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/search?q=Khairullah%20Khan">Khairullah Khan</a>, <a href="https://publications.waset.org/search?q=Baharum%20B.%20Baharudin"> Baharum B. Baharudin</a>, <a href="https://publications.waset.org/search?q=Aurangzeb%20Khan"> Aurangzeb Khan</a>, <a href="https://publications.waset.org/search?q=Fazal_e_Malik"> Fazal_e_Malik</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Opinion extraction about products from customer reviews is becoming an interesting area of research. Customer reviews about products are nowadays available from blogs and review sites. Also tools are being developed for extraction of opinion from these reviews to help the user as well merchants to track the most suitable choice of product. Therefore efficient method and techniques are needed to extract opinions from review and blogs. As reviews of products mostly contains discussion about the features, functions and services, therefore, efficient techniques are required to extract user comments about the desired features, functions and services. In this paper we have proposed a novel idea to find features of product from user review in an efficient way. Our focus in this paper is to get the features and opinion-oriented words about products from text through auxiliary verbs (AV) {is, was, are, were, has, have, had}. From the results of our experiments we found that 82% of features and 85% of opinion-oriented sentences include AVs. Thus these AVs are good indicators of features and opinion orientation in customer reviews. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/search?q=Classification" title="Classification">Classification</a>, <a href="https://publications.waset.org/search?q=Customer%20Reviews" title=" Customer Reviews"> Customer Reviews</a>, <a href="https://publications.waset.org/search?q=Helping%20Verbs" title=" Helping Verbs"> Helping Verbs</a>, <a href="https://publications.waset.org/search?q=Opinion%20Mining." title="Opinion Mining.">Opinion Mining.</a> </p> <a href="https://publications.waset.org/10183/automatic-extraction-of-features-and-opinion-oriented-sentences-from-customer-reviews" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/10183/apa" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">APA</a> <a href="https://publications.waset.org/10183/bibtex" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">BibTeX</a> <a href="https://publications.waset.org/10183/chicago" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Chicago</a> <a href="https://publications.waset.org/10183/endnote" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">EndNote</a> <a href="https://publications.waset.org/10183/harvard" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Harvard</a> <a href="https://publications.waset.org/10183/json" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">JSON</a> <a href="https://publications.waset.org/10183/mla" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">MLA</a> <a href="https://publications.waset.org/10183/ris" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">RIS</a> <a href="https://publications.waset.org/10183/xml" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">XML</a> <a href="https://publications.waset.org/10183/iso690" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">ISO 690</a> <a href="https://publications.waset.org/10183.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">2096</span> </span> </div> </div> <div class="card publication-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2180</span> Face Localization and Recognition in Varied Expressions and Illumination</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/search?q=Hui-Yu%20Huang">Hui-Yu Huang</a>, <a href="https://publications.waset.org/search?q=Shih-Hang%20Hsu"> Shih-Hang Hsu</a> </p> <p class="card-text"><strong>Abstract:</strong></p> <p>In this paper, we propose a robust scheme to work face alignment and recognition under various influences. For face representation, illumination influence and variable expressions are the important factors, especially the accuracy of facial localization and face recognition. In order to solve those of factors, we propose a robust approach to overcome these problems. This approach consists of two phases. One phase is preprocessed for face images by means of the proposed illumination normalization method. The location of facial features can fit more efficient and fast based on the proposed image blending. On the other hand, based on template matching, we further improve the active shape models (called as IASM) to locate the face shape more precise which can gain the recognized rate in the next phase. The other phase is to process feature extraction by using principal component analysis and face recognition by using support vector machine classifiers. The results show that this proposed method can obtain good facial localization and face recognition with varied illumination and local distortion.</p> <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/search?q=Gabor%20filter" title="Gabor filter">Gabor filter</a>, <a href="https://publications.waset.org/search?q=improved%20active%20shape%20model%20%28IASM%29" title=" improved active shape model (IASM)"> improved active shape model (IASM)</a>, <a href="https://publications.waset.org/search?q=principal%20component%20analysis%20%28PCA%29" title=" principal component analysis (PCA)"> principal component analysis (PCA)</a>, <a href="https://publications.waset.org/search?q=face%20alignment" title=" face alignment"> face alignment</a>, <a href="https://publications.waset.org/search?q=face%20recognition" title=" face recognition"> face recognition</a>, <a href="https://publications.waset.org/search?q=support%20vector%20machine%20%28SVM%29" title=" support vector machine (SVM)"> support vector machine (SVM)</a> </p> <a href="https://publications.waset.org/5662/face-localization-and-recognition-in-varied-expressions-and-illumination" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/5662/apa" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">APA</a> <a href="https://publications.waset.org/5662/bibtex" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">BibTeX</a> <a href="https://publications.waset.org/5662/chicago" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Chicago</a> <a href="https://publications.waset.org/5662/endnote" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">EndNote</a> <a href="https://publications.waset.org/5662/harvard" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Harvard</a> <a href="https://publications.waset.org/5662/json" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">JSON</a> <a href="https://publications.waset.org/5662/mla" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">MLA</a> <a href="https://publications.waset.org/5662/ris" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">RIS</a> <a href="https://publications.waset.org/5662/xml" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">XML</a> <a href="https://publications.waset.org/5662/iso690" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">ISO 690</a> <a href="https://publications.waset.org/5662.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">1491</span> </span> </div> </div> <div class="card publication-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2179</span> Facial Expression Phoenix (FePh): An Annotated Sequenced Dataset for Facial and Emotion-Specified Expressions in Sign Language</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/search?q=Marie%20Alaghband">Marie Alaghband</a>, <a href="https://publications.waset.org/search?q=Niloofar%20Yousefi"> Niloofar Yousefi</a>, <a href="https://publications.waset.org/search?q=Ivan%20Garibay"> Ivan Garibay</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Facial expressions are important parts of both gesture and sign language recognition systems. Despite the recent advances in both fields, annotated facial expression datasets in the context of sign language are still scarce resources. In this manuscript, we introduce an annotated sequenced facial expression dataset in the context of sign language, comprising over 3000 facial images extracted from the daily news and weather forecast of the public tv-station PHOENIX. Unlike the majority of currently existing facial expression datasets, FePh provides sequenced semi-blurry facial images with different head poses, orientations, and movements. In addition, in the majority of images, identities are mouthing the words, which makes the data more challenging. To annotate this dataset we consider primary, secondary, and tertiary dyads of seven basic emotions of "sad", "surprise", "fear", "angry", "neutral", "disgust", and "happy". We also considered the "None" class if the image’s facial expression could not be described by any of the aforementioned emotions. Although we provide FePh as a facial expression dataset of signers in sign language, it has a wider application in gesture recognition and Human Computer Interaction (HCI) systems. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/search?q=Annotated%20Facial%20Expression%20Dataset" title="Annotated Facial Expression Dataset">Annotated Facial Expression Dataset</a>, <a href="https://publications.waset.org/search?q=Sign%20Language%0D%0ARecognition" title=" Sign Language Recognition"> Sign Language Recognition</a>, <a href="https://publications.waset.org/search?q=Gesture%20Recognition" title=" Gesture Recognition"> Gesture Recognition</a>, <a href="https://publications.waset.org/search?q=Sequenced%20Facial%20Expression%0D%0ADataset." title=" Sequenced Facial Expression Dataset."> Sequenced Facial Expression Dataset.</a> </p> <a href="https://publications.waset.org/10011933/facial-expression-phoenix-feph-an-annotated-sequenced-dataset-for-facial-and-emotion-specified-expressions-in-sign-language" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/10011933/apa" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">APA</a> <a href="https://publications.waset.org/10011933/bibtex" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">BibTeX</a> <a href="https://publications.waset.org/10011933/chicago" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Chicago</a> <a href="https://publications.waset.org/10011933/endnote" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">EndNote</a> <a href="https://publications.waset.org/10011933/harvard" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Harvard</a> <a href="https://publications.waset.org/10011933/json" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">JSON</a> <a href="https://publications.waset.org/10011933/mla" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">MLA</a> <a href="https://publications.waset.org/10011933/ris" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">RIS</a> <a href="https://publications.waset.org/10011933/xml" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">XML</a> <a href="https://publications.waset.org/10011933/iso690" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">ISO 690</a> <a href="https://publications.waset.org/10011933.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">721</span> </span> </div> </div> <div class="card publication-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2178</span> A Hybrid Method for Eyes Detection in Facial Images</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/search?q=Muhammad%20Shafi">Muhammad Shafi</a>, <a href="https://publications.waset.org/search?q=Paul%20W.%20H.%20Chung"> Paul W. H. Chung</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This paper proposes a hybrid method for eyes localization in facial images. The novelty is in combining techniques that utilise colour, edge and illumination cues to improve accuracy. The method is based on the observation that eye regions have dark colour, high density of edges and low illumination as compared to other parts of face. The first step in the method is to extract connected regions from facial images using colour, edge density and illumination cues separately. Some of the regions are then removed by applying rules that are based on the general geometry and shape of eyes. The remaining connected regions obtained through these three cues are then combined in a systematic way to enhance the identification of the candidate regions for the eyes. The geometry and shape based rules are then applied again to further remove the false eye regions. The proposed method was tested using images from the PICS facial images database. The proposed method has 93.7% and 87% accuracies for initial blobs extraction and final eye detection respectively. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/search?q=Erosion" title="Erosion">Erosion</a>, <a href="https://publications.waset.org/search?q=dilation" title=" dilation"> dilation</a>, <a href="https://publications.waset.org/search?q=Edge-density" title=" Edge-density"> Edge-density</a> </p> <a href="https://publications.waset.org/7427/a-hybrid-method-for-eyes-detection-in-facial-images" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/7427/apa" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">APA</a> <a href="https://publications.waset.org/7427/bibtex" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">BibTeX</a> <a href="https://publications.waset.org/7427/chicago" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Chicago</a> <a href="https://publications.waset.org/7427/endnote" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">EndNote</a> <a href="https://publications.waset.org/7427/harvard" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Harvard</a> <a href="https://publications.waset.org/7427/json" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">JSON</a> <a href="https://publications.waset.org/7427/mla" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">MLA</a> <a href="https://publications.waset.org/7427/ris" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">RIS</a> <a href="https://publications.waset.org/7427/xml" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">XML</a> <a href="https://publications.waset.org/7427/iso690" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">ISO 690</a> <a href="https://publications.waset.org/7427.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">2050</span> </span> </div> </div> <div class="card publication-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2177</span> Local Spectrum Feature Extraction for Face Recognition</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/search?q=Muhammad%20Imran%20Ahmad">Muhammad Imran Ahmad</a>, <a href="https://publications.waset.org/search?q=Ruzelita%20Ngadiran"> Ruzelita Ngadiran</a>, <a href="https://publications.waset.org/search?q=Mohd%20Nazrin%20Md%20Isa"> Mohd Nazrin Md Isa</a>, <a href="https://publications.waset.org/search?q=Nor%20Ashidi%20Mat%20Isa"> Nor Ashidi Mat Isa</a>, <a href="https://publications.waset.org/search?q=Mohd%20Zaizu%20Ilyas"> Mohd Zaizu Ilyas</a>, <a href="https://publications.waset.org/search?q=Raja%20Abdullah%20Raja%20Ahmad"> Raja Abdullah Raja Ahmad</a>, <a href="https://publications.waset.org/search?q=Said%20Amirul%20Anwar%20Ab%20Hamid"> Said Amirul Anwar Ab Hamid</a>, <a href="https://publications.waset.org/search?q=Muzammil%20Jusoh"> Muzammil Jusoh</a> </p> <p class="card-text"><strong>Abstract:</strong></p> <p>This paper presents two techniques, local feature extraction using image spectrum and low frequency spectrum modelling using GMM to capture the underlying statistical information to improve the performance of face recognition system. Local spectrum features are extracted using overlap sub block window that are mapped on the face image. For each of this block, spatial domain is transformed to frequency domain using DFT. A low frequency coefficient is preserved by discarding high frequency coefficients by applying rectangular mask on the spectrum of the facial image. Low frequency information is non- Gaussian in the feature space and by using combination of several Gaussian functions that has different statistical properties, the best feature representation can be modelled using probability density function. The recognition process is performed using maximum likelihood value computed using pre-calculated GMM components. The method is tested using FERET datasets and is able to achieved 92% recognition rates.</p> <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/search?q=Local%20features%20modelling" title="Local features modelling">Local features modelling</a>, <a href="https://publications.waset.org/search?q=face%20recognition%20system" title=" face recognition system"> face recognition system</a>, <a href="https://publications.waset.org/search?q=Gaussian%20mixture%20models." title=" Gaussian mixture models."> Gaussian mixture models.</a> </p> <a href="https://publications.waset.org/10000915/local-spectrum-feature-extraction-for-face-recognition" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/10000915/apa" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">APA</a> <a href="https://publications.waset.org/10000915/bibtex" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">BibTeX</a> <a href="https://publications.waset.org/10000915/chicago" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Chicago</a> <a href="https://publications.waset.org/10000915/endnote" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">EndNote</a> <a href="https://publications.waset.org/10000915/harvard" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Harvard</a> <a href="https://publications.waset.org/10000915/json" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">JSON</a> <a href="https://publications.waset.org/10000915/mla" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">MLA</a> <a href="https://publications.waset.org/10000915/ris" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">RIS</a> <a href="https://publications.waset.org/10000915/xml" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">XML</a> <a href="https://publications.waset.org/10000915/iso690" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">ISO 690</a> <a href="https://publications.waset.org/10000915.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">2253</span> </span> </div> </div> <div class="card publication-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2176</span> Feature Reduction of Nearest Neighbor Classifiers using Genetic Algorithm</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/search?q=M.%20Analoui">M. Analoui</a>, <a href="https://publications.waset.org/search?q=M.%20Fadavi%20Amiri"> M. Fadavi Amiri</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The design of a pattern classifier includes an attempt to select, among a set of possible features, a minimum subset of weakly correlated features that better discriminate the pattern classes. This is usually a difficult task in practice, normally requiring the application of heuristic knowledge about the specific problem domain. The selection and quality of the features representing each pattern have a considerable bearing on the success of subsequent pattern classification. Feature extraction is the process of deriving new features from the original features in order to reduce the cost of feature measurement, increase classifier efficiency, and allow higher classification accuracy. Many current feature extraction techniques involve linear transformations of the original pattern vectors to new vectors of lower dimensionality. While this is useful for data visualization and increasing classification efficiency, it does not necessarily reduce the number of features that must be measured since each new feature may be a linear combination of all of the features in the original pattern vector. In this paper a new approach is presented to feature extraction in which feature selection, feature extraction, and classifier training are performed simultaneously using a genetic algorithm. In this approach each feature value is first normalized by a linear equation, then scaled by the associated weight prior to training, testing, and classification. A knn classifier is used to evaluate each set of feature weights. The genetic algorithm optimizes a vector of feature weights, which are used to scale the individual features in the original pattern vectors in either a linear or a nonlinear fashion. By this approach, the number of features used in classifying can be finely reduced. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/search?q=Feature%20reduction" title="Feature reduction">Feature reduction</a>, <a href="https://publications.waset.org/search?q=genetic%20algorithm" title=" genetic algorithm"> genetic algorithm</a>, <a href="https://publications.waset.org/search?q=pattern%0Aclassification" title=" pattern classification"> pattern classification</a>, <a href="https://publications.waset.org/search?q=nearest%20neighbor%20rule%20classifiers%20%28k-NNR%29." title=" nearest neighbor rule classifiers (k-NNR)."> nearest neighbor rule classifiers (k-NNR).</a> </p> <a href="https://publications.waset.org/6432/feature-reduction-of-nearest-neighbor-classifiers-using-genetic-algorithm" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/6432/apa" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">APA</a> <a href="https://publications.waset.org/6432/bibtex" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">BibTeX</a> <a href="https://publications.waset.org/6432/chicago" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Chicago</a> <a href="https://publications.waset.org/6432/endnote" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">EndNote</a> <a href="https://publications.waset.org/6432/harvard" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Harvard</a> <a href="https://publications.waset.org/6432/json" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">JSON</a> <a href="https://publications.waset.org/6432/mla" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">MLA</a> <a href="https://publications.waset.org/6432/ris" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">RIS</a> <a href="https://publications.waset.org/6432/xml" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">XML</a> <a href="https://publications.waset.org/6432/iso690" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">ISO 690</a> <a href="https://publications.waset.org/6432.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">1768</span> </span> </div> </div> <div class="card publication-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2175</span> Improved Feature Extraction Technique for Handling Occlusion in Automatic Facial Expression Recognition</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/search?q=Khadijat%20T.%20Bamigbade">Khadijat T. Bamigbade</a>, <a href="https://publications.waset.org/search?q=Olufade%20F.%20W.%20Onifade"> Olufade F. W. Onifade</a> </p> <p class="card-text"><strong>Abstract:</strong></p> <p>The field of automatic facial expression analysis has been an active research area in the last two decades. Its vast applicability in various domains has drawn so much attention into developing techniques and dataset that mirror real life scenarios. Many techniques such as Local Binary Patterns and its variants (CLBP, LBP-TOP) and lately, deep learning techniques, have been used for facial expression recognition. However, the problem of occlusion has not been sufficiently handled, making their results not applicable in real life situations. This paper develops a simple, yet highly efficient method tagged Local Binary Pattern-Histogram of Gradient (LBP-HOG) with occlusion detection in face image, using a multi-class SVM for Action Unit and in turn expression recognition. Our method was evaluated on three publicly available datasets which are JAFFE, CK, SFEW. Experimental results showed that our approach performed considerably well when compared with state-of-the-art algorithms and gave insight to occlusion detection as a key step to handling expression in wild.</p> <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/search?q=Automatic%20facial%20expression%20analysis" title="Automatic facial expression analysis">Automatic facial expression analysis</a>, <a href="https://publications.waset.org/search?q=local%20binary%20pattern" title=" local binary pattern"> local binary pattern</a>, <a href="https://publications.waset.org/search?q=LBP-HOG" title=" LBP-HOG"> LBP-HOG</a>, <a href="https://publications.waset.org/search?q=occlusion%20detection." title=" occlusion detection."> occlusion detection.</a> </p> <a href="https://publications.waset.org/10010270/improved-feature-extraction-technique-for-handling-occlusion-in-automatic-facial-expression-recognition" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/10010270/apa" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">APA</a> <a href="https://publications.waset.org/10010270/bibtex" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">BibTeX</a> <a href="https://publications.waset.org/10010270/chicago" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Chicago</a> <a href="https://publications.waset.org/10010270/endnote" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">EndNote</a> <a href="https://publications.waset.org/10010270/harvard" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Harvard</a> <a href="https://publications.waset.org/10010270/json" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">JSON</a> <a href="https://publications.waset.org/10010270/mla" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">MLA</a> <a href="https://publications.waset.org/10010270/ris" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">RIS</a> <a href="https://publications.waset.org/10010270/xml" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">XML</a> <a href="https://publications.waset.org/10010270/iso690" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">ISO 690</a> <a href="https://publications.waset.org/10010270.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">783</span> </span> </div> </div> <div class="card publication-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2174</span> Emotion Classification for Students with Autism in Mathematics E-learning using Physiological and Facial Expression Measures</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/search?q=Hui-Chuan%20Chu">Hui-Chuan Chu</a>, <a href="https://publications.waset.org/search?q=Min-Ju%20Liao"> Min-Ju Liao</a>, <a href="https://publications.waset.org/search?q=Wei-Kai%20Cheng"> Wei-Kai Cheng</a>, <a href="https://publications.waset.org/search?q=William%20Wei-Jen%20Tsai"> William Wei-Jen Tsai</a>, <a href="https://publications.waset.org/search?q=Yuh-Min%20Chen"> Yuh-Min Chen</a> </p> <p class="card-text"><strong>Abstract:</strong></p> <p>Avoiding learning failures in mathematics e-learning environments caused by emotional problems in students with autism has become an important topic for combining of special education with information and communications technology. This study presents an adaptive emotional adjustment model in mathematics e-learning for students with autism, emphasizing the lack of emotional perception in mathematics e-learning systems. In addition, an emotion classification for students with autism was developed by inducing emotions in mathematical learning environments to record changes in the physiological signals and facial expressions of students. Using these methods, 58 emotional features were obtained. These features were then processed using one-way ANOVA and information gain (IG). After reducing the feature dimension, methods of support vector machines (SVM), k-nearest neighbors (KNN), and classification and regression trees (CART) were used to classify four emotional categories: baseline, happy, angry, and anxious. After testing and comparisons, in a situation without feature selection, the accuracy rate of the SVM classification can reach as high as 79.3-%. After using IG to reduce the feature dimension, with only 28 features remaining, SVM still has a classification accuracy of 78.2-%. The results of this research could enhance the effectiveness of eLearning in special education.</p> <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/search?q=Emotion%20classification" title="Emotion classification">Emotion classification</a>, <a href="https://publications.waset.org/search?q=Physiological%20and%20facial%20Expression%20measures" title=" Physiological and facial Expression measures"> Physiological and facial Expression measures</a>, <a href="https://publications.waset.org/search?q=Students%20with%20autism" title=" Students with autism"> Students with autism</a>, <a href="https://publications.waset.org/search?q=Mathematics%20e-learning." title=" Mathematics e-learning."> Mathematics e-learning.</a> </p> <a href="https://publications.waset.org/9119/emotion-classification-for-students-with-autism-in-mathematics-e-learning-using-physiological-and-facial-expression-measures" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/9119/apa" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">APA</a> <a href="https://publications.waset.org/9119/bibtex" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">BibTeX</a> <a href="https://publications.waset.org/9119/chicago" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Chicago</a> <a href="https://publications.waset.org/9119/endnote" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">EndNote</a> <a href="https://publications.waset.org/9119/harvard" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Harvard</a> <a href="https://publications.waset.org/9119/json" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">JSON</a> <a href="https://publications.waset.org/9119/mla" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">MLA</a> <a href="https://publications.waset.org/9119/ris" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">RIS</a> <a href="https://publications.waset.org/9119/xml" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">XML</a> <a href="https://publications.waset.org/9119/iso690" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">ISO 690</a> <a href="https://publications.waset.org/9119.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">1781</span> </span> </div> </div> <div class="card publication-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2173</span> The Effect of Facial Expressions on Students in Virtual Educational Environments</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/search?q=G.%20Theonas">G. Theonas</a>, <a href="https://publications.waset.org/search?q=D.%20Hobbs"> D. Hobbs</a>, <a href="https://publications.waset.org/search?q=D.%20Rigas"> D. Rigas</a> </p> <p class="card-text"><strong>Abstract:</strong></p> <p>The scope of this research was to study the relation between the facial expressions of three lecturers in a real academic lecture theatre and the reactions of the students to those expressions. The first experiment aimed to investigate the effectiveness of a virtual lecturer-s expressions on the students- learning outcome in a virtual pedagogical environment. The second experiment studied the effectiveness of a single facial expression, i.e. the smile, on the students- performance. Both experiments involved virtual lectures, with virtual lecturers teaching real students. The results suggest that the students performed better by 86%, in the lectures where the lecturer performed facial expressions compared to the results of the lectures that did not use facial expressions. However, when simple or basic information was used, the facial expressions of the virtual lecturer had no substantial effect on the students- learning outcome. Finally, the appropriate use of smiles increased the interest of the students and consequently their performance.</p> <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/search?q=emotion" title="emotion">emotion</a>, <a href="https://publications.waset.org/search?q=facial%20expression" title=" facial expression"> facial expression</a>, <a href="https://publications.waset.org/search?q=smile" title=" smile"> smile</a>, <a href="https://publications.waset.org/search?q=virtual%20educational%20environment" title=" virtual educational environment"> virtual educational environment</a>, <a href="https://publications.waset.org/search?q=virtual%20learning" title=" virtual learning"> virtual learning</a>, <a href="https://publications.waset.org/search?q=virtual%20lecturer." title=" virtual lecturer."> virtual lecturer.</a> </p> <a href="https://publications.waset.org/3623/the-effect-of-facial-expressions-on-students-in-virtual-educational-environments" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/3623/apa" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">APA</a> <a href="https://publications.waset.org/3623/bibtex" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">BibTeX</a> <a href="https://publications.waset.org/3623/chicago" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Chicago</a> <a href="https://publications.waset.org/3623/endnote" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">EndNote</a> <a href="https://publications.waset.org/3623/harvard" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Harvard</a> <a href="https://publications.waset.org/3623/json" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">JSON</a> <a href="https://publications.waset.org/3623/mla" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">MLA</a> <a href="https://publications.waset.org/3623/ris" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">RIS</a> <a href="https://publications.waset.org/3623/xml" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">XML</a> <a href="https://publications.waset.org/3623/iso690" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">ISO 690</a> <a href="https://publications.waset.org/3623.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">1986</span> </span> </div> </div> <div class="card publication-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2172</span> Fusion Classifier for Open-Set Face Recognition with Pose Variations</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/search?q=Gee-Sern%20Jison%20Hsu">Gee-Sern Jison Hsu</a> </p> <p class="card-text"><strong>Abstract:</strong></p> <p>A fusion classifier composed of two modules, one made by a hidden Markov model (HMM) and the other by a support vector machine (SVM), is proposed to recognize faces with pose variations in open-set recognition settings. The HMM module captures the evolution of facial features across a subject-s face using the subject-s facial images only, without referencing to the faces of others. Because of the captured evolutionary process of facial features, the HMM module retains certain robustness against pose variations, yielding low false rejection rates (FRR) for recognizing faces across poses. This is, however, on the price of poor false acceptance rates (FAR) when recognizing other faces because it is built upon withinclass samples only. The SVM module in the proposed model is developed following a special design able to substantially diminish the FAR and further lower down the FRR. The proposed fusion classifier has been evaluated in performance using the CMU PIE database, and proven effective for open-set face recognition with pose variations. Experiments have also shown that it outperforms the face classifier made by HMM or SVM alone.</p> <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/search?q=Face%20recognition" title="Face recognition">Face recognition</a>, <a href="https://publications.waset.org/search?q=open-set%20identification" title=" open-set identification"> open-set identification</a>, <a href="https://publications.waset.org/search?q=hidden%20Markov%20model" title=" hidden Markov model"> hidden Markov model</a>, <a href="https://publications.waset.org/search?q=support%20vector%20machines." title=" support vector machines."> support vector machines.</a> </p> <a href="https://publications.waset.org/5636/fusion-classifier-for-open-set-face-recognition-with-pose-variations" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/5636/apa" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">APA</a> <a href="https://publications.waset.org/5636/bibtex" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">BibTeX</a> <a href="https://publications.waset.org/5636/chicago" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Chicago</a> <a href="https://publications.waset.org/5636/endnote" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">EndNote</a> <a href="https://publications.waset.org/5636/harvard" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">Harvard</a> <a href="https://publications.waset.org/5636/json" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">JSON</a> <a href="https://publications.waset.org/5636/mla" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">MLA</a> <a href="https://publications.waset.org/5636/ris" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">RIS</a> <a href="https://publications.waset.org/5636/xml" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">XML</a> <a href="https://publications.waset.org/5636/iso690" target="_blank" rel="nofollow" class="btn btn-primary btn-sm">ISO 690</a> <a href="https://publications.waset.org/5636.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">1691</span> </span> </div> </div> <ul class="pagination"> <li class="page-item disabled"><span class="page-link">‹</span></li> <li class="page-item active"><span class="page-link">1</span></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/search?q=Facial%20Features%0AExtraction&page=2">2</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/search?q=Facial%20Features%0AExtraction&page=3">3</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/search?q=Facial%20Features%0AExtraction&page=4">4</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/search?q=Facial%20Features%0AExtraction&page=5">5</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/search?q=Facial%20Features%0AExtraction&page=6">6</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/search?q=Facial%20Features%0AExtraction&page=7">7</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/search?q=Facial%20Features%0AExtraction&page=8">8</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/search?q=Facial%20Features%0AExtraction&page=9">9</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/search?q=Facial%20Features%0AExtraction&page=10">10</a></li> <li class="page-item disabled"><span class="page-link">...</span></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/search?q=Facial%20Features%0AExtraction&page=73">73</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/search?q=Facial%20Features%0AExtraction&page=74">74</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/search?q=Facial%20Features%0AExtraction&page=2" rel="next">›</a></li> </ul> </div> </main> <footer> <div id="infolinks" class="pt-3 pb-2"> <div class="container"> <div style="background-color:#f5f5f5;" class="p-3"> <div class="row"> <div class="col-md-2"> <ul class="list-unstyled"> About <li><a href="https://waset.org/page/support">About Us</a></li> <li><a href="https://waset.org/page/support#legal-information">Legal</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/WASET-16th-foundational-anniversary.pdf">WASET celebrates its 16th foundational anniversary</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Account <li><a href="https://waset.org/profile">My Account</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Explore <li><a href="https://waset.org/disciplines">Disciplines</a></li> <li><a href="https://waset.org/conferences">Conferences</a></li> <li><a href="https://waset.org/conference-programs">Conference Program</a></li> <li><a href="https://waset.org/committees">Committees</a></li> <li><a href="https://publications.waset.org">Publications</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Research <li><a href="https://publications.waset.org/abstracts">Abstracts</a></li> <li><a href="https://publications.waset.org">Periodicals</a></li> <li><a href="https://publications.waset.org/archive">Archive</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Open Science <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Science-Philosophy.pdf">Open Science Philosophy</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Science-Award.pdf">Open Science Award</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Society-Open-Science-and-Open-Innovation.pdf">Open Innovation</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Postdoctoral-Fellowship-Award.pdf">Postdoctoral Fellowship Award</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Scholarly-Research-Review.pdf">Scholarly Research Review</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Support <li><a href="https://waset.org/page/support">Support</a></li> <li><a href="https://waset.org/profile/messages/create">Contact Us</a></li> <li><a href="https://waset.org/profile/messages/create">Report Abuse</a></li> </ul> </div> </div> </div> </div> </div> <div class="container text-center"> <hr style="margin-top:0;margin-bottom:.3rem;"> <a href="https://creativecommons.org/licenses/by/4.0/" target="_blank" class="text-muted small">Creative Commons Attribution 4.0 International License</a> <div id="copy" class="mt-2">© 2024 World Academy of Science, Engineering and Technology</div> </div> </footer> <a href="javascript:" id="return-to-top"><i class="fas fa-arrow-up"></i></a> <div class="modal" id="modal-template"> <div class="modal-dialog"> <div class="modal-content"> <div class="row m-0 mt-1"> <div class="col-md-12"> <button type="button" class="close" data-dismiss="modal" aria-label="Close"><span aria-hidden="true">×</span></button> </div> </div> <div class="modal-body"></div> </div> </div> </div> <script src="https://cdn.waset.org/static/plugins/jquery-3.3.1.min.js"></script> <script src="https://cdn.waset.org/static/plugins/bootstrap-4.2.1/js/bootstrap.bundle.min.js"></script> <script src="https://cdn.waset.org/static/js/site.js?v=150220211556"></script> <script> jQuery(document).ready(function() { /*jQuery.get("https://publications.waset.org/xhr/user-menu", function (response) { jQuery('#mainNavMenu').append(response); });*/ jQuery.get({ url: "https://publications.waset.org/xhr/user-menu", cache: false }).then(function(response){ jQuery('#mainNavMenu').append(response); }); }); </script> </body> </html>