Search results for: pose and spontaneous facial expression
class="dropdown-menu" aria-labelledby="navbarDropdownPublications"> <a class="dropdown-item" href="https://publications.waset.org/abstracts">Abstracts</a> <a class="dropdown-item" href="https://publications.waset.org">Periodicals</a> <a class="dropdown-item" href="https://publications.waset.org/archive">Archive</a> </div> </li> <li class="nav-item"> <a class="nav-link" href="https://waset.org/page/support" title="Support">Support</a> </li> </ul> </div> </div> </nav> </div> </header> <main> <div class="container mt-4"> <div class="row"> <div class="col-md-9 mx-auto"> <form method="get" action="https://publications.waset.org/abstracts/search"> <div id="custom-search-input"> <div class="input-group"> <i class="fas fa-search"></i> <input type="text" class="search-query" name="q" placeholder="Author, Title, Abstract, Keywords" value="pose and spontaneous facial expression"> <input type="submit" class="btn_search" value="Search"> </div> </div> </form> </div> </div> <div class="row mt-3"> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Commenced</strong> in January 2007</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Frequency:</strong> Monthly</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Edition:</strong> International</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Paper Count:</strong> 3123</div> </div> </div> </div> <h1 class="mt-3 mb-3 text-center" style="font-size:1.6rem;">Search results for: pose and spontaneous facial expression</h1> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3123</span> KSVD-SVM Approach for Spontaneous Facial Expression Recognition</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Dawood%20Al%20Chanti">Dawood Al Chanti</a>, <a href="https://publications.waset.org/abstracts/search?q=Alice%20Caplier"> Alice Caplier</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Sparse representations of signals have received a great deal of attention in recent years. In this paper, the interest of using sparse representation as a mean for performing sparse discriminative analysis between spontaneous facial expressions is demonstrated. An automatic facial expressions recognition system is presented. It uses a KSVD-SVM approach which is made of three main stages: A pre-processing and feature extraction stage, which solves the problem of shared subspace distribution based on the random projection theory, to obtain low dimensional discriminative and reconstructive features; A dictionary learning and sparse coding stage, which uses the KSVD model to learn discriminative under or over dictionaries for sparse coding; Finally a classification stage, which uses a SVM classifier for facial expressions recognition. Our main concern is to be able to recognize non-basic affective states and non-acted expressions. Extensive experiments on the JAFFE static acted facial expressions database but also on the DynEmo dynamic spontaneous facial expressions database exhibit very good recognition rates. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=dictionary%20learning" title="dictionary learning">dictionary learning</a>, <a href="https://publications.waset.org/abstracts/search?q=random%20projection" title=" random projection"> random projection</a>, <a href="https://publications.waset.org/abstracts/search?q=pose%20and%20spontaneous%20facial%20expression" title=" pose and spontaneous facial expression"> pose and spontaneous facial expression</a>, <a href="https://publications.waset.org/abstracts/search?q=sparse%20representation" title=" sparse representation"> sparse representation</a> </p> <a href="https://publications.waset.org/abstracts/51683/ksvd-svm-approach-for-spontaneous-facial-expression-recognition" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/51683.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">305</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3122</span> Facial Expression Phoenix (FePh): An Annotated Sequenced Dataset for Facial and Emotion-Specified Expressions in Sign Language</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Marie%20Alaghband">Marie Alaghband</a>, <a href="https://publications.waset.org/abstracts/search?q=Niloofar%20Yousefi"> Niloofar Yousefi</a>, <a href="https://publications.waset.org/abstracts/search?q=Ivan%20Garibay"> Ivan Garibay</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Facial expressions are important parts of both gesture and sign language recognition systems. Despite the recent advances in both fields, annotated facial expression datasets in the context of sign language are still scarce resources. In this manuscript, we introduce an annotated sequenced facial expression dataset in the context of sign language, comprising over 3000 facial images extracted from the daily news and weather forecast of the public tv-station PHOENIX. Unlike the majority of currently existing facial expression datasets, FePh provides sequenced semi-blurry facial images with different head poses, orientations, and movements. In addition, in the majority of images, identities are mouthing the words, which makes the data more challenging. To annotate this dataset we consider primary, secondary, and tertiary dyads of seven basic emotions of "sad", "surprise", "fear", "angry", "neutral", "disgust", and "happy". We also considered the "None" class if the image’s facial expression could not be described by any of the aforementioned emotions. Although we provide FePh as a facial expression dataset of signers in sign language, it has a wider application in gesture recognition and Human Computer Interaction (HCI) systems. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=annotated%20facial%20expression%20dataset" title="annotated facial expression dataset">annotated facial expression dataset</a>, <a href="https://publications.waset.org/abstracts/search?q=gesture%20recognition" title=" gesture recognition"> gesture recognition</a>, <a href="https://publications.waset.org/abstracts/search?q=sequenced%20facial%20expression%20dataset" title=" sequenced facial expression dataset"> sequenced facial expression dataset</a>, <a href="https://publications.waset.org/abstracts/search?q=sign%20language%20recognition" title=" sign language recognition"> sign language recognition</a> </p> <a href="https://publications.waset.org/abstracts/129717/facial-expression-phoenix-feph-an-annotated-sequenced-dataset-for-facial-and-emotion-specified-expressions-in-sign-language" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/129717.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">159</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3121</span> Facial Pose Classification Using Hilbert Space Filling Curve and Multidimensional Scaling</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Mekam%C4%B1%20Hayet">Mekamı Hayet</a>, <a href="https://publications.waset.org/abstracts/search?q=Bounoua%20Nacer"> Bounoua Nacer</a>, <a href="https://publications.waset.org/abstracts/search?q=Benabderrahmane%20Sidahmed"> Benabderrahmane Sidahmed</a>, <a href="https://publications.waset.org/abstracts/search?q=Taleb%20Ahmed"> Taleb Ahmed</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Pose estimation is an important task in computer vision. Though the majority of the existing solutions provide good accuracy results, they are often overly complex and computationally expensive. In this perspective, we propose the use of dimensionality reduction techniques to address the problem of facial pose estimation. Firstly, a face image is converted into one-dimensional time series using Hilbert space filling curve, then the approach converts these time series data to a symbolic representation. Furthermore, a distance matrix is calculated between symbolic series of an input learning dataset of images, to generate classifiers of frontal vs. profile face pose. The proposed method is evaluated with three public datasets. Experimental results have shown that our approach is able to achieve a correct classification rate exceeding 97% with K-NN algorithm. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=machine%20learning" title="machine learning">machine learning</a>, <a href="https://publications.waset.org/abstracts/search?q=pattern%20recognition" title=" pattern recognition"> pattern recognition</a>, <a href="https://publications.waset.org/abstracts/search?q=facial%20pose%20classification" title=" facial pose classification"> facial pose classification</a>, <a href="https://publications.waset.org/abstracts/search?q=time%20series" title=" time series "> time series </a> </p> <a href="https://publications.waset.org/abstracts/33324/facial-pose-classification-using-hilbert-space-filling-curve-and-multidimensional-scaling" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/33324.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">350</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3120</span> Emotion Recognition Using Artificial Intelligence</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Rahul%20Mohite">Rahul Mohite</a>, <a href="https://publications.waset.org/abstracts/search?q=Lahcen%20Ouarbya"> Lahcen Ouarbya</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This paper focuses on the interplay between humans and computer systems and the ability of these systems to understand and respond to human emotions, including non-verbal communication. Current emotion recognition systems are based solely on either facial or verbal expressions. The limitation of these systems is that it requires large training data sets. The paper proposes a system for recognizing human emotions that combines both speech and emotion recognition. The system utilizes advanced techniques such as deep learning and image recognition to identify facial expressions and comprehend emotions. The results show that the proposed system, based on the combination of facial expression and speech, outperforms existing ones, which are based solely either on facial or verbal expressions. The proposed system detects human emotion with an accuracy of 86%, whereas the existing systems have an accuracy of 70% using verbal expression only and 76% using facial expression only. In this paper, the increasing significance and demand for facial recognition technology in emotion recognition are also discussed. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=facial%20reputation" title="facial reputation">facial reputation</a>, <a href="https://publications.waset.org/abstracts/search?q=expression%20reputation" title=" expression reputation"> expression reputation</a>, <a href="https://publications.waset.org/abstracts/search?q=deep%20gaining%20knowledge%20of" title=" deep gaining knowledge of"> deep gaining knowledge of</a>, <a href="https://publications.waset.org/abstracts/search?q=photo%20reputation" title=" photo reputation"> photo reputation</a>, <a href="https://publications.waset.org/abstracts/search?q=facial%20technology" title=" facial technology"> facial technology</a>, <a href="https://publications.waset.org/abstracts/search?q=sign%20processing" title=" sign processing"> sign processing</a>, <a href="https://publications.waset.org/abstracts/search?q=photo%20type" title=" photo type"> photo type</a> </p> <a href="https://publications.waset.org/abstracts/162386/emotion-recognition-using-artificial-intelligence" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/162386.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">121</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3119</span> Use of Computer and Machine Learning in Facial Recognition</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Neha%20Singh">Neha Singh</a>, <a href="https://publications.waset.org/abstracts/search?q=Ananya%20Arora"> Ananya Arora</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Facial expression measurement plays a crucial role in the identification of emotion. Facial expression plays a key role in psychophysiology, neural bases, and emotional disorder, to name a few. The Facial Action Coding System (FACS) has proven to be the most efficient and widely used of the various systems used to describe facial expressions. Coders can manually code facial expressions with FACS and, by viewing video-recorded facial behaviour at a specified frame rate and slow motion, can decompose into action units (AUs). Action units are the most minor visually discriminable facial movements. FACS explicitly differentiates between facial actions and inferences about what the actions mean. Action units are the fundamental unit of FACS methodology. It is regarded as the standard measure for facial behaviour and finds its application in various fields of study beyond emotion science. These include facial neuromuscular disorders, neuroscience, computer vision, computer graphics and animation, and face encoding for digital processing. This paper discusses the conceptual basis for FACS, a numerical listing of discrete facial movements identified by the system, the system's psychometric evaluation, and the software's recommended training requirements. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=facial%20action" title="facial action">facial action</a>, <a href="https://publications.waset.org/abstracts/search?q=action%20units" title=" action units"> action units</a>, <a href="https://publications.waset.org/abstracts/search?q=coding" title=" coding"> coding</a>, <a href="https://publications.waset.org/abstracts/search?q=machine%20learning" title=" machine learning"> machine learning</a> </p> <a href="https://publications.waset.org/abstracts/161142/use-of-computer-and-machine-learning-in-facial-recognition" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/161142.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">106</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3118</span> Emotion Recognition with Occlusions Based on Facial Expression Reconstruction and Weber Local Descriptor</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Jadisha%20Cornejo">Jadisha Cornejo</a>, <a href="https://publications.waset.org/abstracts/search?q=Helio%20Pedrini"> Helio Pedrini</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Recognition of emotions based on facial expressions has received increasing attention from the scientific community over the last years. Several fields of applications can benefit from facial emotion recognition, such as behavior prediction, interpersonal relations, human-computer interactions, recommendation systems. In this work, we develop and analyze an emotion recognition framework based on facial expressions robust to occlusions through the Weber Local Descriptor (WLD). Initially, the occluded facial expressions are reconstructed following an extension approach of Robust Principal Component Analysis (RPCA). Then, WLD features are extracted from the facial expression representation, as well as Local Binary Patterns (LBP) and Histogram of Oriented Gradients (HOG). The feature vector space is reduced using Principal Component Analysis (PCA) and Linear Discriminant Analysis (LDA). Finally, K-Nearest Neighbor (K-NN) and Support Vector Machine (SVM) classifiers are used to recognize the expressions. Experimental results on three public datasets demonstrated that the WLD representation achieved competitive accuracy rates for occluded and non-occluded facial expressions compared to other approaches available in the literature. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=emotion%20recognition" title="emotion recognition">emotion recognition</a>, <a href="https://publications.waset.org/abstracts/search?q=facial%20expression" title=" facial expression"> facial expression</a>, <a href="https://publications.waset.org/abstracts/search?q=occlusion" title=" occlusion"> occlusion</a>, <a href="https://publications.waset.org/abstracts/search?q=fiducial%20landmarks" title=" fiducial landmarks"> fiducial landmarks</a> </p> <a href="https://publications.waset.org/abstracts/90510/emotion-recognition-with-occlusions-based-on-facial-expression-reconstruction-and-weber-local-descriptor" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/90510.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">182</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3117</span> Facial Expression Recognition Using Sparse Gaussian Conditional Random Field</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Mohammadamin%20Abbasnejad">Mohammadamin Abbasnejad</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The analysis of expression and facial Action Units (AUs) detection are very important tasks in fields of computer vision and Human Computer Interaction (HCI) due to the wide range of applications in human life. Many works have been done during the past few years which has their own advantages and disadvantages. In this work, we present a new model based on Gaussian Conditional Random Field. We solve our objective problem using ADMM and we show how well the proposed model works. We train and test our work on two facial expression datasets, CK+, and RU-FACS. Experimental evaluation shows that our proposed approach outperform state of the art expression recognition. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=Gaussian%20Conditional%20Random%20Field" title="Gaussian Conditional Random Field">Gaussian Conditional Random Field</a>, <a href="https://publications.waset.org/abstracts/search?q=ADMM" title=" ADMM"> ADMM</a>, <a href="https://publications.waset.org/abstracts/search?q=convergence" title=" convergence"> convergence</a>, <a href="https://publications.waset.org/abstracts/search?q=gradient%20descent" title=" gradient descent"> gradient descent</a> </p> <a href="https://publications.waset.org/abstracts/26245/facial-expression-recognition-using-sparse-gaussian-conditional-random-field" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/26245.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">356</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3116</span> Improved Feature Extraction Technique for Handling Occlusion in Automatic Facial Expression Recognition</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Khadijat%20T.%20Bamigbade">Khadijat T. 
3116. Improved Feature Extraction Technique for Handling Occlusion in Automatic Facial Expression Recognition
Authors: Khadijat T. Bamigbade, Olufade F. W. Onifade
Abstract: The field of automatic facial expression analysis has been an active research area in the last two decades. Its vast applicability in various domains has drawn much attention to developing techniques and datasets that mirror real-life scenarios. Many techniques, such as Local Binary Patterns and their variants (CLBP, LBP-TOP) and, lately, deep learning techniques, have been used for facial expression recognition. However, the problem of occlusion has not been sufficiently handled, making their results not applicable to real-life situations. This paper develops a simple yet highly efficient method, tagged Local Binary Pattern-Histogram of Gradients (LBP-HOG), with occlusion detection in the face image, using a multi-class SVM for Action Unit and, in turn, expression recognition. Our method was evaluated on three publicly available datasets: JAFFE, CK, and SFEW. Experimental results showed that our approach performed considerably well when compared with state-of-the-art algorithms and gave insight into occlusion detection as a key step to handling expressions in the wild.
Keywords: automatic facial expression analysis, local binary pattern, LBP-HOG, occlusion detection
Procedia: https://publications.waset.org/abstracts/105048/improved-feature-extraction-technique-for-handling-occlusion-in-automatic-facial-expression-recognition | PDF: https://publications.waset.org/abstracts/105048.pdf | Downloads: 169
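A compact sketch of an LBP-HOG feature vector feeding a multi-class linear SVM, in the spirit of abstract 3116. It uses scikit-image descriptors, omits the occlusion-detection step, and the image sizes and descriptor parameters are assumptions.

```python
import numpy as np
from skimage.feature import local_binary_pattern, hog
from sklearn.svm import LinearSVC

def lbp_hog_features(img, P=8, R=1):
    # LBP histogram concatenated with HOG descriptors for one grayscale face crop.
    lbp = local_binary_pattern(img, P, R, method="uniform")
    lbp_hist, _ = np.histogram(lbp, bins=P + 2, range=(0, P + 2), density=True)
    hog_vec = hog(img, orientations=9, pixels_per_cell=(8, 8), cells_per_block=(2, 2))
    return np.concatenate([lbp_hist, hog_vec])

# Hypothetical 64x64 grayscale faces with expression labels.
faces = np.random.rand(60, 64, 64)
labels = np.random.randint(0, 7, size=60)
X = np.array([lbp_hog_features(f) for f in faces])

clf = LinearSVC().fit(X, labels)  # one-vs-rest multi-class SVM
print(clf.score(X, labels))
```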
3115. Curvelet Features with Mouth and Face Edge Ratios for Facial Expression Identification
Authors: S. Kherchaoui, A. Houacine
Abstract: This paper presents a facial expression recognition system. It performs identification and classification of the seven basic expressions: happy, surprise, fear, disgust, sadness, anger, and neutral states. It consists of three main parts. The first one is the detection of a face and the corresponding facial features to extract the most expressive portion of the face, followed by a normalization of the region of interest. Then the calculation of curvelet coefficients is performed, with dimensionality reduction through principal component analysis. The resulting coefficients are combined with two ratios, the mouth ratio and the face edge ratio, to constitute the whole feature vector. The third step is the classification of the emotional state using the SVM method in the feature space.
Keywords: facial expression identification, curvelet coefficient, support vector machine (SVM), recognition system
Procedia: https://publications.waset.org/abstracts/10311/curvelet-features-with-mouth-and-face-edge-ratios-for-facial-expression-identification | PDF: https://publications.waset.org/abstracts/10311.pdf | Downloads: 232
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=facial%20expression%20recognittion" title="facial expression recognittion">facial expression recognittion</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20preprocessing" title=" image preprocessing"> image preprocessing</a>, <a href="https://publications.waset.org/abstracts/search?q=deep%20learning" title=" deep learning"> deep learning</a>, <a href="https://publications.waset.org/abstracts/search?q=CNN" title=" CNN"> CNN</a> </p> <a href="https://publications.waset.org/abstracts/130679/improving-the-performance-of-deep-learning-in-facial-emotion-recognition-with-image-sharpening" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/130679.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">143</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3113</span> Classifying Facial Expressions Based on a Motion Local Appearance Approach</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Fabiola%20M.%20Villalobos-Castaldi">Fabiola M. Villalobos-Castaldi</a>, <a href="https://publications.waset.org/abstracts/search?q=Nicol%C3%A1s%20C.%20Kemper"> Nicolás C. Kemper</a>, <a href="https://publications.waset.org/abstracts/search?q=Esther%20Rojas-Krugger"> Esther Rojas-Krugger</a>, <a href="https://publications.waset.org/abstracts/search?q=Laura%20G.%20Ram%C3%ADrez-S%C3%A1nchez"> Laura G. Ramírez-Sánchez</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This paper presents the classification results about exploring the combination of a motion based approach with a local appearance method to describe the facial motion caused by the muscle contractions and expansions that are presented in facial expressions. The proposed feature extraction method take advantage of the knowledge related to which parts of the face reflects the highest deformations, so we selected 4 specific facial regions at which the appearance descriptor were applied. The most common used approaches for feature extraction are the holistic and the local strategies. In this work we present the results of using a local appearance approach estimating the correlation coefficient to the 4 corresponding landmark-localized facial templates of the expression face related to the neutral face. The results let us to probe how the proposed motion estimation scheme based on the local appearance correlation computation can simply and intuitively measure the motion parameters for some of the most relevant facial regions and how these parameters can be used to recognize facial expressions automatically. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=facial%20expression%20recognition%20system" title="facial expression recognition system">facial expression recognition system</a>, <a href="https://publications.waset.org/abstracts/search?q=feature%20extraction" title=" feature extraction"> feature extraction</a>, <a href="https://publications.waset.org/abstracts/search?q=local-appearance%20method" title=" local-appearance method"> local-appearance method</a>, <a href="https://publications.waset.org/abstracts/search?q=motion-based%20approach" title=" motion-based approach"> motion-based approach</a> </p> <a href="https://publications.waset.org/abstracts/27632/classifying-facial-expressions-based-on-a-motion-local-appearance-approach" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/27632.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">413</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3112</span> Comparing Emotion Recognition from Voice and Facial Data Using Time Invariant Features</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Vesna%20Kirandziska">Vesna Kirandziska</a>, <a href="https://publications.waset.org/abstracts/search?q=Nevena%20Ackovska"> Nevena Ackovska</a>, <a href="https://publications.waset.org/abstracts/search?q=Ana%20Madevska%20Bogdanova"> Ana Madevska Bogdanova</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The problem of emotion recognition is a challenging problem. It is still an open problem from the aspect of both intelligent systems and psychology. In this paper, both voice features and facial features are used for building an emotion recognition system. A Support Vector Machine classifiers are built by using raw data from video recordings. In this paper, the results obtained for the emotion recognition are given, and a discussion about the validity and the expressiveness of different emotions is presented. A comparison between the classifiers build from facial data only, voice data only and from the combination of both data is made here. The need for a better combination of the information from facial expression and voice data is argued. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=emotion%20recognition" title="emotion recognition">emotion recognition</a>, <a href="https://publications.waset.org/abstracts/search?q=facial%20recognition" title=" facial recognition"> facial recognition</a>, <a href="https://publications.waset.org/abstracts/search?q=signal%20processing" title=" signal processing"> signal processing</a>, <a href="https://publications.waset.org/abstracts/search?q=machine%20learning" title=" machine learning"> machine learning</a> </p> <a href="https://publications.waset.org/abstracts/42384/comparing-emotion-recognition-from-voice-and-facial-data-using-time-invariant-features" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/42384.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">316</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3111</span> Application of Vector Representation for Revealing the Richness of Meaning of Facial Expressions</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Carmel%20Sofer">Carmel Sofer</a>, <a href="https://publications.waset.org/abstracts/search?q=Dan%20Vilenchik"> Dan Vilenchik</a>, <a href="https://publications.waset.org/abstracts/search?q=Ron%20Dotsch"> Ron Dotsch</a>, <a href="https://publications.waset.org/abstracts/search?q=Galia%20Avidan"> Galia Avidan</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Studies investigating emotional facial expressions typically reveal consensus among observes regarding the meaning of basic expressions, whose number ranges between 6 to 15 emotional states. Given this limited number of discrete expressions, how is it that the human vocabulary of emotional states is so rich? The present study argues that perceivers use sequences of these discrete expressions as the basis for a much richer vocabulary of emotional states. Such mechanisms, in which a relatively small number of basic components is expanded to a much larger number of possible combinations of meanings, exist in other human communications modalities, such as spoken language and music. In these modalities, letters and notes, which serve as basic components of spoken language and music respectively, are temporally linked, resulting in the richness of expressions. In the current study, in each trial participants were presented with sequences of two images containing facial expression in different combinations sampled out of the eight static basic expressions (total 64; 8X8). In each trial, using single word participants were required to judge the 'state of mind' portrayed by the person whose face was presented. Utilizing word embedding methods (Global Vectors for Word Representation), employed in the field of Natural Language Processing, and relying on machine learning computational methods, it was found that the perceived meanings of the sequences of facial expressions were a weighted average of the single expressions comprising them, resulting in 22 new emotional states, in addition to the eight, classic basic expressions. 
3111. Application of Vector Representation for Revealing the Richness of Meaning of Facial Expressions
Authors: Carmel Sofer, Dan Vilenchik, Ron Dotsch, Galia Avidan
Abstract: Studies investigating emotional facial expressions typically reveal consensus among observers regarding the meaning of basic expressions, whose number ranges between 6 and 15 emotional states. Given this limited number of discrete expressions, how is it that the human vocabulary of emotional states is so rich? The present study argues that perceivers use sequences of these discrete expressions as the basis for a much richer vocabulary of emotional states. Such mechanisms, in which a relatively small number of basic components is expanded into a much larger number of possible combinations of meanings, exist in other human communication modalities, such as spoken language and music. In these modalities, letters and notes, which serve as the basic components of spoken language and music respectively, are temporally linked, resulting in the richness of expression. In the current study, in each trial participants were presented with sequences of two images containing facial expressions in different combinations sampled from the eight static basic expressions (64 in total; 8x8). In each trial, using a single word, participants were required to judge the 'state of mind' portrayed by the person whose face was presented. Utilizing word embedding methods (Global Vectors for Word Representation), employed in the field of Natural Language Processing, and relying on machine learning computational methods, it was found that the perceived meanings of the sequences of facial expressions were a weighted average of the single expressions comprising them, resulting in 22 new emotional states in addition to the eight classic basic expressions. An interaction between the first and the second expression in each sequence indicated that every single facial expression modulated the effect of the other facial expression, thus leading to a different interpretation ascribed to the sequence as a whole. These findings suggest that the vocabulary of emotional states conveyed by facial expressions is not restricted to the (small) number of discrete facial expressions. Rather, the vocabulary is rich, as it results from combinations of these expressions. In addition, the present research suggests that using word embedding in social perception studies can be a powerful, accurate and efficient tool to capture explicit and implicit perceptions and intentions. Acknowledgment: The study was supported by a grant from the Ministry of Defense in Israel to GA and CS. CS is also supported by the ABC initiative at Ben-Gurion University of the Negev.
Keywords: GloVe, face perception, facial expression perception, facial expression production, machine learning, word embedding, word2vec
Procedia: https://publications.waset.org/abstracts/83508/application-of-vector-representation-for-revealing-the-richness-of-meaning-of-facial-expressions | PDF: https://publications.waset.org/abstracts/83508.pdf | Downloads: 176
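A rough illustration of the word-embedding step underlying abstract 3111: average the GloVe vectors of two expression labels and look up the nearest words by cosine similarity. The GloVe file name, the 50/50 weighting, and the nearest-word lookup are assumptions for illustration only, not the authors' analysis.

```python
import numpy as np

def load_glove(path):
    # Parse a standard GloVe text file: one token per line followed by its vector.
    vecs = {}
    with open(path, encoding="utf-8") as f:
        for line in f:
            parts = line.rstrip().split(" ")
            vecs[parts[0]] = np.array(parts[1:], dtype=np.float32)
    return vecs

def blended_meaning(vecs, word_a, word_b, w=0.5, top_k=5):
    # Weighted average of two expression labels, then nearest words by cosine similarity.
    target = w * vecs[word_a] + (1.0 - w) * vecs[word_b]
    words = list(vecs)
    M = np.stack([vecs[t] for t in words])
    sims = M @ target / (np.linalg.norm(M, axis=1) * np.linalg.norm(target) + 1e-9)
    return [words[i] for i in np.argsort(-sims)[:top_k]]

# Hypothetical usage with a pre-trained GloVe file (the path is a placeholder).
vecs = load_glove("glove.6B.50d.txt")
print(blended_meaning(vecs, "surprise", "happy"))
```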
3110. Image Processing Techniques for Surveillance in Outdoor Environment
Authors: Jayanth C., Anirudh Sai Yetikuri, Kavitha S. N.
Abstract: This paper explores the development and application of computer vision and machine learning techniques for real-time pose detection, facial recognition, and number plate extraction. Utilizing MediaPipe for pose estimation, the research presents methods for detecting hand raises and ducking postures through real-time video analysis. Complementarily, facial recognition is employed to compare and verify individual identities using the face recognition library. Additionally, the paper demonstrates a robust approach for extracting and storing vehicle number plates from images, integrating Optical Character Recognition (OCR) with a database management system. The study highlights the effectiveness and versatility of these technologies in practical scenarios, including security and surveillance applications. The findings underscore the potential of combining computer vision techniques to address diverse challenges and enhance automated systems for both individual and vehicular identification. This research contributes to the fields of computer vision and machine learning by providing scalable solutions and demonstrating their applicability in real-world contexts.
Keywords: computer vision, pose detection, facial recognition, number plate extraction, machine learning, real-time analysis, OCR, database management
Procedia: https://publications.waset.org/abstracts/191153/image-processing-techniques-for-surveillance-in-outdoor-environment | PDF: https://publications.waset.org/abstracts/191153.pdf | Downloads: 26
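A minimal hand-raise detector in the spirit of abstract 3110, using the legacy MediaPipe Solutions pose API; the wrist-above-shoulder rule and the webcam source are assumed for illustration and are not the paper's method.

```python
import cv2
import mediapipe as mp

mp_pose = mp.solutions.pose

def hand_raised(landmarks):
    # Approximate a raised hand as a wrist above its shoulder
    # (image y grows downward, so "above" means a smaller y value).
    lw = landmarks[mp_pose.PoseLandmark.LEFT_WRIST]
    rw = landmarks[mp_pose.PoseLandmark.RIGHT_WRIST]
    ls = landmarks[mp_pose.PoseLandmark.LEFT_SHOULDER]
    rs = landmarks[mp_pose.PoseLandmark.RIGHT_SHOULDER]
    return lw.y < ls.y or rw.y < rs.y

cap = cv2.VideoCapture(0)  # hypothetical webcam source
with mp_pose.Pose() as pose:
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        results = pose.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        if results.pose_landmarks:
            print("hand raised:", hand_raised(results.pose_landmarks.landmark))
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break
cap.release()
```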
3109. A Novel Method for Face Detection
Authors: H. Abas Nejad, A. R. Teymoori
Abstract: Facial expression recognition is one of the open problems in computer vision. Robust neutral face recognition in real time is a major challenge for various supervised-learning-based facial expression recognition methods. This is due to the fact that supervised methods cannot accommodate all appearance variability across faces with respect to race, pose, lighting, facial biases, etc., in the limited amount of training data. Moreover, processing each and every frame to classify emotions is not required, as the user stays neutral for the majority of the time in usual applications like video chat or photo album/web browsing. Detecting the neutral state at an early stage, thereby bypassing those frames from emotion classification, would save computational power. In this work, we propose a light-weight neutral vs. emotion classification engine, which acts as a preprocessor to traditional supervised emotion classification approaches. It dynamically learns neutral appearance at Key Emotion (KE) points using a textural statistical model constructed from a set of reference neutral frames for each user. The proposed method is made robust to various types of user head motion by accounting for affine distortions based on a textural statistical model. Robustness to dynamic shifts of KE points is achieved by evaluating the similarities on a subset of neighborhood patches around each KE point, using prior information regarding the directionality of specific facial action units acting on the respective KE point. The proposed method, as a result, improves ER accuracy and simultaneously reduces the computational complexity of the ER system, as validated on multiple databases.
Keywords: neutral vs. emotion classification, Constrained Local Model, Procrustes analysis, Local Binary Pattern Histogram, statistical model
Procedia: https://publications.waset.org/abstracts/36051/a-novel-method-for-face-detection | PDF: https://publications.waset.org/abstracts/36051.pdf | Downloads: 338
3108. Analysis and Detection of Facial Expressions in Autism Spectrum Disorder People Using Machine Learning
Authors: Muhammad Maisam Abbas, Salman Tariq, Usama Riaz, Muhammad Tanveer, Humaira Abdul Ghafoor
Abstract: Autism Spectrum Disorder (ASD) refers to a developmental disorder that impairs an individual's communication and interaction ability. Individuals find it difficult to read facial expressions while communicating or interacting. Facial Expression Recognition (FER) is a unique method of classifying basic human expressions, i.e., happiness, fear, surprise, sadness, disgust, neutral, and anger, through static and dynamic sources. This paper conducts a comprehensive comparison and proposes an optimal method for a continuing research project: a system that can assist people who have Autism Spectrum Disorder (ASD) in recognizing facial expressions. The comparison has been conducted on three supervised learning algorithms: EigenFace, FisherFace, and LBPH. The JAFFE, CK+, and TFEID (I&II) datasets have been used to train and test the algorithms. The results were then evaluated based on variance, standard deviation, and accuracy. The experiments showed that FisherFace has the highest accuracy for all datasets and is considered the best algorithm to be implemented in our system.
Keywords: autism spectrum disorder, ASD, EigenFace, facial expression recognition, FisherFace, local binary pattern histogram, LBPH
Procedia: https://publications.waset.org/abstracts/129718/analysis-and-detection-of-facial-expressions-in-autism-spectrum-disorder-people-using-machine-learning | PDF: https://publications.waset.org/abstracts/129718.pdf | Downloads: 174
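The three recognizers compared in abstract 3108 are all available through OpenCV's contrib face module. A minimal sketch of training and querying them follows; the synthetic images, labels, and crop size are stand-ins, and in practice each recognizer would be trained and evaluated per dataset as the abstract describes.

```python
import cv2
import numpy as np

# Requires opencv-contrib-python for the cv2.face module.
recognizers = {
    "EigenFace": cv2.face.EigenFaceRecognizer_create(),
    "FisherFace": cv2.face.FisherFaceRecognizer_create(),
    "LBPH": cv2.face.LBPHFaceRecognizer_create(),
}

# Hypothetical data: equally sized grayscale face crops labeled by expression.
train_imgs = [np.random.randint(0, 256, (64, 64), dtype=np.uint8) for _ in range(30)]
train_labels = np.random.randint(0, 7, size=30)
test_img = np.random.randint(0, 256, (64, 64), dtype=np.uint8)

for name, rec in recognizers.items():
    rec.train(train_imgs, np.asarray(train_labels))
    label, confidence = rec.predict(test_img)
    print(f"{name}: predicted expression {label} (confidence {confidence:.1f})")
```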
3107. Data Collection Techniques for Robotics to Identify the Facial Expressions of Traumatic Brain Injured Patients
Authors: Chaudhary Muhammad Aqdus Ilyas, Matthias Rehm, Kamal Nasrollahi, Thomas B. Moeslund
Abstract: This paper presents an investigation of data collection procedures associated with robots when placed with traumatic brain injured (TBI) patients for rehabilitation purposes through facial expression and mood analysis. Rehabilitation after TBI is very crucial due to the nature of the injury and the variation in recovery time. It is advantageous to analyze these emotional signals in a contactless manner, due to the non-supportive behavior of patients, limited muscle movements, and an increase in negative emotional expressions. This work aims at the development of a framework where robots can recognize TBI emotions through facial expressions in order to perform rehabilitation tasks through physical, cognitive or interactive activities. The results of these studies show that, with customized data collection strategies, the proposed framework identifies facial and emotional expressions more accurately, which can be utilized to enhance recovery treatment and social interaction in a robotic context.
Keywords: computer vision, convolutional neural network-long short-term memory network (CNN-LSTM), facial expression and mood recognition, multimodal (RGB-thermal) analysis, rehabilitation, robots, traumatic brain injured patients
Procedia: https://publications.waset.org/abstracts/98560/data-collection-techniques-for-robotics-to-identify-the-facial-expressions-of-traumatic-brain-injured-patients | PDF: https://publications.waset.org/abstracts/98560.pdf | Downloads: 155
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=face%20recognition" title="face recognition">face recognition</a>, <a href="https://publications.waset.org/abstracts/search?q=emotion%20recognition" title=" emotion recognition"> emotion recognition</a>, <a href="https://publications.waset.org/abstracts/search?q=deep%20learning" title=" deep learning"> deep learning</a>, <a href="https://publications.waset.org/abstracts/search?q=CNN" title=" CNN"> CNN</a> </p> <a href="https://publications.waset.org/abstracts/152635/emotion-recognition-in-video-and-images-in-the-wild" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/152635.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">187</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3105</span> Application of Infrared Thermal Imaging, Eye Tracking and Behavioral Analysis for Deception Detection</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Petra%20Hyp%C5%A1ov%C3%A1">Petra Hypšová</a>, <a href="https://publications.waset.org/abstracts/search?q=Martin%20Seitl"> Martin Seitl</a> </p> <p class="card-text"><strong>Abstract:</strong></p> One of the challenges of forensic psychology is to detect deception during a face-to-face interview. In addition to the classical approaches of monitoring the utterance and its components, detection is also sought by observing behavioral and physiological changes that occur as a result of the increased emotional and cognitive load caused by the production of distorted information. Typical are changes in facial temperature, eye movements and their fixation, pupil dilation, emotional micro-expression, heart rate and its variability. Expanding technological capabilities have opened the space to detect these psychophysiological changes and behavioral manifestations through non-contact technologies that do not interfere with face-to-face interaction. Non-contact deception detection methodology is still in development, and there is a lack of studies that combine multiple non-contact technologies to investigate their accuracy, as well as studies that show how different types of lies produced by different interviewers affect physiological and behavioral changes. The main objective of this study is to apply a specific non-contact technology for deception detection. The next objective is to investigate scenarios in which non-contact deception detection is possible. A series of psychophysiological experiments using infrared thermal imaging, eye tracking and behavioral analysis with FaceReader 9.0 software was used to achieve our goals. In the laboratory experiment, 16 adults (12 women, 4 men) between 18 and 35 years of age (SD = 4.42) were instructed to produce alternating prepared and spontaneous truths and lies. The baseline of each proband was also measured, and its results were compared to the experimental conditions. Because the personality of the examiner (particularly gender and facial appearance) to whom the subject is lying can influence physiological and behavioral changes, the experiment included four different interviewers. 
To follow standardized procedures, the interviewer was represented by a photograph of a face that met the required parameters in terms of gender and facial appearance (i.e., interviewer likability/antipathy). The subject provided all information to the simulated interviewer. During follow-up analyses, facial temperature (main ROIs: forehead, cheeks, tip of the nose, chin, and corners of the eyes), heart rate, emotional expression, intensity and fixation of eye movements, and pupil dilation were observed. The results showed that the variables studied varied with respect to the production of prepared versus spontaneous truths and lies, as well as with the simulated interviewer. The results also supported the assumption of variability in physiological and behavioral values between the subject's resting state, the so-called baseline, and the production of prepared and spontaneous truths and lies. The series of psychophysiological experiments provided evidence of variability in the areas of interest when truths and lies were produced for different interviewers. The combination of technologies used also enabled a comprehensive assessment of the physiological and behavioral changes associated with false and true statements. The study presented here opens space for further research in the field of lie detection with non-contact technologies. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=emotional%20expression%20decoding" title="emotional expression decoding">emotional expression decoding</a>, <a href="https://publications.waset.org/abstracts/search?q=eye-tracking" title=" eye-tracking"> eye-tracking</a>, <a href="https://publications.waset.org/abstracts/search?q=functional%20infrared%20thermal%20imaging" title=" functional infrared thermal imaging"> functional infrared thermal imaging</a>, <a href="https://publications.waset.org/abstracts/search?q=non-contact%20deception%20detection" title=" non-contact deception detection"> non-contact deception detection</a>, <a href="https://publications.waset.org/abstracts/search?q=psychophysiological%20experiment" title=" psychophysiological experiment"> psychophysiological experiment</a> </p> <a href="https://publications.waset.org/abstracts/151344/application-of-infrared-thermal-imaging-eye-tracking-and-behavioral-analysis-for-deception-detection" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/151344.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">99</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3104</span> Facial Emotion Recognition Using Deep Learning</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Ashutosh%20Mishra">Ashutosh Mishra</a>, <a href="https://publications.waset.org/abstracts/search?q=Nikhil%20Goyal"> Nikhil Goyal</a> </p> <p class="card-text"><strong>Abstract:</strong></p> A 3D facial emotion recognition model based on deep learning is proposed in this paper. The architecture employs two convolution layers and a pooling layer; pooling is applied after the convolution stage. 
The probabilities for the various classes of human faces are calculated using the sigmoid activation function. To verify the efficiency of the deep learning-based system, a set of face images from the Kaggle dataset is used to assess the accuracy of the model. The model's accuracy is about 65 percent, which is lower than that of other facial expression recognition techniques, despite the gains in representation precision afforded by the nonlinearity of deep image representations. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=facial%20recognition" title="facial recognition">facial recognition</a>, <a href="https://publications.waset.org/abstracts/search?q=computational%20intelligence" title=" computational intelligence"> computational intelligence</a>, <a href="https://publications.waset.org/abstracts/search?q=convolutional%20neural%20network" title=" convolutional neural network"> convolutional neural network</a>, <a href="https://publications.waset.org/abstracts/search?q=depth%20map" title=" depth map"> depth map</a> </p> <a href="https://publications.waset.org/abstracts/139253/facial-emotion-recognition-using-deep-learning" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/139253.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">231</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3103</span> Individualized Emotion Recognition Through Dual-Representations and Ground-Established Ground Truth</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Valentina%20Zhang">Valentina Zhang</a> </p> <p class="card-text"><strong>Abstract:</strong></p> While facial expression is a complex and individualized behavior, all facial emotion recognition (FER) systems known to us rely on a single facial representation and are trained on universal data. We conjecture that (i) different facial representations can provide different, sometimes complementary views of emotions; (ii) when employed collectively in a discussion group setting, they enable more accurate emotion reading, which is highly desirable in autism care and other error-sensitive application contexts. In this paper, we first study FER using pixel-based DL vs. semantics-based DL in the context of deepfake videos. Our experiment indicates that while the semantics-trained model performs better with articulated facial feature changes, the pixel-trained model outperforms on subtle or rare facial expressions. Armed with these findings, we have constructed an adaptive FER system learning from both types of models for dyadic or small interacting groups and further leveraging the synthesized group emotions as the ground truth for individualized FER training. Using a collection of group conversation videos, we demonstrate that FER accuracy and personalization can benefit from such an approach. 
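<p class="card-text">A minimal sketch of the late-fusion idea described above: two emotion classifiers (one pixel-based, one semantics-based) each output a probability distribution, the distributions are blended, and a group-level label is synthesized by averaging members' fused distributions. The equal weighting and the argmax-based group label are assumptions for illustration, not the paper's exact rules.</p>
<pre><code class="language-python">
import numpy as np

def fuse_predictions(p_pixel, p_semantic, w_pixel=0.5):
    """Weighted late fusion of two classifiers' softmax outputs (shape: n_classes)."""
    p_pixel, p_semantic = np.asarray(p_pixel, float), np.asarray(p_semantic, float)
    fused = w_pixel * p_pixel + (1.0 - w_pixel) * p_semantic
    return fused / fused.sum()

def group_emotion_label(member_distributions):
    """Synthesize a group-level label by averaging members' fused distributions;
    this label can then serve as a training target for individualized models."""
    return int(np.mean(np.stack(member_distributions), axis=0).argmax())

# usage sketch with made-up distributions over 6 emotion classes
p1 = fuse_predictions([0.1, 0.5, 0.1, 0.1, 0.1, 0.1], [0.2, 0.4, 0.1, 0.1, 0.1, 0.1])
p2 = fuse_predictions([0.3, 0.3, 0.1, 0.1, 0.1, 0.1], [0.1, 0.6, 0.1, 0.05, 0.05, 0.1])
print(group_emotion_label([p1, p2]))   # index of the synthesized group emotion
</code></pre>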
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=neurodivergence%20care" title="neurodivergence care">neurodivergence care</a>, <a href="https://publications.waset.org/abstracts/search?q=facial%20emotion%20recognition" title=" facial emotion recognition"> facial emotion recognition</a>, <a href="https://publications.waset.org/abstracts/search?q=deep%20learning" title=" deep learning"> deep learning</a>, <a href="https://publications.waset.org/abstracts/search?q=ground%20truth%20for%20supervised%20learning" title=" ground truth for supervised learning"> ground truth for supervised learning</a> </p> <a href="https://publications.waset.org/abstracts/144009/individualized-emotion-recognition-through-dual-representations-and-ground-established-ground-truth" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/144009.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">147</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3102</span> Technological Approach in Question Formation for Assessment of Interviewees</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=S.%20Shujan">S. Shujan</a>, <a href="https://publications.waset.org/abstracts/search?q=A.%20T.%20Rupasinghe"> A. T. Rupasinghe</a>, <a href="https://publications.waset.org/abstracts/search?q=N.%20L.%20Gunawardena"> N. L. Gunawardena</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Numerous studies have determined that there is a direct correlation between the successful interviewee and the nonverbal behavioral patterns of that person during the interview. In this study, we focus on formations of interview questions in such a way that, it gets an opportunity for assessing interviewee through the answers using the nonverbal behavioral cues. From all the nonverbal behavioral factors we have identified, in this study priority is given to the ‘facial expression variations’ with the assistance of facial expression analytics tool; this research proposes a novel approach in question formation for the assessment of interviewees in ‘Software Industry’. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=assessments" title="assessments">assessments</a>, <a href="https://publications.waset.org/abstracts/search?q=hirability" title=" hirability"> hirability</a>, <a href="https://publications.waset.org/abstracts/search?q=interviews" title=" interviews"> interviews</a>, <a href="https://publications.waset.org/abstracts/search?q=non-verbal%20behaviour%20patterns" title=" non-verbal behaviour patterns"> non-verbal behaviour patterns</a>, <a href="https://publications.waset.org/abstracts/search?q=question%20formation" title=" question formation"> question formation</a> </p> <a href="https://publications.waset.org/abstracts/41604/technological-approach-in-question-formation-for-assessment-of-interviewees" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/41604.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">318</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3101</span> Surface Geodesic Derivative Pattern for Deformable Textured 3D Object Comparison: Application to Expression and Pose Invariant 3D Face Recognition</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Farshid%20Hajati">Farshid Hajati</a>, <a href="https://publications.waset.org/abstracts/search?q=Soheila%20Gheisari"> Soheila Gheisari</a>, <a href="https://publications.waset.org/abstracts/search?q=Ali%20Cheraghian"> Ali Cheraghian</a>, <a href="https://publications.waset.org/abstracts/search?q=Yongsheng%20Gao"> Yongsheng Gao</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This paper presents a new Surface Geodesic Derivative Pattern (SGDP) for matching textured deformable 3D surfaces. SGDP encodes micro-pattern features based on local surface higher-order derivative variation. It extracts local information by encoding various distinctive textural relationships contained in a geodesic neighborhood, hence fusing texture and range information of a surface at the data level. Geodesic texture rings are encoded into local patterns for similarity measurement between non-rigid 3D surfaces. The performance of the proposed method is evaluated extensively on the Bosphorus and FRGC v2 face databases. Compared to existing benchmarks, experimental results show the effectiveness and superiority of combining the texture and 3D shape data at the earliest level in recognizing typical deformable faces under expression, illumination, and pose variations. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=3D%20face%20recognition" title="3D face recognition">3D face recognition</a>, <a href="https://publications.waset.org/abstracts/search?q=pose" title=" pose"> pose</a>, <a href="https://publications.waset.org/abstracts/search?q=expression" title=" expression"> expression</a>, <a href="https://publications.waset.org/abstracts/search?q=surface%20matching" title=" surface matching"> surface matching</a>, <a href="https://publications.waset.org/abstracts/search?q=texture" title=" texture"> texture</a> </p> <a href="https://publications.waset.org/abstracts/37340/surface-geodesic-derivative-pattern-for-deformable-textured-3d-object-comparison-application-to-expression-and-pose-invariant-3d-face-recognition" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/37340.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">393</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3100</span> Management of Facial Nerve Palsy Following Physiotherapy </h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Bassam%20Band">Bassam Band</a>, <a href="https://publications.waset.org/abstracts/search?q=Simon%20Freeman"> Simon Freeman</a>, <a href="https://publications.waset.org/abstracts/search?q=Rohan%20Munir"> Rohan Munir</a>, <a href="https://publications.waset.org/abstracts/search?q=Hisham%20Band"> Hisham Band</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Objective: To determine efficacy of facial physiotherapy provided for patients with facial nerve palsy. Design: Retrospective study Subjects: 54 patients diagnosed with Facial nerve palsy were included in the study after they met the selection criteria including unilateral facial paralysis and start of therapy twelve months after the onset of facial nerve palsy. Interventions: Patients received the treatment offered at a facial physiotherapy clinic consisting of: Trophic electrical stimulation, surface electromyography with biofeedback, neuromuscular re-education and myofascial release. Main measures: The Sunnybrook facial grading scale was used to evaluate the severity of facial paralysis. Results: This study demonstrated the positive impact of physiotherapy for patient with facial nerve palsy with improvement of 24.2% on the Sunnybrook facial grading score from a mean baseline of 34.2% to 58.2%. The greatest improvement looking at different causes was seen in patient who had reconstructive surgery post Acoustic Neuroma at 31.3%. Conclusion: The therapy shows significant improvement for patients with facial nerve palsy even when started 12 months post onset of paralysis across different causes. This highlights the benefit of this non-invasive technique in managing facial nerve paralysis and possibly preventing the need for surgery. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=facial%20nerve%20palsy" title="facial nerve palsy">facial nerve palsy</a>, <a href="https://publications.waset.org/abstracts/search?q=treatment" title=" treatment"> treatment</a>, <a href="https://publications.waset.org/abstracts/search?q=physiotherapy" title=" physiotherapy"> physiotherapy</a>, <a href="https://publications.waset.org/abstracts/search?q=bells%20palsy" title=" bells palsy"> bells palsy</a>, <a href="https://publications.waset.org/abstracts/search?q=acoustic%20neuroma" title=" acoustic neuroma"> acoustic neuroma</a>, <a href="https://publications.waset.org/abstracts/search?q=ramsey-hunt%20syndrome" title=" ramsey-hunt syndrome"> ramsey-hunt syndrome</a> </p> <a href="https://publications.waset.org/abstracts/19940/management-of-facial-nerve-palsy-following-physiotherapy" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/19940.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">535</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3099</span> Affective Robots: Evaluation of Automatic Emotion Recognition Approaches on a Humanoid Robot towards Emotionally Intelligent Machines</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Silvia%20Santano%20Guill%C3%A9n">Silvia Santano Guillén</a>, <a href="https://publications.waset.org/abstracts/search?q=Luigi%20Lo%20Iacono"> Luigi Lo Iacono</a>, <a href="https://publications.waset.org/abstracts/search?q=Christian%20Meder"> Christian Meder</a> </p> <p class="card-text"><strong>Abstract:</strong></p> One of the main aims of current social robotic research is to improve the robots’ abilities to interact with humans. In order to achieve an interaction similar to that among humans, robots should be able to communicate in an intuitive and natural way and appropriately interpret human affects during social interactions. Similarly to how humans are able to recognize emotions in other humans, machines are capable of extracting information from the various ways humans convey emotions—including facial expression, speech, gesture or text—and using this information for improved human computer interaction. This can be described as Affective Computing, an interdisciplinary field that expands into otherwise unrelated fields like psychology and cognitive science and involves the research and development of systems that can recognize and interpret human affects. To leverage these emotional capabilities by embedding them in humanoid robots is the foundation of the concept Affective Robots, which has the objective of making robots capable of sensing the user’s current mood and personality traits and adapt their behavior in the most appropriate manner based on that. In this paper, the emotion recognition capabilities of the humanoid robot Pepper are experimentally explored, based on the facial expressions for the so-called basic emotions, as well as how it performs in contrast to other state-of-the-art approaches with both expression databases compiled in academic environments and real subjects showing posed expressions as well as spontaneous emotional reactions. 
The experiments’ results show that detection accuracy differs substantially among the evaluated approaches. The introduced experiments offer a general structure and approach for conducting such experimental evaluations. The paper further suggests that the most meaningful results are obtained by conducting experiments with real subjects expressing the emotions as spontaneous reactions. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=affective%20computing" title="affective computing">affective computing</a>, <a href="https://publications.waset.org/abstracts/search?q=emotion%20recognition" title=" emotion recognition"> emotion recognition</a>, <a href="https://publications.waset.org/abstracts/search?q=humanoid%20robot" title=" humanoid robot"> humanoid robot</a>, <a href="https://publications.waset.org/abstracts/search?q=human-robot-interaction%20%28HRI%29" title=" human-robot-interaction (HRI)"> human-robot-interaction (HRI)</a>, <a href="https://publications.waset.org/abstracts/search?q=social%20robots" title=" social robots"> social robots</a> </p> <a href="https://publications.waset.org/abstracts/78467/affective-robots-evaluation-of-automatic-emotion-recognition-approaches-on-a-humanoid-robot-towards-emotionally-intelligent-machines" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/78467.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">235</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3098</span> Automatic Facial Skin Segmentation Using Possibilistic C-Means Algorithm for Evaluation of Facial Surgeries</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Elham%20Alaee">Elham Alaee</a>, <a href="https://publications.waset.org/abstracts/search?q=Mousa%20Shamsi"> Mousa Shamsi</a>, <a href="https://publications.waset.org/abstracts/search?q=Hossein%20Ahmadi"> Hossein Ahmadi</a>, <a href="https://publications.waset.org/abstracts/search?q=Soroosh%20Nazem"> Soroosh Nazem</a>, <a href="https://publications.waset.org/abstracts/search?q=Mohammad%20Hossein%20Sedaaghi"> Mohammad Hossein Sedaaghi </a> </p> <p class="card-text"><strong>Abstract:</strong></p> The human face has a fundamental role in an individual's appearance, so the importance of facial surgery is undeniable. Thus, there is a need for appropriate and accurate facial skin segmentation in order to extract different features. Since the Fuzzy C-Means (FCM) clustering algorithm does not work well for noisy images and outliers, in this paper we exploit the Possibilistic C-Means (PCM) algorithm to segment the facial skin. For this purpose, facial images are first converted from the RGB to the YCbCr color space. To evaluate the performance of the proposed algorithm, the database of Sahand University of Technology, Tabriz, Iran, was used. To allow a better understanding of the proposed algorithm, the FCM and Expectation-Maximization (EM) algorithms were also applied to facial skin segmentation. The proposed method shows better results than the other segmentation methods, with a misclassification error of 0.032 and a region area error of 0.045. 
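<p class="card-text">A minimal possibilistic c-means sketch in the spirit of the segmentation step described above: typicalities are computed from squared distances and per-cluster bandwidths, with centers and bandwidths initialized from a crisp clustering. The KMeans initialization, the bandwidth heuristic, and the commented chroma-feature extraction are assumptions for illustration, not the paper's exact procedure.</p>
<pre><code class="language-python">
import numpy as np
from sklearn.cluster import KMeans

def pcm(X, n_clusters=2, m=2.0, n_iter=50, K=1.0, seed=0):
    """Minimal possibilistic c-means: returns typicalities t (n_clusters x n_samples)
    and cluster centers v (n_clusters x n_features). X: (n_samples, n_features)."""
    X = np.asarray(X, dtype=float)
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=seed).fit(X)
    v = km.cluster_centers_.copy()
    d2 = ((X[None, :, :] - v[:, None, :]) ** 2).sum(-1)            # squared distances
    eta = np.array([K * d2[i, km.labels_ == i].mean() + 1e-12      # per-cluster bandwidth
                    for i in range(n_clusters)])
    for _ in range(n_iter):
        d2 = ((X[None, :, :] - v[:, None, :]) ** 2).sum(-1) + 1e-12
        t = 1.0 / (1.0 + (d2 / eta[:, None]) ** (1.0 / (m - 1.0)))  # typicality update
        tm = t ** m
        v = (tm @ X) / tm.sum(axis=1, keepdims=True)                # center update
    return t, v

# usage sketch (hypothetical): cluster chroma values of a face image into skin / non-skin
# import cv2
# ycrcb = cv2.cvtColor(cv2.imread("face.jpg"), cv2.COLOR_BGR2YCrCb)  # OpenCV uses YCrCb ordering
# X = ycrcb[..., 1:].reshape(-1, 2).astype(float)                    # per-pixel (Cr, Cb) features
# typicality, centers = pcm(X, n_clusters=2)
</code></pre>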
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=facial%20image" title="facial image">facial image</a>, <a href="https://publications.waset.org/abstracts/search?q=segmentation" title=" segmentation"> segmentation</a>, <a href="https://publications.waset.org/abstracts/search?q=PCM" title=" PCM"> PCM</a>, <a href="https://publications.waset.org/abstracts/search?q=FCM" title=" FCM"> FCM</a>, <a href="https://publications.waset.org/abstracts/search?q=skin%20error" title=" skin error"> skin error</a>, <a href="https://publications.waset.org/abstracts/search?q=facial%20surgery" title=" facial surgery"> facial surgery</a> </p> <a href="https://publications.waset.org/abstracts/10297/automatic-facial-skin-segmentation-using-possibilistic-c-means-algorithm-for-evaluation-of-facial-surgeries" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/10297.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">586</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3097</span> A Robust Spatial Feature Extraction Method for Facial Expression Recognition</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=H.%20G.%20C.%20P.%20Dinesh">H. G. C. P. Dinesh</a>, <a href="https://publications.waset.org/abstracts/search?q=G.%20Tharshini"> G. Tharshini</a>, <a href="https://publications.waset.org/abstracts/search?q=M.%20P.%20B.%20Ekanayake"> M. P. B. Ekanayake</a>, <a href="https://publications.waset.org/abstracts/search?q=G.%20M.%20R.%20I.%20Godaliyadda"> G. M. R. I. Godaliyadda</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This paper presents a new spatial feature extraction method based on principle component analysis (PCA) and Fisher Discernment Analysis (FDA) for facial expression recognition. It not only extracts reliable features for classification, but also reduces the feature space dimensions of pattern samples. In this method, first each gray scale image is considered in its entirety as the measurement matrix. Then, principle components (PCs) of row vectors of this matrix and variance of these row vectors along PCs are estimated. Therefore, this method would ensure the preservation of spatial information of the facial image. Afterwards, by incorporating the spectral information of the eigen-filters derived from the PCs, a feature vector was constructed, for a given image. Finally, FDA was used to define a set of basis in a reduced dimension subspace such that the optimal clustering is achieved. The method of FDA defines an inter-class scatter matrix and intra-class scatter matrix to enhance the compactness of each cluster while maximizing the distance between cluster marginal points. In order to matching the test image with the training set, a cosine similarity based Bayesian classification was used. The proposed method was tested on the Cohn-Kanade database and JAFFE database. It was observed that the proposed method which incorporates spatial information to construct an optimal feature space outperforms the standard PCA and FDA based methods. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=facial%20expression%20recognition" title="facial expression recognition">facial expression recognition</a>, <a href="https://publications.waset.org/abstracts/search?q=principle%20component%20analysis%20%28PCA%29" title=" principle component analysis (PCA)"> principle component analysis (PCA)</a>, <a href="https://publications.waset.org/abstracts/search?q=fisher%20discernment%20analysis%20%28FDA%29" title=" fisher discernment analysis (FDA)"> fisher discernment analysis (FDA)</a>, <a href="https://publications.waset.org/abstracts/search?q=eigen-filter" title=" eigen-filter"> eigen-filter</a>, <a href="https://publications.waset.org/abstracts/search?q=cosine%20similarity" title=" cosine similarity"> cosine similarity</a>, <a href="https://publications.waset.org/abstracts/search?q=bayesian%20classifier" title=" bayesian classifier"> bayesian classifier</a>, <a href="https://publications.waset.org/abstracts/search?q=f-measure" title=" f-measure"> f-measure</a> </p> <a href="https://publications.waset.org/abstracts/36459/a-robust-spatial-feature-extraction-method-for-facial-expression-recognition" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/36459.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">425</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3096</span> A Theoretical Study on Pain Assessment through Human Facial Expresion</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Mrinal%20Kanti%20Bhowmik">Mrinal Kanti Bhowmik</a>, <a href="https://publications.waset.org/abstracts/search?q=Debanjana%20Debnath%20Jr."> Debanjana Debnath Jr.</a>, <a href="https://publications.waset.org/abstracts/search?q=Debotosh%20Bhattacharjee"> Debotosh Bhattacharjee</a> </p> <p class="card-text"><strong>Abstract:</strong></p> A facial expression is undeniably the human manners. It is a significant channel for human communication and can be applied to extract emotional features accurately. People in pain often show variations in facial expressions that are readily observable to others. A core of actions is likely to occur or to increase in intensity when people are in pain. To illustrate the changes in the facial appearance, a system known as Facial Action Coding System (FACS) is pioneered by Ekman and Friesen for human observers. According to Prkachin and Solomon, a set of such actions carries the bulk of information about pain. Thus, the Prkachin and Solomon pain intensity (PSPI) metric is defined. So, it is very important to notice that facial expressions, being a behavioral source in communication media, provide an important opening into the issues of non-verbal communication in pain. People express their pain in many ways, and this pain behavior is the basis on which most inferences about pain are drawn in clinical and research settings. Hence, to understand the roles of different pain behaviors, it is essential to study the properties. For the past several years, the studies are concentrated on the properties of one specific form of pain behavior i.e. facial expression. 
This paper presents a comprehensive study on pain assessment that can model and estimate the intensity of pain a patient is suffering. It also reviews the historical background of different pain assessment techniques in the context of painful expressions. Different approaches incorporate FACS from a psychological perspective and a pain intensity score based on the PSPI metric for pain estimation. The paper provides an in-depth analysis of the different approaches used in pain estimation and presents the observations drawn from each technique. It also offers a brief study of features that distinguish real from faked pain. The need for this study lies in the emerging field of pain assessment from facial expressions in clinical settings. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=facial%20action%20coding%20system%20%28FACS%29" title="facial action coding system (FACS)">facial action coding system (FACS)</a>, <a href="https://publications.waset.org/abstracts/search?q=pain" title=" pain"> pain</a>, <a href="https://publications.waset.org/abstracts/search?q=pain%20behavior" title=" pain behavior"> pain behavior</a>, <a href="https://publications.waset.org/abstracts/search?q=Prkachin%20and%20Solomon%20pain%20intensity%20%28PSPI%29" title=" Prkachin and Solomon pain intensity (PSPI)"> Prkachin and Solomon pain intensity (PSPI)</a> </p> <a href="https://publications.waset.org/abstracts/22187/a-theoretical-study-on-pain-assessment-through-human-facial-expresion" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/22187.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">346</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3095</span> Context-Aware Recommender Systems Using User's Emotional State</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Hoyeon%20Park">Hoyeon Park</a>, <a href="https://publications.waset.org/abstracts/search?q=Kyoung-jae%20Kim"> Kyoung-jae Kim</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Product recommendation is a field of research that has received much attention amid the recent information-overload phenomenon. The proliferation of mobile environments and social media inevitably affects recommendation results, depending on how factors of the user's situation are reflected in the recommendation process. Recently, research attention has spread to context-aware recommender systems, which reflect the user's contextual information in the recommendation process. However, most context-aware recommender system research to date has been limited in that it reflects only the user's passive context. Users can be expected to express their contextual information through active behavior, increasing the importance of context-aware recommender systems that reflect this information. The purpose of this study is to propose a context-aware recommender system that can reflect the user's emotional state as active context information in the recommendation process. 
A context-aware recommender system can make more sophisticated recommendations by utilizing the user's contextual information and, compared with existing recommender systems, has the advantage that the user's emotional factors can be considered. In this study, we propose a method to infer the user's emotional state, one element of the user's contextual information, from facial expression data and to reflect it in the recommendation process. This study collects the facial expression data of a user who is looking at a specific product and the user's product preference score. Then, we classify the facial expression data into several categories according to previous research and construct a model that can predict them. Next, the predicted results are applied to existing collaborative filtering with contextual information. The results showed that the context-aware recommender system incorporating facial expression information achieved improved recommendation performance. Based on these results, future research is expected on recommender systems reflecting various types of contextual information. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=context-aware" title="context-aware">context-aware</a>, <a href="https://publications.waset.org/abstracts/search?q=emotional%20state" title=" emotional state"> emotional state</a>, <a href="https://publications.waset.org/abstracts/search?q=recommender%20systems" title=" recommender systems"> recommender systems</a>, <a href="https://publications.waset.org/abstracts/search?q=business%20analytics" title=" business analytics"> business analytics</a> </p> <a href="https://publications.waset.org/abstracts/88567/context-aware-recommender-systems-using-users-emotional-state" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/88567.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">229</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3094</span> Quantification and Preference of Facial Asymmetry of the Sub-Saharan Africans' 3D Facial Models</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Anas%20Ibrahim%20Yahaya">Anas Ibrahim Yahaya</a>, <a href="https://publications.waset.org/abstracts/search?q=Christophe%20Soligo"> Christophe Soligo</a> </p> <p class="card-text"><strong>Abstract:</strong></p> A substantial body of literature has reported on facial symmetry and asymmetry and their role in human mate choice. However, major gaps persist, with nearly all data originating from WEIRD (Western, Educated, Industrialised, Rich and Democratic) populations, and results remaining largely equivocal when compared across studies. This study aims to quantify facial asymmetry from 3D faces of the Hausa of northern Nigeria and to determine Hausa perceptions and judgements of standardised facial images with different levels of asymmetry, using questionnaires. 
Data were analysed using RStudio, and the results indicated that individuals with lower levels of facial asymmetry (nearer facial symmetry) were perceived as more attractive, more suitable as marriage partners and more caring, whereas individuals with higher levels of facial asymmetry were perceived as more aggressive. The study concludes that all faces are asymmetric, including the most beautiful ones, and that the preference for less asymmetric faces did not depend on a single facial trait but rather on multiple facial traits; the study thus supports the view that physical attractiveness is not merely an arbitrary social construct but, at least in part, a cue to general health, possibly related to environmental context. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=face" title="face">face</a>, <a href="https://publications.waset.org/abstracts/search?q=asymmetry" title=" asymmetry"> asymmetry</a>, <a href="https://publications.waset.org/abstracts/search?q=symmetry" title=" symmetry"> symmetry</a>, <a href="https://publications.waset.org/abstracts/search?q=Hausa" title=" Hausa"> Hausa</a>, <a href="https://publications.waset.org/abstracts/search?q=preference" title=" preference"> preference</a> </p> <a href="https://publications.waset.org/abstracts/82975/quantification-and-preference-of-facial-asymmetry-of-the-sub-saharan-africans-3d-facial-models" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/82975.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">193</span> </span> </div> </div>
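<p class="card-text">As a simple illustration of one way facial asymmetry can be quantified from landmark coordinates (a generic fluctuating-asymmetry style score, not the study's 3D pipeline): each landmark of a left/right pair is reflected across an assumed vertical midline and its distance to the contralateral landmark is averaged. The 2D landmark layout, the pair indices, and the midline estimate are illustrative assumptions.</p>
<pre><code class="language-python">
import numpy as np

def asymmetry_score(landmarks, pairs, midline_x=None):
    """Average distance between each landmark reflected across the vertical facial
    midline and its left/right counterpart; lower scores mean a more symmetric face.
    landmarks: (n_points, 2) array of (x, y); pairs: list of (left_idx, right_idx)."""
    L = np.asarray(landmarks, dtype=float)
    if midline_x is None:
        midline_x = L[:, 0].mean()                # crude midline estimate from the landmarks
    dists = []
    for i, j in pairs:
        mirrored = L[i].copy()
        mirrored[0] = 2 * midline_x - mirrored[0]  # reflect x about the midline
        dists.append(np.linalg.norm(mirrored - L[j]))
    return float(np.mean(dists))

# usage sketch with hypothetical eye-corner and mouth-corner landmarks
pts = [(30.0, 40.0), (70.0, 41.0), (35.0, 80.0), (66.0, 79.0)]
print(asymmetry_score(pts, pairs=[(0, 1), (2, 3)]))
</code></pre>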
href="https://publications.waset.org/abstracts/search?q=pose%20and%20spontaneous%20facial%20expression&page=104">104</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=pose%20and%20spontaneous%20facial%20expression&page=105">105</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=pose%20and%20spontaneous%20facial%20expression&page=2" rel="next">›</a></li> </ul> </div> </main> <footer> <div id="infolinks" class="pt-3 pb-2"> <div class="container"> <div style="background-color:#f5f5f5;" class="p-3"> <div class="row"> <div class="col-md-2"> <ul class="list-unstyled"> About <li><a href="https://waset.org/page/support">About Us</a></li> <li><a href="https://waset.org/page/support#legal-information">Legal</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/WASET-16th-foundational-anniversary.pdf">WASET celebrates its 16th foundational anniversary</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Account <li><a href="https://waset.org/profile">My Account</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Explore <li><a href="https://waset.org/disciplines">Disciplines</a></li> <li><a href="https://waset.org/conferences">Conferences</a></li> <li><a href="https://waset.org/conference-programs">Conference Program</a></li> <li><a href="https://waset.org/committees">Committees</a></li> <li><a href="https://publications.waset.org">Publications</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Research <li><a href="https://publications.waset.org/abstracts">Abstracts</a></li> <li><a href="https://publications.waset.org">Periodicals</a></li> <li><a href="https://publications.waset.org/archive">Archive</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Open Science <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Science-Philosophy.pdf">Open Science Philosophy</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Science-Award.pdf">Open Science Award</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Society-Open-Science-and-Open-Innovation.pdf">Open Innovation</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Postdoctoral-Fellowship-Award.pdf">Postdoctoral Fellowship Award</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Scholarly-Research-Review.pdf">Scholarly Research Review</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Support <li><a href="https://waset.org/page/support">Support</a></li> <li><a href="https://waset.org/profile/messages/create">Contact Us</a></li> <li><a href="https://waset.org/profile/messages/create">Report Abuse</a></li> </ul> </div> </div> </div> </div> </div> <div class="container text-center"> <hr style="margin-top:0;margin-bottom:.3rem;"> <a href="https://creativecommons.org/licenses/by/4.0/" target="_blank" class="text-muted small">Creative Commons Attribution 4.0 International License</a> <div id="copy" class="mt-2">© 2024 World Academy of Science, Engineering and Technology</div> </div> </footer> <a href="javascript:" id="return-to-top"><i class="fas fa-arrow-up"></i></a> <div class="modal" id="modal-template"> <div class="modal-dialog"> <div class="modal-content"> <div class="row 
m-0 mt-1"> <div class="col-md-12"> <button type="button" class="close" data-dismiss="modal" aria-label="Close"><span aria-hidden="true">×</span></button> </div> </div> <div class="modal-body"></div> </div> </div> </div> <script src="https://cdn.waset.org/static/plugins/jquery-3.3.1.min.js"></script> <script src="https://cdn.waset.org/static/plugins/bootstrap-4.2.1/js/bootstrap.bundle.min.js"></script> <script src="https://cdn.waset.org/static/js/site.js?v=150220211556"></script> <script> jQuery(document).ready(function() { /*jQuery.get("https://publications.waset.org/xhr/user-menu", function (response) { jQuery('#mainNavMenu').append(response); });*/ jQuery.get({ url: "https://publications.waset.org/xhr/user-menu", cache: false }).then(function(response){ jQuery('#mainNavMenu').append(response); }); }); </script> </body> </html>