
Search results for: facial expression identification

aria-labelledby="navbarDropdownPublications"> <a class="dropdown-item" href="https://publications.waset.org/abstracts">Abstracts</a> <a class="dropdown-item" href="https://publications.waset.org">Periodicals</a> <a class="dropdown-item" href="https://publications.waset.org/archive">Archive</a> </div> </li> <li class="nav-item"> <a class="nav-link" href="https://waset.org/page/support" title="Support">Support</a> </li> </ul> </div> </div> </nav> </div> </header> <main> <div class="container mt-4"> <div class="row"> <div class="col-md-9 mx-auto"> <form method="get" action="https://publications.waset.org/abstracts/search"> <div id="custom-search-input"> <div class="input-group"> <i class="fas fa-search"></i> <input type="text" class="search-query" name="q" placeholder="Author, Title, Abstract, Keywords" value="facial expression identification"> <input type="submit" class="btn_search" value="Search"> </div> </div> </form> </div> </div> <div class="row mt-3"> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Commenced</strong> in January 2007</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Frequency:</strong> Monthly</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Edition:</strong> International</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Paper Count:</strong> 5071</div> </div> </div> </div> <h1 class="mt-3 mb-3 text-center" style="font-size:1.6rem;">Search results for: facial expression identification</h1> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">5071</span> Curvelet Features with Mouth and Face Edge Ratios for Facial Expression Identification </h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=S.%20Kherchaoui">S. Kherchaoui</a>, <a href="https://publications.waset.org/abstracts/search?q=A.%20Houacine"> A. Houacine</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This paper presents a facial expression recognition system. It performs identification and classification of the seven basic expressions; happy, surprise, fear, disgust, sadness, anger, and neutral states. It consists of three main parts. The first one is the detection of a face and the corresponding facial features to extract the most expressive portion of the face, followed by a normalization of the region of interest. Then calculus of curvelet coefficients is performed with dimensionality reduction through principal component analysis. The resulting coefficients are combined with two ratios; mouth ratio and face edge ratio to constitute the whole feature vector. The third step is the classification of the emotional state using the SVM method in the feature space. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=facial%20expression%20identification" title="facial expression identification">facial expression identification</a>, <a href="https://publications.waset.org/abstracts/search?q=curvelet%20coefficient" title=" curvelet coefficient"> curvelet coefficient</a>, <a href="https://publications.waset.org/abstracts/search?q=support%20vector%20machine%20%28SVM%29" title=" support vector machine (SVM)"> support vector machine (SVM)</a>, <a href="https://publications.waset.org/abstracts/search?q=recognition%20system" title=" recognition system"> recognition system</a> </p> <a href="https://publications.waset.org/abstracts/10311/curvelet-features-with-mouth-and-face-edge-ratios-for-facial-expression-identification" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/10311.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">232</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">5070</span> Use of Computer and Machine Learning in Facial Recognition</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Neha%20Singh">Neha Singh</a>, <a href="https://publications.waset.org/abstracts/search?q=Ananya%20Arora"> Ananya Arora</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Facial expression measurement plays a crucial role in the identification of emotion. Facial expression plays a key role in psychophysiology, neural bases, and emotional disorder, to name a few. The Facial Action Coding System (FACS) has proven to be the most efficient and widely used of the various systems used to describe facial expressions. Coders can manually code facial expressions with FACS and, by viewing video-recorded facial behaviour at a specified frame rate and slow motion, can decompose into action units (AUs). Action units are the most minor visually discriminable facial movements. FACS explicitly differentiates between facial actions and inferences about what the actions mean. Action units are the fundamental unit of FACS methodology. It is regarded as the standard measure for facial behaviour and finds its application in various fields of study beyond emotion science. These include facial neuromuscular disorders, neuroscience, computer vision, computer graphics and animation, and face encoding for digital processing. This paper discusses the conceptual basis for FACS, a numerical listing of discrete facial movements identified by the system, the system's psychometric evaluation, and the software's recommended training requirements. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=facial%20action" title="facial action">facial action</a>, <a href="https://publications.waset.org/abstracts/search?q=action%20units" title=" action units"> action units</a>, <a href="https://publications.waset.org/abstracts/search?q=coding" title=" coding"> coding</a>, <a href="https://publications.waset.org/abstracts/search?q=machine%20learning" title=" machine learning"> machine learning</a> </p> <a href="https://publications.waset.org/abstracts/161142/use-of-computer-and-machine-learning-in-facial-recognition" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/161142.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">106</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">5069</span> Facial Expression Phoenix (FePh): An Annotated Sequenced Dataset for Facial and Emotion-Specified Expressions in Sign Language</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Marie%20Alaghband">Marie Alaghband</a>, <a href="https://publications.waset.org/abstracts/search?q=Niloofar%20Yousefi"> Niloofar Yousefi</a>, <a href="https://publications.waset.org/abstracts/search?q=Ivan%20Garibay"> Ivan Garibay</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Facial expressions are important parts of both gesture and sign language recognition systems. Despite the recent advances in both fields, annotated facial expression datasets in the context of sign language are still scarce resources. In this manuscript, we introduce an annotated sequenced facial expression dataset in the context of sign language, comprising over 3000 facial images extracted from the daily news and weather forecast of the public tv-station PHOENIX. Unlike the majority of currently existing facial expression datasets, FePh provides sequenced semi-blurry facial images with different head poses, orientations, and movements. In addition, in the majority of images, identities are mouthing the words, which makes the data more challenging. To annotate this dataset we consider primary, secondary, and tertiary dyads of seven basic emotions of &quot;sad&quot;, &quot;surprise&quot;, &quot;fear&quot;, &quot;angry&quot;, &quot;neutral&quot;, &quot;disgust&quot;, and &quot;happy&quot;. We also considered the &quot;None&quot; class if the image&rsquo;s facial expression could not be described by any of the aforementioned emotions. Although we provide FePh as a facial expression dataset of signers in sign language, it has a wider application in gesture recognition and Human Computer Interaction (HCI) systems. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=annotated%20facial%20expression%20dataset" title="annotated facial expression dataset">annotated facial expression dataset</a>, <a href="https://publications.waset.org/abstracts/search?q=gesture%20recognition" title=" gesture recognition"> gesture recognition</a>, <a href="https://publications.waset.org/abstracts/search?q=sequenced%20facial%20expression%20dataset" title=" sequenced facial expression dataset"> sequenced facial expression dataset</a>, <a href="https://publications.waset.org/abstracts/search?q=sign%20language%20recognition" title=" sign language recognition"> sign language recognition</a> </p> <a href="https://publications.waset.org/abstracts/129717/facial-expression-phoenix-feph-an-annotated-sequenced-dataset-for-facial-and-emotion-specified-expressions-in-sign-language" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/129717.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">159</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">5068</span> Emotion Recognition Using Artificial Intelligence</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Rahul%20Mohite">Rahul Mohite</a>, <a href="https://publications.waset.org/abstracts/search?q=Lahcen%20Ouarbya"> Lahcen Ouarbya</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This paper focuses on the interplay between humans and computer systems and the ability of these systems to understand and respond to human emotions, including non-verbal communication. Current emotion recognition systems are based solely on either facial or verbal expressions. The limitation of these systems is that it requires large training data sets. The paper proposes a system for recognizing human emotions that combines both speech and emotion recognition. The system utilizes advanced techniques such as deep learning and image recognition to identify facial expressions and comprehend emotions. The results show that the proposed system, based on the combination of facial expression and speech, outperforms existing ones, which are based solely either on facial or verbal expressions. The proposed system detects human emotion with an accuracy of 86%, whereas the existing systems have an accuracy of 70% using verbal expression only and 76% using facial expression only. In this paper, the increasing significance and demand for facial recognition technology in emotion recognition are also discussed. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=facial%20reputation" title="facial reputation">facial reputation</a>, <a href="https://publications.waset.org/abstracts/search?q=expression%20reputation" title=" expression reputation"> expression reputation</a>, <a href="https://publications.waset.org/abstracts/search?q=deep%20gaining%20knowledge%20of" title=" deep gaining knowledge of"> deep gaining knowledge of</a>, <a href="https://publications.waset.org/abstracts/search?q=photo%20reputation" title=" photo reputation"> photo reputation</a>, <a href="https://publications.waset.org/abstracts/search?q=facial%20technology" title=" facial technology"> facial technology</a>, <a href="https://publications.waset.org/abstracts/search?q=sign%20processing" title=" sign processing"> sign processing</a>, <a href="https://publications.waset.org/abstracts/search?q=photo%20type" title=" photo type"> photo type</a> </p> <a href="https://publications.waset.org/abstracts/162386/emotion-recognition-using-artificial-intelligence" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/162386.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">121</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">5067</span> Emotion Recognition with Occlusions Based on Facial Expression Reconstruction and Weber Local Descriptor</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Jadisha%20Cornejo">Jadisha Cornejo</a>, <a href="https://publications.waset.org/abstracts/search?q=Helio%20Pedrini"> Helio Pedrini</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Recognition of emotions based on facial expressions has received increasing attention from the scientific community over the last years. Several fields of applications can benefit from facial emotion recognition, such as behavior prediction, interpersonal relations, human-computer interactions, recommendation systems. In this work, we develop and analyze an emotion recognition framework based on facial expressions robust to occlusions through the Weber Local Descriptor (WLD). Initially, the occluded facial expressions are reconstructed following an extension approach of Robust Principal Component Analysis (RPCA). Then, WLD features are extracted from the facial expression representation, as well as Local Binary Patterns (LBP) and Histogram of Oriented Gradients (HOG). The feature vector space is reduced using Principal Component Analysis (PCA) and Linear Discriminant Analysis (LDA). Finally, K-Nearest Neighbor (K-NN) and Support Vector Machine (SVM) classifiers are used to recognize the expressions. Experimental results on three public datasets demonstrated that the WLD representation achieved competitive accuracy rates for occluded and non-occluded facial expressions compared to other approaches available in the literature. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=emotion%20recognition" title="emotion recognition">emotion recognition</a>, <a href="https://publications.waset.org/abstracts/search?q=facial%20expression" title=" facial expression"> facial expression</a>, <a href="https://publications.waset.org/abstracts/search?q=occlusion" title=" occlusion"> occlusion</a>, <a href="https://publications.waset.org/abstracts/search?q=fiducial%20landmarks" title=" fiducial landmarks"> fiducial landmarks</a> </p> <a href="https://publications.waset.org/abstracts/90510/emotion-recognition-with-occlusions-based-on-facial-expression-reconstruction-and-weber-local-descriptor" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/90510.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">182</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">5066</span> Facial Expression Recognition Using Sparse Gaussian Conditional Random Field</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Mohammadamin%20Abbasnejad">Mohammadamin Abbasnejad</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The analysis of expression and facial Action Units (AUs) detection are very important tasks in fields of computer vision and Human Computer Interaction (HCI) due to the wide range of applications in human life. Many works have been done during the past few years which has their own advantages and disadvantages. In this work, we present a new model based on Gaussian Conditional Random Field. We solve our objective problem using ADMM and we show how well the proposed model works. We train and test our work on two facial expression datasets, CK+, and RU-FACS. Experimental evaluation shows that our proposed approach outperform state of the art expression recognition. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=Gaussian%20Conditional%20Random%20Field" title="Gaussian Conditional Random Field">Gaussian Conditional Random Field</a>, <a href="https://publications.waset.org/abstracts/search?q=ADMM" title=" ADMM"> ADMM</a>, <a href="https://publications.waset.org/abstracts/search?q=convergence" title=" convergence"> convergence</a>, <a href="https://publications.waset.org/abstracts/search?q=gradient%20descent" title=" gradient descent"> gradient descent</a> </p> <a href="https://publications.waset.org/abstracts/26245/facial-expression-recognition-using-sparse-gaussian-conditional-random-field" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/26245.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">356</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">5065</span> Improved Feature Extraction Technique for Handling Occlusion in Automatic Facial Expression Recognition</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Khadijat%20T.%20Bamigbade">Khadijat T. 
5065. Improved Feature Extraction Technique for Handling Occlusion in Automatic Facial Expression Recognition
Authors: Khadijat T. Bamigbade, Olufade F. W. Onifade
Abstract: The field of automatic facial expression analysis has been an active research area for the last two decades, and its broad applicability across domains has drawn considerable attention to developing techniques and datasets that mirror real-life scenarios. Many techniques, such as Local Binary Patterns and their variants (CLBP, LBP-TOP) and, lately, deep learning, have been used for facial expression recognition. However, the problem of occlusion has not been handled sufficiently well, making published results inapplicable in many real-life situations. This paper develops a simple yet highly efficient method, termed Local Binary Pattern-Histogram of Gradients (LBP-HOG), with occlusion detection in the face image, using a multi-class SVM for action unit and, in turn, expression recognition. Our method was evaluated on three publicly available datasets: JAFFE, CK, and SFEW. Experimental results showed that our approach performs well compared with state-of-the-art algorithms and points to occlusion detection as a key step in handling expressions in the wild.
Keywords: automatic facial expression analysis, local binary pattern, LBP-HOG, occlusion detection
Procedia: https://publications.waset.org/abstracts/105048/improved-feature-extraction-technique-for-handling-occlusion-in-automatic-facial-expression-recognition | PDF: https://publications.waset.org/abstracts/105048.pdf | Downloads: 169
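A sketch of the LBP-HOG feature combination feeding a multi-class SVM, using scikit-image and scikit-learn. All parameter choices (LBP radius, histogram bins, HOG cell sizes) are assumptions, and random arrays stand in for face images.

```python
# LBP histogram + HOG vector, concatenated, then a one-vs-rest SVM.
import numpy as np
from skimage.feature import hog, local_binary_pattern
from sklearn.svm import SVC

def lbp_hog_features(gray: np.ndarray) -> np.ndarray:
    lbp = local_binary_pattern(gray, P=8, R=1, method="uniform")
    lbp_hist, _ = np.histogram(lbp, bins=10, range=(0, 10), density=True)
    hog_vec = hog(gray, orientations=9, pixels_per_cell=(8, 8),
                  cells_per_block=(2, 2))
    return np.concatenate([lbp_hist, hog_vec])

imgs = [np.random.randint(0, 256, (48, 48), dtype=np.uint8) for _ in range(20)]
X = np.stack([lbp_hog_features(g) for g in imgs])
y = np.repeat(np.arange(2), 10)                    # e.g. AU present / absent
clf = SVC(decision_function_shape="ovr").fit(X, y)  # multi-class SVM
print(clf.predict(X[:3]))
```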
5064. Improving the Performance of Deep Learning in Facial Emotion Recognition with Image Sharpening
Authors: Ksheeraj Sai Vepuri, Nada Attar
Abstract: We humans use words with accompanying visual and facial cues to communicate effectively, and classifying facial emotion with computer vision methods has been an active research area in the field. In this paper, we propose a simple method for facial expression recognition that enhances accuracy, tested on the FER-2013 dataset of static images. Instead of preprocessing the dataset with histogram equalization, we used an unsharp mask to emphasize texture and detail and to sharpen the edges, and we used ImageDataGenerator from the Keras library for data augmentation. We then used a Convolutional Neural Network (CNN) to classify the images into 7 facial expression classes, yielding an accuracy of 69.46% on the test set. Our results show that image preprocessing such as sharpening can improve performance even when the CNN model is relatively simple.
Keywords: facial expression recognition, image preprocessing, deep learning, CNN
Procedia: https://publications.waset.org/abstracts/130679/improving-the-performance-of-deep-learning-in-facial-emotion-recognition-with-image-sharpening | PDF: https://publications.waset.org/abstracts/130679.pdf | Downloads: 143
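The preprocessing idea can be sketched with the standard OpenCV unsharp-mask recipe (sharp = 1.5*img - 0.5*blur) feeding a small CNN. The kernel size, weights, and layer sizes are assumptions; the 48x48 grayscale input matches FER-2013, but the network below is not the paper's exact architecture.

```python
# Unsharp-mask preprocessing plus a small CNN classifier (sketch).
import cv2
import numpy as np
from tensorflow.keras import layers, models

def unsharp_mask(img: np.ndarray) -> np.ndarray:
    blur = cv2.GaussianBlur(img, (5, 5), 0)
    return cv2.addWeighted(img, 1.5, blur, -0.5, 0)  # emphasize edges/detail

face = np.random.randint(0, 256, (48, 48), dtype=np.uint8)
sharp = unsharp_mask(face)

model = models.Sequential([
    layers.Input((48, 48, 1)),                 # FER-2013 images are 48x48 gray
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(7, activation="softmax"),     # 7 expression classes
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```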
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=facial%20expression%20recognition%20system" title="facial expression recognition system">facial expression recognition system</a>, <a href="https://publications.waset.org/abstracts/search?q=feature%20extraction" title=" feature extraction"> feature extraction</a>, <a href="https://publications.waset.org/abstracts/search?q=local-appearance%20method" title=" local-appearance method"> local-appearance method</a>, <a href="https://publications.waset.org/abstracts/search?q=motion-based%20approach" title=" motion-based approach"> motion-based approach</a> </p> <a href="https://publications.waset.org/abstracts/27632/classifying-facial-expressions-based-on-a-motion-local-appearance-approach" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/27632.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">413</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">5062</span> KSVD-SVM Approach for Spontaneous Facial Expression Recognition</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Dawood%20Al%20Chanti">Dawood Al Chanti</a>, <a href="https://publications.waset.org/abstracts/search?q=Alice%20Caplier"> Alice Caplier</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Sparse representations of signals have received a great deal of attention in recent years. In this paper, the interest of using sparse representation as a mean for performing sparse discriminative analysis between spontaneous facial expressions is demonstrated. An automatic facial expressions recognition system is presented. It uses a KSVD-SVM approach which is made of three main stages: A pre-processing and feature extraction stage, which solves the problem of shared subspace distribution based on the random projection theory, to obtain low dimensional discriminative and reconstructive features; A dictionary learning and sparse coding stage, which uses the KSVD model to learn discriminative under or over dictionaries for sparse coding; Finally a classification stage, which uses a SVM classifier for facial expressions recognition. Our main concern is to be able to recognize non-basic affective states and non-acted expressions. Extensive experiments on the JAFFE static acted facial expressions database but also on the DynEmo dynamic spontaneous facial expressions database exhibit very good recognition rates. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=dictionary%20learning" title="dictionary learning">dictionary learning</a>, <a href="https://publications.waset.org/abstracts/search?q=random%20projection" title=" random projection"> random projection</a>, <a href="https://publications.waset.org/abstracts/search?q=pose%20and%20spontaneous%20facial%20expression" title=" pose and spontaneous facial expression"> pose and spontaneous facial expression</a>, <a href="https://publications.waset.org/abstracts/search?q=sparse%20representation" title=" sparse representation"> sparse representation</a> </p> <a href="https://publications.waset.org/abstracts/51683/ksvd-svm-approach-for-spontaneous-facial-expression-recognition" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/51683.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">305</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">5061</span> Comparing Emotion Recognition from Voice and Facial Data Using Time Invariant Features</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Vesna%20Kirandziska">Vesna Kirandziska</a>, <a href="https://publications.waset.org/abstracts/search?q=Nevena%20Ackovska"> Nevena Ackovska</a>, <a href="https://publications.waset.org/abstracts/search?q=Ana%20Madevska%20Bogdanova"> Ana Madevska Bogdanova</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The problem of emotion recognition is a challenging problem. It is still an open problem from the aspect of both intelligent systems and psychology. In this paper, both voice features and facial features are used for building an emotion recognition system. A Support Vector Machine classifiers are built by using raw data from video recordings. In this paper, the results obtained for the emotion recognition are given, and a discussion about the validity and the expressiveness of different emotions is presented. A comparison between the classifiers build from facial data only, voice data only and from the combination of both data is made here. The need for a better combination of the information from facial expression and voice data is argued. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=emotion%20recognition" title="emotion recognition">emotion recognition</a>, <a href="https://publications.waset.org/abstracts/search?q=facial%20recognition" title=" facial recognition"> facial recognition</a>, <a href="https://publications.waset.org/abstracts/search?q=signal%20processing" title=" signal processing"> signal processing</a>, <a href="https://publications.waset.org/abstracts/search?q=machine%20learning" title=" machine learning"> machine learning</a> </p> <a href="https://publications.waset.org/abstracts/42384/comparing-emotion-recognition-from-voice-and-facial-data-using-time-invariant-features" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/42384.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">316</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">5060</span> Application of Vector Representation for Revealing the Richness of Meaning of Facial Expressions</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Carmel%20Sofer">Carmel Sofer</a>, <a href="https://publications.waset.org/abstracts/search?q=Dan%20Vilenchik"> Dan Vilenchik</a>, <a href="https://publications.waset.org/abstracts/search?q=Ron%20Dotsch"> Ron Dotsch</a>, <a href="https://publications.waset.org/abstracts/search?q=Galia%20Avidan"> Galia Avidan</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Studies investigating emotional facial expressions typically reveal consensus among observes regarding the meaning of basic expressions, whose number ranges between 6 to 15 emotional states. Given this limited number of discrete expressions, how is it that the human vocabulary of emotional states is so rich? The present study argues that perceivers use sequences of these discrete expressions as the basis for a much richer vocabulary of emotional states. Such mechanisms, in which a relatively small number of basic components is expanded to a much larger number of possible combinations of meanings, exist in other human communications modalities, such as spoken language and music. In these modalities, letters and notes, which serve as basic components of spoken language and music respectively, are temporally linked, resulting in the richness of expressions. In the current study, in each trial participants were presented with sequences of two images containing facial expression in different combinations sampled out of the eight static basic expressions (total 64; 8X8). In each trial, using single word participants were required to judge the 'state of mind' portrayed by the person whose face was presented. Utilizing word embedding methods (Global Vectors for Word Representation), employed in the field of Natural Language Processing, and relying on machine learning computational methods, it was found that the perceived meanings of the sequences of facial expressions were a weighted average of the single expressions comprising them, resulting in 22 new emotional states, in addition to the eight, classic basic expressions. 
An interaction between the first and the second expression in each sequence indicated that every single facial expression modulated the effect of the other facial expression thus leading to a different interpretation ascribed to the sequence as a whole. These findings suggest that the vocabulary of emotional states conveyed by facial expressions is not restricted to the (small) number of discrete facial expressions. Rather, the vocabulary is rich, as it results from combinations of these expressions. In addition, present research suggests that using word embedding in social perception studies, can be a powerful, accurate and efficient tool, to capture explicit and implicit perceptions and intentions. Acknowledgment: The study was supported by a grant from the Ministry of Defense in Israel to GA and CS. CS is also supported by the ABC initiative in Ben-Gurion University of the Negev. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=Glove" title="Glove">Glove</a>, <a href="https://publications.waset.org/abstracts/search?q=face%20perception" title=" face perception"> face perception</a>, <a href="https://publications.waset.org/abstracts/search?q=facial%20expression%20perception." title=" facial expression perception. "> facial expression perception. </a>, <a href="https://publications.waset.org/abstracts/search?q=facial%20expression%20production" title=" facial expression production"> facial expression production</a>, <a href="https://publications.waset.org/abstracts/search?q=machine%20learning" title=" machine learning"> machine learning</a>, <a href="https://publications.waset.org/abstracts/search?q=word%20embedding" title=" word embedding"> word embedding</a>, <a href="https://publications.waset.org/abstracts/search?q=word2vec" title=" word2vec"> word2vec</a> </p> <a href="https://publications.waset.org/abstracts/83508/application-of-vector-representation-for-revealing-the-richness-of-meaning-of-facial-expressions" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/83508.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">176</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">5059</span> Analysis and Detection of Facial Expressions in Autism Spectrum Disorder People Using Machine Learning</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Muhammad%20Maisam%20Abbas">Muhammad Maisam Abbas</a>, <a href="https://publications.waset.org/abstracts/search?q=Salman%20Tariq"> Salman Tariq</a>, <a href="https://publications.waset.org/abstracts/search?q=Usama%20Riaz"> Usama Riaz</a>, <a href="https://publications.waset.org/abstracts/search?q=Muhammad%20Tanveer"> Muhammad Tanveer</a>, <a href="https://publications.waset.org/abstracts/search?q=Humaira%20Abdul%20Ghafoor"> Humaira Abdul Ghafoor</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Autism Spectrum Disorder (ASD) refers to a developmental disorder that impairs an individual's communication and interaction ability. Individuals feel difficult to read facial expressions while communicating or interacting. 
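The core computation, averaging the word vectors of two expressions and reading off the nearest emotion word, can be sketched as below. The tiny 3-dimensional "GloVe" vectors are invented for illustration; a real setup would load pre-trained embeddings.

```python
# Weighted-average word-vector sketch of the sequence-meaning idea.
import numpy as np

emb = {                      # hypothetical stand-ins for GloVe vectors
    "happy":       np.array([0.9, 0.1, 0.0]),
    "sad":         np.array([-0.8, 0.0, 0.1]),
    "surprise":    np.array([0.2, 0.9, 0.0]),
    "bittersweet": np.array([0.1, 0.05, 0.1]),
}

def sequence_meaning(w1: str, w2: str, alpha: float = 0.5) -> np.ndarray:
    """Weighted average of the two expressions' vectors."""
    return alpha * emb[w1] + (1 - alpha) * emb[w2]

def nearest_word(vec: np.ndarray) -> str:
    cos = lambda a, b: a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9)
    return max(emb, key=lambda w: cos(emb[w], vec))

print(nearest_word(sequence_meaning("happy", "sad")))
```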
5059. Analysis and Detection of Facial Expressions in Autism Spectrum Disorder People Using Machine Learning
Authors: Muhammad Maisam Abbas, Salman Tariq, Usama Riaz, Muhammad Tanveer, Humaira Abdul Ghafoor
Abstract: Autism Spectrum Disorder (ASD) is a developmental disorder that impairs an individual's ability to communicate and interact; individuals with ASD often find it difficult to read facial expressions while communicating. Facial Expression Recognition (FER) classifies the basic human expressions, i.e., happiness, fear, surprise, sadness, disgust, neutrality, and anger, from static and dynamic sources. Toward a system that can assist people with ASD in recognizing facial expressions, this paper conducts a comprehensive comparison of three supervised learning algorithms: EigenFace, FisherFace, and LBPH. The JAFFE, CK+, and TFEID (I & II) datasets were used to train and test the algorithms, and the results were evaluated in terms of variance, standard deviation, and accuracy. The experiments showed that FisherFace has the highest accuracy on all datasets and is considered the best algorithm to implement in our system.
Keywords: autism spectrum disorder, ASD, EigenFace, facial expression recognition, FisherFace, local binary pattern histogram, LBPH
Procedia: https://publications.waset.org/abstracts/129718/analysis-and-detection-of-facial-expressions-in-autism-spectrum-disorder-people-using-machine-learning | PDF: https://publications.waset.org/abstracts/129718.pdf | Downloads: 174
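The three algorithms compared here all ship with opencv-contrib's cv2.face module, so the comparison loop can be sketched as below. Random arrays stand in for the JAFFE/CK+/TFEID images, and the labels are expression classes rather than identities, which is how the paper repurposes these recognizers.

```python
# Three-way recognizer comparison sketch (requires opencv-contrib-python).
import cv2
import numpy as np

# Dummy same-size grayscale "faces": 2 samples for each of 7 expression classes.
faces = [np.random.randint(0, 256, (64, 64), dtype=np.uint8) for _ in range(14)]
labels = np.array([i % 7 for i in range(14)])

recognizers = {
    "EigenFace":  cv2.face.EigenFaceRecognizer_create(),
    "FisherFace": cv2.face.FisherFaceRecognizer_create(),
    "LBPH":       cv2.face.LBPHFaceRecognizer_create(),
}
for name, rec in recognizers.items():
    rec.train(faces, labels)
    pred, confidence = rec.predict(faces[0])
    print(name, pred, round(confidence, 2))
```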
5058. Data Collection Techniques for Robotics to Identify the Facial Expressions of Traumatic Brain Injured Patients
Authors: Chaudhary Muhammad Aqdus Ilyas, Matthias Rehm, Kamal Nasrollahi, Thomas B. Moeslund
Abstract: This paper investigates data collection procedures for robots placed with traumatic brain injured (TBI) patients for rehabilitation through facial expression and mood analysis. Rehabilitation after TBI is crucial due to the nature of the injury and the variation in recovery time. It is advantageous to analyze these emotional signals in a contactless manner, given patients' non-supportive behavior, limited muscle movement, and increased negative emotional expression. This work aims at a framework in which robots recognize the emotions of TBI patients through facial expressions in order to carry out rehabilitation through physical, cognitive, or interactive activities. The results show that, with customized data collection strategies, the proposed framework identifies facial and emotional expressions more accurately, which can be utilized to enhance recovery treatment and social interaction in a robotic context.
Keywords: computer vision, convolutional neural network-long short-term memory network (CNN-LSTM), facial expression and mood recognition, multimodal (RGB-thermal) analysis, rehabilitation, robots, traumatic brain injured patients
Procedia: https://publications.waset.org/abstracts/98560/data-collection-techniques-for-robotics-to-identify-the-facial-expressions-of-traumatic-brain-injured-patients | PDF: https://publications.waset.org/abstracts/98560.pdf | Downloads: 155
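The CNN-LSTM named in the keywords can be sketched in Keras as a per-frame CNN wrapped in TimeDistributed, followed by an LSTM over the frame sequence. The input shape, layer sizes, and the single-channel (non-thermal) input are assumptions, not the paper's configuration.

```python
# CNN-LSTM sketch: spatial features per frame, temporal aggregation over clips.
from tensorflow.keras import layers, models

frame_cnn = models.Sequential([
    layers.Input((64, 64, 1)),
    layers.Conv2D(16, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
])

model = models.Sequential([
    layers.Input((20, 64, 64, 1)),             # 20-frame clips (assumed)
    layers.TimeDistributed(frame_cnn),         # per-frame spatial features
    layers.LSTM(32),                           # temporal aggregation
    layers.Dense(7, activation="softmax"),     # emotion classes
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
model.summary()
```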
5057. Emotion Recognition in Video and Images in the Wild
Authors: Faizan Tariq, Moayid Ali Zaidi
Abstract: Facial emotion recognition algorithms are developing rapidly, and researchers combine different algorithms to generate the best results. Six basic emotions are studied in this area. We attempt to recognize facial expressions using object detection algorithms instead of traditional classification algorithms; two object detectors were chosen, Faster R-CNN and YOLO. For pre-processing we used image rotation and batch normalization. The dataset chosen for the experiments is Static Facial Expressions in the Wild (SFEW). Our approach worked well, but there is still considerable room for improvement, which we leave as a future direction.
Keywords: face recognition, emotion recognition, deep learning, CNN
Procedia: https://publications.waset.org/abstracts/152635/emotion-recognition-in-video-and-images-in-the-wild | PDF: https://publications.waset.org/abstracts/152635.pdf | Downloads: 187

5056. Forensic Comparison of Facial Images for Human Identification
Authors: D. P. Gangwar
Abstract: Identification of humans from facial images is of great importance in forensic science. Video recordings, CCTV footage, passports, driver licenses, and related documents are routinely sent to the laboratory so that questioned photographs and recordings can be compared with those of suspects to establish identity. More than 300 questioned and 300 control photographs from actual crime cases, received from various investigation agencies, have been compared by the author so far using established analysis and comparison techniques such as holistic comparison, morphological analysis, photo-anthropometry, and superimposition. On the basis of the findings obtained from examining these photo exhibits, a realistic and comprehensive technique is proposed that could be very useful in forensic casework.
Keywords: CCTV images, facial features, photo-anthropometry, superimposition
Procedia: https://publications.waset.org/abstracts/31353/forensic-comparison-of-facial-images-for-human-identification | PDF: https://publications.waset.org/abstracts/31353.pdf | Downloads: 529
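Photo-anthropometry, one of the comparison techniques named above, rests on ratios of inter-landmark distances, which are scale-invariant and can therefore be matched across photographs taken at different resolutions. A minimal sketch, with hypothetical landmark coordinates standing in for measured ones:

```python
# Scale-invariant landmark-ratio sketch for photo-anthropometric comparison.
import numpy as np

def dist(a, b):
    return float(np.linalg.norm(np.subtract(a, b)))

# Hypothetical (x, y) landmark positions measured on one photograph.
lm = {"left_eye": (120, 140), "right_eye": (180, 140),
      "nose_tip": (150, 180), "chin": (150, 240)}

interocular = dist(lm["left_eye"], lm["right_eye"])   # common normalizer
ratios = {
    "nose_to_chin / interocular": dist(lm["nose_tip"], lm["chin"]) / interocular,
    "eye_to_nose / interocular":  dist(lm["left_eye"], lm["nose_tip"]) / interocular,
}
print(ratios)  # compare against the same ratios from the questioned image
```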
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=CCTV%20Images" title="CCTV Images">CCTV Images</a>, <a href="https://publications.waset.org/abstracts/search?q=facial%20features" title=" facial features"> facial features</a>, <a href="https://publications.waset.org/abstracts/search?q=photo-anthropometry" title=" photo-anthropometry"> photo-anthropometry</a>, <a href="https://publications.waset.org/abstracts/search?q=superimposition" title=" superimposition"> superimposition</a> </p> <a href="https://publications.waset.org/abstracts/31353/forensic-comparison-of-facial-images-for-human-identification" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/31353.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">529</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">5055</span> Facial Emotion Recognition Using Deep Learning</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Ashutosh%20Mishra">Ashutosh Mishra</a>, <a href="https://publications.waset.org/abstracts/search?q=Nikhil%20Goyal"> Nikhil Goyal</a> </p> <p class="card-text"><strong>Abstract:</strong></p> A 3D facial emotion recognition model based on deep learning is proposed in this paper. Two convolution layers and a pooling layer are employed in the deep learning architecture. After the convolution process, the pooling is finished. The probabilities for various classes of human faces are calculated using the sigmoid activation function. To verify the efficiency of deep learning-based systems, a set of faces. The Kaggle dataset is used to verify the accuracy of a deep learning-based face recognition model. The model's accuracy is about 65 percent, which is lower than that of other facial expression recognition techniques. Despite significant gains in representation precision due to the nonlinearity of profound image representations. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=facial%20recognition" title="facial recognition">facial recognition</a>, <a href="https://publications.waset.org/abstracts/search?q=computational%20intelligence" title=" computational intelligence"> computational intelligence</a>, <a href="https://publications.waset.org/abstracts/search?q=convolutional%20neural%20network" title=" convolutional neural network"> convolutional neural network</a>, <a href="https://publications.waset.org/abstracts/search?q=depth%20map" title=" depth map"> depth map</a> </p> <a href="https://publications.waset.org/abstracts/139253/facial-emotion-recognition-using-deep-learning" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/139253.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">231</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">5054</span> Individualized Emotion Recognition Through Dual-Representations and Ground-Established Ground Truth</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Valentina%20Zhang">Valentina Zhang</a> </p> <p class="card-text"><strong>Abstract:</strong></p> While facial expression is a complex and individualized behavior, all facial emotion recognition (FER) systems known to us rely on a single facial representation and are trained on universal data. We conjecture that: (i) different facial representations can provide different, sometimes complementing views of emotions; (ii) when employed collectively in a discussion group setting, they enable more accurate emotion reading which is highly desirable in autism care and other applications context sensitive to errors. In this paper, we first study FER using pixel-based DL vs semantics-based DL in the context of deepfake videos. Our experiment indicates that while the semantics-trained model performs better with articulated facial feature changes, the pixel-trained model outperforms on subtle or rare facial expressions. Armed with these findings, we have constructed an adaptive FER system learning from both types of models for dyadic or small interacting groups and further leveraging the synthesized group emotions as the ground truth for individualized FER training. Using a collection of group conversation videos, we demonstrate that FER accuracy and personalization can benefit from such an approach. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=neurodivergence%20care" title="neurodivergence care">neurodivergence care</a>, <a href="https://publications.waset.org/abstracts/search?q=facial%20emotion%20recognition" title=" facial emotion recognition"> facial emotion recognition</a>, <a href="https://publications.waset.org/abstracts/search?q=deep%20learning" title=" deep learning"> deep learning</a>, <a href="https://publications.waset.org/abstracts/search?q=ground%20truth%20for%20supervised%20learning" title=" ground truth for supervised learning"> ground truth for supervised learning</a> </p> <a href="https://publications.waset.org/abstracts/144009/individualized-emotion-recognition-through-dual-representations-and-ground-established-ground-truth" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/144009.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">147</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">5053</span> Technological Approach in Question Formation for Assessment of Interviewees</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=S.%20Shujan">S. Shujan</a>, <a href="https://publications.waset.org/abstracts/search?q=A.%20T.%20Rupasinghe"> A. T. Rupasinghe</a>, <a href="https://publications.waset.org/abstracts/search?q=N.%20L.%20Gunawardena"> N. L. Gunawardena</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Numerous studies have determined that there is a direct correlation between the successful interviewee and the nonverbal behavioral patterns of that person during the interview. In this study, we focus on formations of interview questions in such a way that, it gets an opportunity for assessing interviewee through the answers using the nonverbal behavioral cues. From all the nonverbal behavioral factors we have identified, in this study priority is given to the ‘facial expression variations’ with the assistance of facial expression analytics tool; this research proposes a novel approach in question formation for the assessment of interviewees in ‘Software Industry’. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=assessments" title="assessments">assessments</a>, <a href="https://publications.waset.org/abstracts/search?q=hirability" title=" hirability"> hirability</a>, <a href="https://publications.waset.org/abstracts/search?q=interviews" title=" interviews"> interviews</a>, <a href="https://publications.waset.org/abstracts/search?q=non-verbal%20behaviour%20patterns" title=" non-verbal behaviour patterns"> non-verbal behaviour patterns</a>, <a href="https://publications.waset.org/abstracts/search?q=question%20formation" title=" question formation"> question formation</a> </p> <a href="https://publications.waset.org/abstracts/41604/technological-approach-in-question-formation-for-assessment-of-interviewees" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/41604.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">318</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">5052</span> 3D Human Face Reconstruction in Unstable Conditions </h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Xiaoyuan%20Suo">Xiaoyuan Suo</a> </p> <p class="card-text"><strong>Abstract:</strong></p> 3D object reconstruction is a broad research area within the computer vision field involving many stages and still open problems. One of the existing challenges in this field lies with micromotion, such as the facial expressions on the appearance of the human or animal face. Similar literatures in this field focuses on 3D reconstruction in stable conditions such as an existing image or photos taken in a rather static environment, while the purpose of this work is to discuss a flexible scan system using multiple cameras that can correctly reconstruct 3D stable and moving objects -- human face with expression in particular. Further, a mathematical model is proposed at the end of this literature to automate the 3D object reconstruction process. The reconstruction process takes several stages. Firstly, a set of simple 2D lines would be projected onto the object and hence a set of uneven curvy lines can be obtained, which represents the 3D numerical data of the surface. The lines and their shapes will help to identify object’s 3D construction in pixels. With the two-recorded angles and their distance from the camera, a simple mathematical calculation would give the resulting coordinate of each projected line in an absolute 3D space. This proposed research will benefit many practical areas, including but not limited to biometric identification, authentications, cybersecurity, preservation of cultural heritage, drama acting especially those with rapid and complex facial gestures, and many others. Specifically, this will (I) provide a brief survey of comparable techniques existing in this field. (II) discuss a set of specialized methodologies or algorithms for effective reconstruction of 3D objects. (III)implement, and testing the developed methodologies. (IV) verify findings with data collected from experiments. (V) conclude with lessons learned and final thoughts. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=3D%20photogrammetry" title="3D photogrammetry">3D photogrammetry</a>, <a href="https://publications.waset.org/abstracts/search?q=3D%20object%20reconstruction" title=" 3D object reconstruction"> 3D object reconstruction</a>, <a href="https://publications.waset.org/abstracts/search?q=facial%20expression%20recognition" title=" facial expression recognition"> facial expression recognition</a>, <a href="https://publications.waset.org/abstracts/search?q=facial%20recognition" title=" facial recognition"> facial recognition</a> </p> <a href="https://publications.waset.org/abstracts/92745/3d-human-face-reconstruction-in-unstable-conditions" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/92745.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">150</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">5051</span> Management of Facial Nerve Palsy Following Physiotherapy </h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Bassam%20Band">Bassam Band</a>, <a href="https://publications.waset.org/abstracts/search?q=Simon%20Freeman"> Simon Freeman</a>, <a href="https://publications.waset.org/abstracts/search?q=Rohan%20Munir"> Rohan Munir</a>, <a href="https://publications.waset.org/abstracts/search?q=Hisham%20Band"> Hisham Band</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Objective: To determine efficacy of facial physiotherapy provided for patients with facial nerve palsy. Design: Retrospective study Subjects: 54 patients diagnosed with Facial nerve palsy were included in the study after they met the selection criteria including unilateral facial paralysis and start of therapy twelve months after the onset of facial nerve palsy. Interventions: Patients received the treatment offered at a facial physiotherapy clinic consisting of: Trophic electrical stimulation, surface electromyography with biofeedback, neuromuscular re-education and myofascial release. Main measures: The Sunnybrook facial grading scale was used to evaluate the severity of facial paralysis. Results: This study demonstrated the positive impact of physiotherapy for patient with facial nerve palsy with improvement of 24.2% on the Sunnybrook facial grading score from a mean baseline of 34.2% to 58.2%. The greatest improvement looking at different causes was seen in patient who had reconstructive surgery post Acoustic Neuroma at 31.3%. Conclusion: The therapy shows significant improvement for patients with facial nerve palsy even when started 12 months post onset of paralysis across different causes. This highlights the benefit of this non-invasive technique in managing facial nerve paralysis and possibly preventing the need for surgery. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=facial%20nerve%20palsy" title="facial nerve palsy">facial nerve palsy</a>, <a href="https://publications.waset.org/abstracts/search?q=treatment" title=" treatment"> treatment</a>, <a href="https://publications.waset.org/abstracts/search?q=physiotherapy" title=" physiotherapy"> physiotherapy</a>, <a href="https://publications.waset.org/abstracts/search?q=bells%20palsy" title=" bells palsy"> bells palsy</a>, <a href="https://publications.waset.org/abstracts/search?q=acoustic%20neuroma" title=" acoustic neuroma"> acoustic neuroma</a>, <a href="https://publications.waset.org/abstracts/search?q=ramsey-hunt%20syndrome" title=" ramsey-hunt syndrome"> ramsey-hunt syndrome</a> </p> <a href="https://publications.waset.org/abstracts/19940/management-of-facial-nerve-palsy-following-physiotherapy" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/19940.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">535</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">5050</span> Automatic Facial Skin Segmentation Using Possibilistic C-Means Algorithm for Evaluation of Facial Surgeries</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Elham%20Alaee">Elham Alaee</a>, <a href="https://publications.waset.org/abstracts/search?q=Mousa%20Shamsi"> Mousa Shamsi</a>, <a href="https://publications.waset.org/abstracts/search?q=Hossein%20Ahmadi"> Hossein Ahmadi</a>, <a href="https://publications.waset.org/abstracts/search?q=Soroosh%20Nazem"> Soroosh Nazem</a>, <a href="https://publications.waset.org/abstracts/search?q=Mohammad%20Hossein%20Sedaaghi"> Mohammad Hossein Sedaaghi </a> </p> <p class="card-text"><strong>Abstract:</strong></p> Human face has a fundamental role in the appearance of individuals. So the importance of facial surgeries is undeniable. Thus, there is a need for the appropriate and accurate facial skin segmentation in order to extract different features. Since Fuzzy C-Means (FCM) clustering algorithm doesn’t work appropriately for noisy images and outliers, in this paper we exploit Possibilistic C-Means (PCM) algorithm in order to segment the facial skin. For this purpose, first, we convert facial images from RGB to YCbCr color space. To evaluate performance of the proposed algorithm, the database of Sahand University of Technology, Tabriz, Iran was used. In order to have a better understanding from the proposed algorithm; FCM and Expectation-Maximization (EM) algorithms are also used for facial skin segmentation. The proposed method shows better results than the other segmentation methods. Results include misclassification error (0.032) and the region’s area error (0.045) for the proposed algorithm. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=facial%20image" title="facial image">facial image</a>, <a href="https://publications.waset.org/abstracts/search?q=segmentation" title=" segmentation"> segmentation</a>, <a href="https://publications.waset.org/abstracts/search?q=PCM" title=" PCM"> PCM</a>, <a href="https://publications.waset.org/abstracts/search?q=FCM" title=" FCM"> FCM</a>, <a href="https://publications.waset.org/abstracts/search?q=skin%20error" title=" skin error"> skin error</a>, <a href="https://publications.waset.org/abstracts/search?q=facial%20surgery" title=" facial surgery"> facial surgery</a> </p> <a href="https://publications.waset.org/abstracts/10297/automatic-facial-skin-segmentation-using-possibilistic-c-means-algorithm-for-evaluation-of-facial-surgeries" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/10297.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">586</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">5049</span> A Robust Spatial Feature Extraction Method for Facial Expression Recognition</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=H.%20G.%20C.%20P.%20Dinesh">H. G. C. P. Dinesh</a>, <a href="https://publications.waset.org/abstracts/search?q=G.%20Tharshini"> G. Tharshini</a>, <a href="https://publications.waset.org/abstracts/search?q=M.%20P.%20B.%20Ekanayake"> M. P. B. Ekanayake</a>, <a href="https://publications.waset.org/abstracts/search?q=G.%20M.%20R.%20I.%20Godaliyadda"> G. M. R. I. Godaliyadda</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This paper presents a new spatial feature extraction method based on principle component analysis (PCA) and Fisher Discernment Analysis (FDA) for facial expression recognition. It not only extracts reliable features for classification, but also reduces the feature space dimensions of pattern samples. In this method, first each gray scale image is considered in its entirety as the measurement matrix. Then, principle components (PCs) of row vectors of this matrix and variance of these row vectors along PCs are estimated. Therefore, this method would ensure the preservation of spatial information of the facial image. Afterwards, by incorporating the spectral information of the eigen-filters derived from the PCs, a feature vector was constructed, for a given image. Finally, FDA was used to define a set of basis in a reduced dimension subspace such that the optimal clustering is achieved. The method of FDA defines an inter-class scatter matrix and intra-class scatter matrix to enhance the compactness of each cluster while maximizing the distance between cluster marginal points. In order to matching the test image with the training set, a cosine similarity based Bayesian classification was used. The proposed method was tested on the Cohn-Kanade database and JAFFE database. It was observed that the proposed method which incorporates spatial information to construct an optimal feature space outperforms the standard PCA and FDA based methods. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=facial%20expression%20recognition" title="facial expression recognition">facial expression recognition</a>, <a href="https://publications.waset.org/abstracts/search?q=principle%20component%20analysis%20%28PCA%29" title=" principle component analysis (PCA)"> principle component analysis (PCA)</a>, <a href="https://publications.waset.org/abstracts/search?q=fisher%20discernment%20analysis%20%28FDA%29" title=" fisher discernment analysis (FDA)"> fisher discernment analysis (FDA)</a>, <a href="https://publications.waset.org/abstracts/search?q=eigen-filter" title=" eigen-filter"> eigen-filter</a>, <a href="https://publications.waset.org/abstracts/search?q=cosine%20similarity" title=" cosine similarity"> cosine similarity</a>, <a href="https://publications.waset.org/abstracts/search?q=bayesian%20classifier" title=" bayesian classifier"> bayesian classifier</a>, <a href="https://publications.waset.org/abstracts/search?q=f-measure" title=" f-measure"> f-measure</a> </p> <a href="https://publications.waset.org/abstracts/36459/a-robust-spatial-feature-extraction-method-for-facial-expression-recognition" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/36459.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">425</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">5048</span> A Theoretical Study on Pain Assessment through Human Facial Expresion</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Mrinal%20Kanti%20Bhowmik">Mrinal Kanti Bhowmik</a>, <a href="https://publications.waset.org/abstracts/search?q=Debanjana%20Debnath%20Jr."> Debanjana Debnath Jr.</a>, <a href="https://publications.waset.org/abstracts/search?q=Debotosh%20Bhattacharjee"> Debotosh Bhattacharjee</a> </p> <p class="card-text"><strong>Abstract:</strong></p> A facial expression is undeniably the human manners. It is a significant channel for human communication and can be applied to extract emotional features accurately. People in pain often show variations in facial expressions that are readily observable to others. A core of actions is likely to occur or to increase in intensity when people are in pain. To illustrate the changes in the facial appearance, a system known as Facial Action Coding System (FACS) is pioneered by Ekman and Friesen for human observers. According to Prkachin and Solomon, a set of such actions carries the bulk of information about pain. Thus, the Prkachin and Solomon pain intensity (PSPI) metric is defined. So, it is very important to notice that facial expressions, being a behavioral source in communication media, provide an important opening into the issues of non-verbal communication in pain. People express their pain in many ways, and this pain behavior is the basis on which most inferences about pain are drawn in clinical and research settings. Hence, to understand the roles of different pain behaviors, it is essential to study the properties. For the past several years, the studies are concentrated on the properties of one specific form of pain behavior i.e. facial expression. 
This paper presents a comprehensive study on pain assessment that can model and estimate the intensity of pain a patient is suffering. It also reviews the historical background of different pain assessment techniques in the context of painful expressions. Different approaches incorporate FACS from a psychological viewpoint and a pain intensity score based on the PSPI metric in pain estimation. This paper provides an in-depth analysis of the different approaches used in pain estimation and presents the observations found with each technique. It also offers a brief study of the distinguishing features of real and fake pain. The necessity of the study therefore lies in the emerging field of painful-face assessment in clinical settings. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=facial%20action%20coding%20system%20%28FACS%29" title="facial action coding system (FACS)">facial action coding system (FACS)</a>, <a href="https://publications.waset.org/abstracts/search?q=pain" title=" pain"> pain</a>, <a href="https://publications.waset.org/abstracts/search?q=pain%20behavior" title=" pain behavior"> pain behavior</a>, <a href="https://publications.waset.org/abstracts/search?q=Prkachin%20and%20Solomon%20pain%20intensity%20%28PSPI%29" title=" Prkachin and Solomon pain intensity (PSPI)"> Prkachin and Solomon pain intensity (PSPI)</a> </p> <a href="https://publications.waset.org/abstracts/22187/a-theoretical-study-on-pain-assessment-through-human-facial-expresion" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/22187.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">346</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">5047</span> Context-Aware Recommender Systems Using User&#039;s Emotional State</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Hoyeon%20Park">Hoyeon Park</a>, <a href="https://publications.waset.org/abstracts/search?q=Kyoung-jae%20Kim"> Kyoung-jae Kim</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Product recommendation is a field of research that has received much attention amid the recent information-overload phenomenon. The proliferation of mobile environments and social media inevitably affects recommendation results, depending on how the factors of the user's situation are reflected in the recommendation process. Recently, research attention has spread to context-aware recommender systems, which reflect the user's contextual information in the recommendation process. However, most context-aware recommender system research to date has been limited in that it reflects only the passive context of users. Users can be expected to express their contextual information through their active behavior, which increases the importance of context-aware recommender systems that reflect this information. The purpose of this study is to propose a context-aware recommender system that reflects the user's emotional state as active context information in the recommendation process.
A context-aware recommender system can make more sophisticated recommendations by utilizing the user's contextual information, and it has the advantage over existing recommender systems that the user's emotional factors can be considered. In this study, we propose a method to infer the user's emotional state, which is one element of the user's context information, from the user's facial expression data, and to reflect it in the recommendation process. This study collects the facial expression data of a user who is looking at a specific product, together with the user's product preference score. Then, we classify the facial expression data into several categories following previous research and construct a model that can predict them. Next, the predicted results are applied to existing collaborative filtering with contextual information. The study showed that the recommendation results of a context-aware recommender system that includes facial expression information are improved in terms of recommendation performance. Based on these results, it is expected that future research will be conducted on recommender systems reflecting various types of contextual information. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=context-aware" title="context-aware">context-aware</a>, <a href="https://publications.waset.org/abstracts/search?q=emotional%20state" title=" emotional state"> emotional state</a>, <a href="https://publications.waset.org/abstracts/search?q=recommender%20systems" title=" recommender systems"> recommender systems</a>, <a href="https://publications.waset.org/abstracts/search?q=business%20analytics" title=" business analytics"> business analytics</a> </p> <a href="https://publications.waset.org/abstracts/88567/context-aware-recommender-systems-using-users-emotional-state" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/88567.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">229</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">5046</span> Quantification and Preference of Facial Asymmetry of the Sub-Saharan Africans&#039; 3D Facial Models</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Anas%20Ibrahim%20Yahaya">Anas Ibrahim Yahaya</a>, <a href="https://publications.waset.org/abstracts/search?q=Christophe%20Soligo"> Christophe Soligo</a> </p> <p class="card-text"><strong>Abstract:</strong></p> A substantial body of literature has reported on facial symmetry and asymmetry and their role in human mate choice. However, major gaps persist, with nearly all data originating from WEIRD (Western, Educated, Industrialised, Rich and Democratic) populations, and results remaining largely equivocal when compared across studies. This study aims to quantify facial asymmetry from the 3D faces of the Hausa of northern Nigeria, and to determine Hausa perceptions and judgements of standardised facial images with different levels of asymmetry, using questionnaires.
Data were analysed using RStudio software, and results indicated that individuals with lower levels of facial asymmetry (near facial symmetry) were perceived as more attractive, more suitable as marriage partners and more caring, whereas individuals with higher levels of facial asymmetry were perceived as more aggressive. The study concludes that all faces are asymmetric, including the most beautiful ones, and that the preference for less asymmetric faces depends not on a single facial trait but rather on multiple facial traits; thus, the study supports the view that physical attractiveness is not just an arbitrary social construct but, at least in part, a cue to general health, possibly related to environmental context. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=face" title="face">face</a>, <a href="https://publications.waset.org/abstracts/search?q=asymmetry" title=" asymmetry"> asymmetry</a>, <a href="https://publications.waset.org/abstracts/search?q=symmetry" title=" symmetry"> symmetry</a>, <a href="https://publications.waset.org/abstracts/search?q=Hausa" title=" Hausa"> Hausa</a>, <a href="https://publications.waset.org/abstracts/search?q=preference" title=" preference"> preference</a> </p> <a href="https://publications.waset.org/abstracts/82975/quantification-and-preference-of-facial-asymmetry-of-the-sub-saharan-africans-3d-facial-models" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/82975.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">193</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">5045</span> Difficulties in the Emotional Processing of Intimate Partner Violence Perpetrators</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Javier%20Comes%20Fayos">Javier Comes Fayos</a>, <a href="https://publications.waset.org/abstracts/search?q=Isabel%20Rodr%C3%ADGuez%20Moreno"> Isabel Rodríguez Moreno</a>, <a href="https://publications.waset.org/abstracts/search?q=Sara%20Bressanutti"> Sara Bressanutti</a>, <a href="https://publications.waset.org/abstracts/search?q=Marisol%20Lila"> Marisol Lila</a>, <a href="https://publications.waset.org/abstracts/search?q=Angel%20Romero%20Mart%C3%ADNez"> Angel Romero Martínez</a>, <a href="https://publications.waset.org/abstracts/search?q=Luis%20Moya%20Albiol"> Luis Moya Albiol</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Given the great impact produced by gender-based violence, a comprehensive approach to it seems essential. Consequently, research has focused on risk factors for violent behaviour, linking various psychosocial variables, as well as cognitive and neuropsychological deficits, with the aggressors. However, studies on affective processing are scarce, so the present study investigates possible emotional alterations in men convicted of gender violence. The participants were 51 aggressors, who attended the CONTEXTO program with sentences of less than two years, and 47 men with no history of violence. The two groups did not differ in age, socioeconomic level, education, or alcohol and other substance consumption.
Anger, alexithymia and facial recognition of other people's emotions were assessed through the State-Trait Anger Expression Inventory (STAXI-2), the Toronto Alexithymia Scale (TAS-20) and the Reading the Mind in the Eyes test (REM), respectively. Men convicted of gender-based violence showed higher scores on the anger trait and temperament dimensions, as well as on the anger expression index. They also scored higher on alexithymia and on the identification and emotional expression subscales. In addition, they showed greater difficulty in the facial recognition of emotions, as reflected in a lower score on the REM. These results seem to show difficulties in different affective areas in men convicted of gender violence. The deficits are reflected in greater difficulty in identifying and expressing emotions, in processing anger and in recognizing the emotions of others. All of these difficulties have been related to the use of violent behavior. Consequently, it is essential to include emotional regulation in intervention programs for men who have been convicted of gender-based violence. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=alexithymia" title="alexithymia">alexithymia</a>, <a href="https://publications.waset.org/abstracts/search?q=anger" title=" anger"> anger</a>, <a href="https://publications.waset.org/abstracts/search?q=emotional%20processing" title=" emotional processing"> emotional processing</a>, <a href="https://publications.waset.org/abstracts/search?q=emotional%20recognition" title=" emotional recognition"> emotional recognition</a>, <a href="https://publications.waset.org/abstracts/search?q=empathy" title=" empathy"> empathy</a>, <a href="https://publications.waset.org/abstracts/search?q=intimate%20partner%20violence" title=" intimate partner violence"> intimate partner violence</a> </p> <a href="https://publications.waset.org/abstracts/135112/difficulties-in-the-emotional-processing-of-intimate-partner-violence-perpetrators" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/135112.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">199</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">5044</span> Facial Behavior Modifications Following the Diffusion of the Use of Protective Masks Due to COVID-19</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Andreas%20Aceranti">Andreas Aceranti</a>, <a href="https://publications.waset.org/abstracts/search?q=Simonetta%20Vernocchi"> Simonetta Vernocchi</a>, <a href="https://publications.waset.org/abstracts/search?q=Marco%20Colorato"> Marco Colorato</a>, <a href="https://publications.waset.org/abstracts/search?q=Daniel%20Zaccariello"> Daniel Zaccariello</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Our study explores the usefulness of implementing facial expression recognition capabilities and using the Facial Action Coding System (FACS) in contexts where the other person is wearing a mask. In the communication process, subjects use a plurality of distinct and autonomous signalling systems. Among them, the system of facial mimicry is worthy of attention.
Basic emotion theorists have identified the existence of specific and universal patterns of facial expression related to seven basic emotions (anger, disgust, contempt, fear, sadness, surprise, and happiness) that distinguish one emotion from another. However, due to the COVID-19 pandemic, we have come up against the problem that the lower half of the face is covered by masks and therefore cannot be investigated. Facial-emotional behavior is a good starting point for understanding: (1) the affective state (such as emotions); (2) cognitive activity (perplexity, concentration, boredom); (3) temperament and personality traits (hostility, sociability, shyness); (4) psychopathology (such as diagnostic information relevant to depression, mania, schizophrenia, and less severe disorders); and (5) psychopathological processes that occur during social interactions between patient and analyst. There are numerous methods to measure facial movements resulting from the action of muscles; see, for example, the measurement of visible facial actions using coding systems (non-intrusive systems that require the presence of an observer who encodes and categorizes behaviors) and the measurement of the electrical "discharges" of contracting muscles (facial electromyography; EMG). However, the measuring system devised by Ekman and Friesen (2002), the Facial Action Coding System (FACS), is the most comprehensive, complete, and versatile. Our study, carried out on about 1,500 subjects over three years of work, allowed us to highlight how the movements of the hands and the upper part of the face change depending on whether the subject wears a mask or not. We have been able to identify specific alterations to the subjects’ hand movement patterns and their upper-face expressions while wearing masks compared to when not wearing them. We believe that identifying how body language changes when facial expressions are impaired can provide a better understanding of the link between facial and bodily non-verbal language.
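<p class="card-text">One practical consequence of these observations is that, with masked subjects, automated analysis can be restricted to the upper face (brows and eyes, where action units such as AU1, AU2 and AU4 remain visible). The sketch below crops that region using OpenCV's stock Haar face detector; the 55% upper-face cut-off is an illustrative assumption, not a FACS convention or the authors' protocol.</p>
<pre><code>
import cv2

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def upper_face_crops(frame_bgr):
    """Detect faces and return only the upper part of each detection."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    crops = []
    for (x, y, w, h) in cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5):
        crops.append(frame_bgr[y:y + int(0.55 * h), x:x + w])  # brows and eyes
    return crops
</code></pre>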
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=facial%20action%20coding%20system" title="facial action coding system">facial action coding system</a>, <a href="https://publications.waset.org/abstracts/search?q=COVID-19" title=" COVID-19"> COVID-19</a>, <a href="https://publications.waset.org/abstracts/search?q=masks" title=" masks"> masks</a>, <a href="https://publications.waset.org/abstracts/search?q=facial%20analysis" title=" facial analysis"> facial analysis</a> </p> <a href="https://publications.waset.org/abstracts/160896/facial-behavior-modifications-following-the-diffusion-of-the-use-of-protective-masks-due-to-covid-19" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/160896.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">77</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">5043</span> Somatosensory-Evoked Blink Reflex in Peripheral Facial Palsy</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Sarah%20Sayed%20El-%20Tawab">Sarah Sayed El- Tawab</a>, <a href="https://publications.waset.org/abstracts/search?q=Emmanuel%20Kamal%20Azix%20Saba"> Emmanuel Kamal Azix Saba</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Objectives: Somatosensory blink reflex (SBR) is an eye blink response obtained from electrical stimulation of peripheral nerves or skin area of the body. It has been studied in various neurological diseases as well as among healthy subjects in different population. We designed this study to detect SBR positivity in patients with facial palsy and patients with post facial syndrome, to relate the facial palsy severity and the presence of SBR, and to associate between trigeminal BR changes and SBR positivity in peripheral facial palsy patients. Methods: 50 patients with peripheral facial palsy and post-facial syndrome 31 age and gender matched healthy volunteers were enrolled to this study. Facial motor conduction studies, trigeminal BR, and SBR were studied in all. Results: SBR was elicited in 67.7% of normal subjects, in 68% of PFS group, and in 32% of PFP group. On the non-paralytic side SBR was found in 28% by paralyzed side stimulation and in 24% by healthy side stimulation among PFP patients. For PFS group SBR was found on the non- paralytic side in 48%. Bilateral SBR elicitability was higher than its unilateral elicitability. Conclusion: Increased brainstem interneurons excitability is not essential to generate SBR. The hypothetical sensory-motor gating mechanism is responsible for SBR generation. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=somatosensory%20evoked%20blink%20reflex" title="somatosensory evoked blink reflex">somatosensory evoked blink reflex</a>, <a href="https://publications.waset.org/abstracts/search?q=post%20facial%20syndrome" title=" post facial syndrome"> post facial syndrome</a>, <a href="https://publications.waset.org/abstracts/search?q=blink%20reflex" title=" blink reflex"> blink reflex</a>, <a href="https://publications.waset.org/abstracts/search?q=enchanced%20gain" title=" enchanced gain"> enchanced gain</a> </p> <a href="https://publications.waset.org/abstracts/18913/somatosensory-evoked-blink-reflex-in-peripheral-facial-palsy" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/18913.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">619</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">5042</span> A Neuron Model of Facial Recognition and Detection of an Authorized Entity Using Machine Learning System</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=J.%20K.%20Adedeji">J. K. Adedeji</a>, <a href="https://publications.waset.org/abstracts/search?q=M.%20O.%20Oyekanmi"> M. O. Oyekanmi</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This paper has critically examined the use of Machine Learning procedures in curbing unauthorized access into valuable areas of an organization. The use of passwords, pin codes, user&rsquo;s identification in recent times has been partially successful in curbing crimes involving identities, hence the need for the design of a system which incorporates biometric characteristics such as DNA and pattern recognition of variations in facial expressions. The facial model used is the OpenCV library which is based on the use of certain physiological features, the Raspberry Pi 3 module is used to compile the OpenCV library, which extracts and stores the detected faces into the datasets directory through the use of camera. The model is trained with 50 epoch run in the database and recognized by the Local Binary Pattern Histogram (LBPH) recognizer contained in the OpenCV. The training algorithm used by the neural network is back propagation coded using python algorithmic language with 200 epoch runs to identify specific resemblance in the exclusive OR (XOR) output neurons. The research however confirmed that physiological parameters are better effective measures to curb crimes relating to identities. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=biometric%20characters" title="biometric characters">biometric characters</a>, <a href="https://publications.waset.org/abstracts/search?q=facial%20recognition" title=" facial recognition"> facial recognition</a>, <a href="https://publications.waset.org/abstracts/search?q=neural%20network" title=" neural network"> neural network</a>, <a href="https://publications.waset.org/abstracts/search?q=OpenCV" title=" OpenCV"> OpenCV</a> </p> <a href="https://publications.waset.org/abstracts/93018/a-neuron-model-of-facial-recognition-and-detection-of-an-authorized-entity-using-machine-learning-system" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/93018.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">256</span> </span> </div> </div> <ul class="pagination"> <li class="page-item disabled"><span class="page-link">&lsaquo;</span></li> <li class="page-item active"><span class="page-link">1</span></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=facial%20expression%20identification&amp;page=2">2</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=facial%20expression%20identification&amp;page=3">3</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=facial%20expression%20identification&amp;page=4">4</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=facial%20expression%20identification&amp;page=5">5</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=facial%20expression%20identification&amp;page=6">6</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=facial%20expression%20identification&amp;page=7">7</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=facial%20expression%20identification&amp;page=8">8</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=facial%20expression%20identification&amp;page=9">9</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=facial%20expression%20identification&amp;page=10">10</a></li> <li class="page-item disabled"><span class="page-link">...</span></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=facial%20expression%20identification&amp;page=169">169</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=facial%20expression%20identification&amp;page=170">170</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=facial%20expression%20identification&amp;page=2" rel="next">&rsaquo;</a></li> </ul> </div> </main> <footer> <div id="infolinks" class="pt-3 pb-2"> <div class="container"> <div style="background-color:#f5f5f5;" class="p-3"> <div class="row"> <div class="col-md-2"> <ul class="list-unstyled"> About <li><a href="https://waset.org/page/support">About Us</a></li> <li><a href="https://waset.org/page/support#legal-information">Legal</a></li> <li><a 
target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/WASET-16th-foundational-anniversary.pdf">WASET celebrates its 16th foundational anniversary</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Account <li><a href="https://waset.org/profile">My Account</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Explore <li><a href="https://waset.org/disciplines">Disciplines</a></li> <li><a href="https://waset.org/conferences">Conferences</a></li> <li><a href="https://waset.org/conference-programs">Conference Program</a></li> <li><a href="https://waset.org/committees">Committees</a></li> <li><a href="https://publications.waset.org">Publications</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Research <li><a href="https://publications.waset.org/abstracts">Abstracts</a></li> <li><a href="https://publications.waset.org">Periodicals</a></li> <li><a href="https://publications.waset.org/archive">Archive</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Open Science <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Science-Philosophy.pdf">Open Science Philosophy</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Science-Award.pdf">Open Science Award</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Society-Open-Science-and-Open-Innovation.pdf">Open Innovation</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Postdoctoral-Fellowship-Award.pdf">Postdoctoral Fellowship Award</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Scholarly-Research-Review.pdf">Scholarly Research Review</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Support <li><a href="https://waset.org/page/support">Support</a></li> <li><a href="https://waset.org/profile/messages/create">Contact Us</a></li> <li><a href="https://waset.org/profile/messages/create">Report Abuse</a></li> </ul> </div> </div> </div> </div> </div> <div class="container text-center"> <hr style="margin-top:0;margin-bottom:.3rem;"> <a href="https://creativecommons.org/licenses/by/4.0/" target="_blank" class="text-muted small">Creative Commons Attribution 4.0 International License</a> <div id="copy" class="mt-2">&copy; 2024 World Academy of Science, Engineering and Technology</div> </div> </footer> <a href="javascript:" id="return-to-top"><i class="fas fa-arrow-up"></i></a> <div class="modal" id="modal-template"> <div class="modal-dialog"> <div class="modal-content"> <div class="row m-0 mt-1"> <div class="col-md-12"> <button type="button" class="close" data-dismiss="modal" aria-label="Close"><span aria-hidden="true">&times;</span></button> </div> </div> <div class="modal-body"></div> </div> </div> </div> <script src="https://cdn.waset.org/static/plugins/jquery-3.3.1.min.js"></script> <script src="https://cdn.waset.org/static/plugins/bootstrap-4.2.1/js/bootstrap.bundle.min.js"></script> <script src="https://cdn.waset.org/static/js/site.js?v=150220211556"></script> <script> jQuery(document).ready(function() { /*jQuery.get("https://publications.waset.org/xhr/user-menu", function (response) { jQuery('#mainNavMenu').append(response); });*/ jQuery.get({ url: "https://publications.waset.org/xhr/user-menu", cache: false }).then(function(response){ jQuery('#mainNavMenu').append(response); }); }); </script> 
</body> </html>
