
Search results for: facial landmarks

href="https://publications.waset.org/abstracts">Abstracts</a> <a class="dropdown-item" href="https://publications.waset.org">Periodicals</a> <a class="dropdown-item" href="https://publications.waset.org/archive">Archive</a> </div> </li> <li class="nav-item"> <a class="nav-link" href="https://waset.org/page/support" title="Support">Support</a> </li> </ul> </div> </div> </nav> </div> </header> <main> <div class="container mt-4"> <div class="row"> <div class="col-md-9 mx-auto"> <form method="get" action="https://publications.waset.org/abstracts/search"> <div id="custom-search-input"> <div class="input-group"> <i class="fas fa-search"></i> <input type="text" class="search-query" name="q" placeholder="Author, Title, Abstract, Keywords" value="facial landmarks"> <input type="submit" class="btn_search" value="Search"> </div> </div> </form> </div> </div> <div class="row mt-3"> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Commenced</strong> in January 2007</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Frequency:</strong> Monthly</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Edition:</strong> International</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Paper Count:</strong> 347</div> </div> </div> </div> <h1 class="mt-3 mb-3 text-center" style="font-size:1.6rem;">Search results for: facial landmarks</h1> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">347</span> Emotion Recognition with Occlusions Based on Facial Expression Reconstruction and Weber Local Descriptor</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Jadisha%20Cornejo">Jadisha Cornejo</a>, <a href="https://publications.waset.org/abstracts/search?q=Helio%20Pedrini"> Helio Pedrini</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Recognition of emotions based on facial expressions has received increasing attention from the scientific community over the last years. Several fields of applications can benefit from facial emotion recognition, such as behavior prediction, interpersonal relations, human-computer interactions, recommendation systems. In this work, we develop and analyze an emotion recognition framework based on facial expressions robust to occlusions through the Weber Local Descriptor (WLD). Initially, the occluded facial expressions are reconstructed following an extension approach of Robust Principal Component Analysis (RPCA). Then, WLD features are extracted from the facial expression representation, as well as Local Binary Patterns (LBP) and Histogram of Oriented Gradients (HOG). The feature vector space is reduced using Principal Component Analysis (PCA) and Linear Discriminant Analysis (LDA). Finally, K-Nearest Neighbor (K-NN) and Support Vector Machine (SVM) classifiers are used to recognize the expressions. Experimental results on three public datasets demonstrated that the WLD representation achieved competitive accuracy rates for occluded and non-occluded facial expressions compared to other approaches available in the literature. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=emotion%20recognition" title="emotion recognition">emotion recognition</a>, <a href="https://publications.waset.org/abstracts/search?q=facial%20expression" title=" facial expression"> facial expression</a>, <a href="https://publications.waset.org/abstracts/search?q=occlusion" title=" occlusion"> occlusion</a>, <a href="https://publications.waset.org/abstracts/search?q=fiducial%20landmarks" title=" fiducial landmarks"> fiducial landmarks</a> </p> <a href="https://publications.waset.org/abstracts/90510/emotion-recognition-with-occlusions-based-on-facial-expression-reconstruction-and-weber-local-descriptor" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/90510.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">182</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">346</span> A Study on Inference from Distance Variables in Hedonic Regression</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Yan%20Wang">Yan Wang</a>, <a href="https://publications.waset.org/abstracts/search?q=Yasushi%20Asami"> Yasushi Asami</a>, <a href="https://publications.waset.org/abstracts/search?q=Yukio%20Sadahiro"> Yukio Sadahiro</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In urban area, several landmarks may affect housing price and rents, hedonic analysis should employ distance variables corresponding to each landmarks. Unfortunately, the effects of distances to landmarks on housing prices are generally not consistent with the true price. These distance variables may cause magnitude error in regression, pointing a problem of spatial multicollinearity. In this paper, we provided some approaches for getting the samples with less bias and method on locating the specific sampling area to avoid the multicollinerity problem in two specific landmarks case. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=landmarks" title="landmarks">landmarks</a>, <a href="https://publications.waset.org/abstracts/search?q=hedonic%20regression" title=" hedonic regression"> hedonic regression</a>, <a href="https://publications.waset.org/abstracts/search?q=distance%20variables" title=" distance variables"> distance variables</a>, <a href="https://publications.waset.org/abstracts/search?q=collinearity" title=" collinearity"> collinearity</a>, <a href="https://publications.waset.org/abstracts/search?q=multicollinerity" title=" multicollinerity"> multicollinerity</a> </p> <a href="https://publications.waset.org/abstracts/17025/a-study-on-inference-from-distance-variables-in-hedonic-regression" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/17025.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">452</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">345</span> A Geometric Based Hybrid Approach for Facial Feature Localization</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Priya%20Saha">Priya Saha</a>, <a href="https://publications.waset.org/abstracts/search?q=Sourav%20Dey%20Roy%20Jr."> Sourav Dey Roy Jr.</a>, <a href="https://publications.waset.org/abstracts/search?q=Debotosh%20Bhattacharjee"> Debotosh Bhattacharjee</a>, <a href="https://publications.waset.org/abstracts/search?q=Mita%20Nasipuri"> Mita Nasipuri</a>, <a href="https://publications.waset.org/abstracts/search?q=Barin%20Kumar%20De"> Barin Kumar De</a>, <a href="https://publications.waset.org/abstracts/search?q=Mrinal%20Kanti%20Bhowmik"> Mrinal Kanti Bhowmik</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Biometric face recognition technology (FRT) has gained a lot of attention due to its extensive variety of applications in both security and non-security perspectives. It has come into view to provide a secure solution in identification and verification of person identity. Although other biometric based methods like fingerprint scans, iris scans are available, FRT is verified as an efficient technology for its user-friendliness and contact freeness. Accurate facial feature localization plays an important role for many facial analysis applications including biometrics and emotion recognition. But, there are certain factors, which make facial feature localization a challenging task. On human face, expressions can be seen from the subtle movements of facial muscles and influenced by internal emotional states. These non-rigid facial movements cause noticeable alterations in locations of facial landmarks, their usual shapes, which sometimes create occlusions in facial feature areas making face recognition as a difficult problem. The paper proposes a new hybrid based technique for automatic landmark detection in both neutral and expressive frontal and near frontal face images. The method uses the concept of thresholding, sequential searching and other image processing techniques for locating the landmark points on the face. 
In addition, Graphical User Interface (GUI) based software was designed that can automatically detect 16 landmark points around the eyes, nose and mouth, the regions most affected by changes in the facial muscles. The proposed system has been tested on the widely used JAFFE and Cohn-Kanade databases. The system was also tested on the DeitY-TU face database, which was created in the Biometrics Laboratory of Tripura University under a research project funded by the Department of Electronics & Information Technology, Govt. of India. The performance of the proposed method has been evaluated in terms of error measure and accuracy. The method has a detection rate of 98.82% on the JAFFE database, 91.27% on the Cohn-Kanade database and 93.05% on the DeitY-TU database. We have also carried out a comparative study of the proposed method against techniques developed by other researchers. In future work, the located features will be used for AU detection in emotion-oriented systems.
Keywords: biometrics, face recognition, facial landmarks, image processing
Procedia: https://publications.waset.org/abstracts/22182/a-geometric-based-hybrid-approach-for-facial-feature-localization | PDF: https://publications.waset.org/abstracts/22182.pdf | Downloads: 412

344. Testing the Impact of Landmarks on Navigation through the Use of Mobile-Based Games
Authors: Demet Yesiltepe, Ruth Dalton, Ayse Ozbil
Abstract: The aim of this paper is to understand the effect of landmarks on spatial navigation. For this study, a mobile-based virtual game, 'Sea Hero Quest' (SHQ), was used. At the beginning of the game, participants were asked to look at maps which included the specific locations of players and checkpoints. After the map disappeared, participants were asked to navigate a boat and find the checkpoints in a pre-given order. By analyzing these data, we aim to better understand the effect of an important component of cities, namely landmarks, on spatial navigation. Game levels were analyzed spatially, and axial-based integration, choice and connectivity values of the levels were calculated to make comparisons. To make this kind of comparison, we focused on levels which include both local and global landmarks and levels which include only local landmarks.
The most significant contribution of this study to the urban design and planning fields is that it provides mounting evidence about the utility of landmarks and their roles in cities, given that the game was played by more than 2.5 million people. Moreover, by using these results, it may become possible to encourage cities with more global and local landmarks to have more identifiable/readable areas.
Keywords: landmarks, mobile-based games, spatial navigation, virtual environment
Procedia: https://publications.waset.org/abstracts/89074/testing-the-impact-of-landmarks-on-navigation-through-the-use-of-mobile-based-games | PDF: https://publications.waset.org/abstracts/89074.pdf | Downloads: 368

343. Anthropometric Measurements of Facial Proportions in Azerbaijan Population
Authors: Nigar Sultanova
Abstract: Facial morphology is a constant topic of concern for clinicians. When anthropometric methods were introduced into clinical practice to quantify changes in the craniofacial framework, features distinguishing various ethnic groups were discovered. Normative data on facial measurements are indispensable for precise determination of the degree of deviation from normal. The aim was to establish the reference range of facial proportions in the Azerbaijan population through anthropometric measurements of the craniofacial complex. The study group consisted of 350 healthy young subjects, 175 males and 175 females, 18 to 25 years of age, from 7 different regions of Azerbaijan. The anthropometric examination was performed according to L. Farkas's method with our modification. In order to determine the morphologic characteristics of seven regions of the craniofacial complex, 42 anthropometric measurements were selected. The anthropometric examination included the use of 33 anthropometric landmarks. The 80 indices of facial proportions suggested by Farkas and Munro were calculated: head, 10; face, 23; nose, 23; lips, 9; orbits, 11; ears, 4. The database of the North American white (NAW) population was used as a reference group. Anthropometric measurements of facial proportions in the Azerbaijan population revealed a significant difference between men and women, consistent with sexual dimorphism. In comparison with North American whites, considerable differences in facial proportions were observed in the head, face, orbits, labio-oral, nose and ear regions. However, in women of the Azerbaijani population, 29 out of 80 proportion indices were similar to the proportions of NAW women.
In the men of the Azerbaijani population, 27 out of 80 proportion indices did not reveal a statistically significant difference from the proportions of NAW men. Estimation of the reference range of facial proportions in the Azerbaijan population might be helpful in formulating surgical plans for the successful treatment of congenital or post-traumatic facial deformities.
Keywords: facial morphology, anthropometry, indices of proportion, measurement
Procedia: https://publications.waset.org/abstracts/147472/anthropometric-measurements-of-facial-proportions-in-azerbaijan-population | PDF: https://publications.waset.org/abstracts/147472.pdf | Downloads: 117

342. Use of Computer and Machine Learning in Facial Recognition
Authors: Neha Singh, Ananya Arora
Abstract: Facial expression measurement plays a crucial role in the identification of emotion. Facial expression plays a key role in psychophysiology, neural bases, and emotional disorders, to name a few. The Facial Action Coding System (FACS) has proven to be the most efficient and widely used of the various systems used to describe facial expressions. Coders can manually code facial expressions with FACS and, by viewing video-recorded facial behaviour at a specified frame rate and in slow motion, can decompose it into action units (AUs). Action units are the smallest visually discriminable facial movements. FACS explicitly differentiates between facial actions and inferences about what the actions mean. Action units are the fundamental units of the FACS methodology. FACS is regarded as the standard measure for facial behaviour and finds application in various fields of study beyond emotion science. These include facial neuromuscular disorders, neuroscience, computer vision, computer graphics and animation, and face encoding for digital processing. This paper discusses the conceptual basis for FACS, a numerical listing of the discrete facial movements identified by the system, the system's psychometric evaluation, and the software's recommended training requirements.
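As an illustration of how AU codes are used downstream, the sketch below matches a set of detected action units against emotion prototypes commonly cited in the FACS/EMFACS literature; the prototypes and the code are illustrative and not taken from this paper:

```python
# Illustrative only: commonly cited AU prototypes, not the paper's definitions.
EMOTION_PROTOTYPES = {
    "happiness": {6, 12},        # cheek raiser + lip corner puller
    "sadness": {1, 4, 15},       # inner brow raiser + brow lowerer + lip corner depressor
    "surprise": {1, 2, 5, 26},   # brow raisers + upper lid raiser + jaw drop
    "anger": {4, 5, 7, 23},      # brow lowerer + lid raiser/tightener + lip tightener
}

def match_emotions(detected_aus):
    """Return emotions whose prototype AUs are all present in the detected set."""
    detected = set(detected_aus)
    return [emotion for emotion, proto in EMOTION_PROTOTYPES.items() if proto <= detected]

print(match_emotions([6, 12, 25]))   # -> ['happiness']
```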
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=facial%20action" title="facial action">facial action</a>, <a href="https://publications.waset.org/abstracts/search?q=action%20units" title=" action units"> action units</a>, <a href="https://publications.waset.org/abstracts/search?q=coding" title=" coding"> coding</a>, <a href="https://publications.waset.org/abstracts/search?q=machine%20learning" title=" machine learning"> machine learning</a> </p> <a href="https://publications.waset.org/abstracts/161142/use-of-computer-and-machine-learning-in-facial-recognition" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/161142.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">106</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">341</span> Management of Facial Nerve Palsy Following Physiotherapy </h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Bassam%20Band">Bassam Band</a>, <a href="https://publications.waset.org/abstracts/search?q=Simon%20Freeman"> Simon Freeman</a>, <a href="https://publications.waset.org/abstracts/search?q=Rohan%20Munir"> Rohan Munir</a>, <a href="https://publications.waset.org/abstracts/search?q=Hisham%20Band"> Hisham Band</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Objective: To determine efficacy of facial physiotherapy provided for patients with facial nerve palsy. Design: Retrospective study Subjects: 54 patients diagnosed with Facial nerve palsy were included in the study after they met the selection criteria including unilateral facial paralysis and start of therapy twelve months after the onset of facial nerve palsy. Interventions: Patients received the treatment offered at a facial physiotherapy clinic consisting of: Trophic electrical stimulation, surface electromyography with biofeedback, neuromuscular re-education and myofascial release. Main measures: The Sunnybrook facial grading scale was used to evaluate the severity of facial paralysis. Results: This study demonstrated the positive impact of physiotherapy for patient with facial nerve palsy with improvement of 24.2% on the Sunnybrook facial grading score from a mean baseline of 34.2% to 58.2%. The greatest improvement looking at different causes was seen in patient who had reconstructive surgery post Acoustic Neuroma at 31.3%. Conclusion: The therapy shows significant improvement for patients with facial nerve palsy even when started 12 months post onset of paralysis across different causes. This highlights the benefit of this non-invasive technique in managing facial nerve paralysis and possibly preventing the need for surgery. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=facial%20nerve%20palsy" title="facial nerve palsy">facial nerve palsy</a>, <a href="https://publications.waset.org/abstracts/search?q=treatment" title=" treatment"> treatment</a>, <a href="https://publications.waset.org/abstracts/search?q=physiotherapy" title=" physiotherapy"> physiotherapy</a>, <a href="https://publications.waset.org/abstracts/search?q=bells%20palsy" title=" bells palsy"> bells palsy</a>, <a href="https://publications.waset.org/abstracts/search?q=acoustic%20neuroma" title=" acoustic neuroma"> acoustic neuroma</a>, <a href="https://publications.waset.org/abstracts/search?q=ramsey-hunt%20syndrome" title=" ramsey-hunt syndrome"> ramsey-hunt syndrome</a> </p> <a href="https://publications.waset.org/abstracts/19940/management-of-facial-nerve-palsy-following-physiotherapy" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/19940.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">535</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">340</span> Monocular 3D Person Tracking AIA Demographic Classification and Projective Image Processing </h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=McClain%20Thiel">McClain Thiel</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Object detection and localization has historically required two or more sensors due to the loss of information from 3D to 2D space, however, most surveillance systems currently in use in the real world only have one sensor per location. Generally, this consists of a single low-resolution camera positioned above the area under observation (mall, jewelry store, traffic camera). This is not sufficient for robust 3D tracking for applications such as security or more recent relevance, contract tracing. This paper proposes a lightweight system for 3D person tracking that requires no additional hardware, based on compressed object detection convolutional-nets, facial landmark detection, and projective geometry. This approach involves classifying the target into a demographic category and then making assumptions about the relative locations of facial landmarks from the demographic information, and from there using simple projective geometry and known constants to find the target's location in 3D space. Preliminary testing, although severely lacking, suggests reasonable success in 3D tracking under ideal conditions. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=monocular%20distancing" title="monocular distancing">monocular distancing</a>, <a href="https://publications.waset.org/abstracts/search?q=computer%20vision" title=" computer vision"> computer vision</a>, <a href="https://publications.waset.org/abstracts/search?q=facial%20analysis" title=" facial analysis"> facial analysis</a>, <a href="https://publications.waset.org/abstracts/search?q=3D%20localization" title=" 3D localization "> 3D localization </a> </p> <a href="https://publications.waset.org/abstracts/129037/monocular-3d-person-tracking-aia-demographic-classification-and-projective-image-processing" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/129037.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">139</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">339</span> Detection and Classification Strabismus Using Convolutional Neural Network and Spatial Image Processing</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Anoop%20T.%20R.">Anoop T. R.</a>, <a href="https://publications.waset.org/abstracts/search?q=Otman%20Basir"> Otman Basir</a>, <a href="https://publications.waset.org/abstracts/search?q=Robert%20F.%20Hess"> Robert F. Hess</a>, <a href="https://publications.waset.org/abstracts/search?q=Eileen%20E.%20Birch"> Eileen E. Birch</a>, <a href="https://publications.waset.org/abstracts/search?q=Brooke%20A.%20Koritala"> Brooke A. Koritala</a>, <a href="https://publications.waset.org/abstracts/search?q=Reed%20M.%20Jost"> Reed M. Jost</a>, <a href="https://publications.waset.org/abstracts/search?q=Becky%20Luu"> Becky Luu</a>, <a href="https://publications.waset.org/abstracts/search?q=David%20Stager"> David Stager</a>, <a href="https://publications.waset.org/abstracts/search?q=Ben%20Thompson"> Ben Thompson</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Strabismus refers to a misalignment of the eyes. Early detection and treatment of strabismus in childhood can prevent the development of permanent vision loss due to abnormal development of visual brain areas. We developed a two-stage method for strabismus detection and classification based on photographs of the face. The first stage detects the presence or absence of strabismus, and the second stage classifies the type of strabismus. The first stage comprises face detection using Haar cascade, facial landmark estimation, face alignment, aligned face landmark detection, segmentation of the eye region, and detection of strabismus using VGG 16 convolution neural networks. Face alignment transforms the face to a canonical pose to ensure consistency in subsequent analysis. Using facial landmarks, the eye region is segmented from the aligned face and fed into a VGG 16 CNN model, which has been trained to classify strabismus. The CNN determines whether strabismus is present and classifies the type of strabismus (exotropia, esotropia, and vertical deviation). If stage 1 detects strabismus, the eye region image is fed into stage 2, which starts with the estimation of pupil center coordinates using mask R-CNN deep neural networks. 
Then, the distance between the pupil coordinates and the eye landmarks is calculated, along with the angle that the pupil coordinates make with the horizontal and vertical axes. The distance and angle information is used to characterize the degree and direction of the strabismic eye misalignment. This model was tested on 100 clinically labeled images of children with (n = 50) and without (n = 50) strabismus. The True Positive Rate (TPR) and False Positive Rate (FPR) of the first stage were 94% and 6%, respectively. The classification stage produced a TPR of 94.73%, 94.44%, and 100% for esotropia, exotropia, and vertical deviation, respectively. The method also had an FPR of 5.26%, 5.55%, and 0% for esotropia, exotropia, and vertical deviation, respectively. The addition of one more feature related to the location of corneal light reflections may reduce the FPR, which was primarily due to children with pseudo-strabismus (the appearance of strabismus due to a wide nasal bridge or skin folds on the nasal side of the eyes).
Keywords: strabismus, deep neural networks, face detection, facial landmarks, face alignment, segmentation, VGG 16, mask R-CNN, pupil coordinates, angle deviation, horizontal and vertical deviation
Procedia: https://publications.waset.org/abstracts/170835/detection-and-classification-strabismus-using-convolutional-neural-network-and-spatial-image-processing | PDF: https://publications.waset.org/abstracts/170835.pdf | Downloads: 93

338. Automatic Facial Skin Segmentation Using Possibilistic C-Means Algorithm for Evaluation of Facial Surgeries
Authors: Elham Alaee, Mousa Shamsi, Hossein Ahmadi,
href="https://publications.waset.org/abstracts/search?q=Soroosh%20Nazem"> Soroosh Nazem</a>, <a href="https://publications.waset.org/abstracts/search?q=Mohammad%20Hossein%20Sedaaghi"> Mohammad Hossein Sedaaghi </a> </p> <p class="card-text"><strong>Abstract:</strong></p> Human face has a fundamental role in the appearance of individuals. So the importance of facial surgeries is undeniable. Thus, there is a need for the appropriate and accurate facial skin segmentation in order to extract different features. Since Fuzzy C-Means (FCM) clustering algorithm doesn’t work appropriately for noisy images and outliers, in this paper we exploit Possibilistic C-Means (PCM) algorithm in order to segment the facial skin. For this purpose, first, we convert facial images from RGB to YCbCr color space. To evaluate performance of the proposed algorithm, the database of Sahand University of Technology, Tabriz, Iran was used. In order to have a better understanding from the proposed algorithm; FCM and Expectation-Maximization (EM) algorithms are also used for facial skin segmentation. The proposed method shows better results than the other segmentation methods. Results include misclassification error (0.032) and the region’s area error (0.045) for the proposed algorithm. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=facial%20image" title="facial image">facial image</a>, <a href="https://publications.waset.org/abstracts/search?q=segmentation" title=" segmentation"> segmentation</a>, <a href="https://publications.waset.org/abstracts/search?q=PCM" title=" PCM"> PCM</a>, <a href="https://publications.waset.org/abstracts/search?q=FCM" title=" FCM"> FCM</a>, <a href="https://publications.waset.org/abstracts/search?q=skin%20error" title=" skin error"> skin error</a>, <a href="https://publications.waset.org/abstracts/search?q=facial%20surgery" title=" facial surgery"> facial surgery</a> </p> <a href="https://publications.waset.org/abstracts/10297/automatic-facial-skin-segmentation-using-possibilistic-c-means-algorithm-for-evaluation-of-facial-surgeries" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/10297.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">586</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">337</span> Quantification and Preference of Facial Asymmetry of the Sub-Saharan Africans&#039; 3D Facial Models</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Anas%20Ibrahim%20Yahaya">Anas Ibrahim Yahaya</a>, <a href="https://publications.waset.org/abstracts/search?q=Christophe%20Soligo"> Christophe Soligo</a> </p> <p class="card-text"><strong>Abstract:</strong></p> A substantial body of literature has reported on facial symmetry and asymmetry and their role in human mate choice. However, major gaps persist, with nearly all data originating from the WEIRD (Western, Educated, Industrialised, Rich and Developed) populations, and results remaining largely equivocal when compared across studies. 
This study aimed to quantify facial asymmetry from the 3D faces of the Hausa of northern Nigeria, and to determine their (Hausa) perceptions and judgements of standardised facial images with different levels of asymmetry using questionnaires. Data were analysed using RStudio software, and results indicated that individuals with lower levels of facial asymmetry (near facial symmetry) were perceived as more attractive, more suitable as marriage partners and more caring, whereas individuals with higher levels of facial asymmetry were perceived as more aggressive. The study concludes that all faces are asymmetric, including the most beautiful ones, and that the preference for less asymmetric faces did not depend on a single facial trait but rather on multiple facial traits; thus the study supports the view that physical attractiveness is not just an arbitrary social construct, but at least in part a cue to general health and possibly related to environmental context.
Keywords: face, asymmetry, symmetry, Hausa, preference
Procedia: https://publications.waset.org/abstracts/82975/quantification-and-preference-of-facial-asymmetry-of-the-sub-saharan-africans-3d-facial-models | PDF: https://publications.waset.org/abstracts/82975.pdf | Downloads: 193

336. Facial Expression Phoenix (FePh): An Annotated Sequenced Dataset for Facial and Emotion-Specified Expressions in Sign Language
Authors: Marie Alaghband, Niloofar Yousefi, Ivan Garibay
Abstract: Facial expressions are important parts of both gesture and sign language recognition systems. Despite the recent advances in both fields, annotated facial expression datasets in the context of sign language are still scarce resources. In this manuscript, we introduce an annotated sequenced facial expression dataset in the context of sign language, comprising over 3000 facial images extracted from the daily news and weather forecast of the public TV station PHOENIX. Unlike the majority of currently existing facial expression datasets, FePh provides sequenced semi-blurry facial images with different head poses, orientations, and movements. In addition, in the majority of images, identities are mouthing the words, which makes the data more challenging.
To annotate this dataset, we consider primary, secondary, and tertiary dyads of the seven basic emotions "sad", "surprise", "fear", "angry", "neutral", "disgust", and "happy". We also considered a "None" class if the image's facial expression could not be described by any of the aforementioned emotions. Although we provide FePh as a facial expression dataset of signers in sign language, it has wider application in gesture recognition and Human-Computer Interaction (HCI) systems.
Keywords: annotated facial expression dataset, gesture recognition, sequenced facial expression dataset, sign language recognition
Procedia: https://publications.waset.org/abstracts/129717/facial-expression-phoenix-feph-an-annotated-sequenced-dataset-for-facial-and-emotion-specified-expressions-in-sign-language | PDF: https://publications.waset.org/abstracts/129717.pdf | Downloads: 159

335. Automatic Landmark Selection Based on Feature Clustering for Visual Autonomous Unmanned Aerial Vehicle Navigation
Authors: Paulo Fernando Silva Filho, Elcio Hideiti Shiguemori
Abstract: The selection of specific landmarks for an Unmanned Aerial Vehicle's visual navigation system based on automatic landmark recognition has a significant influence on the precision of the system's estimated position. At the same time, manual selection of the landmarks does not guarantee a high recognition rate, which would also result in poor precision. This work aims to develop an automatic landmark selection method that takes the image of the flight area and identifies the best landmarks to be recognized by the visual navigation landmark recognition system. The criterion for selecting a landmark is based on features detected by ORB or AKAZE and on edge information for each possible landmark. Results have shown that the disposition of possible landmarks is quite different from human perception.
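A rough sketch of this kind of feature-based candidate selection, detecting ORB keypoints over an aerial image and clustering them into candidate landmark regions; the parameters, the fixed k-means (instead of X-means) and the omission of the edge-information term are assumptions for illustration, not the authors' implementation:

```python
# Illustrative sketch: cluster ORB keypoints to propose candidate landmark regions.
import cv2
import numpy as np
from sklearn.cluster import KMeans

def propose_landmark_regions(image_bgr, n_regions=5, n_features=2000):
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    orb = cv2.ORB_create(nfeatures=n_features)       # AKAZE alternative: cv2.AKAZE_create()
    keypoints = orb.detect(gray, None)
    pts = np.float32([kp.pt for kp in keypoints])
    labels = KMeans(n_clusters=n_regions, n_init=10).fit_predict(pts)
    # Rank clusters by keypoint count: denser clusters are stronger candidates.
    order = np.argsort(-np.bincount(labels))
    return [pts[labels == c] for c in order]

# Hypothetical usage on an aerial image of the flight area:
# regions = propose_landmark_regions(cv2.imread("flight_area.png"))
```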
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=clustering" title="clustering">clustering</a>, <a href="https://publications.waset.org/abstracts/search?q=edges" title=" edges"> edges</a>, <a href="https://publications.waset.org/abstracts/search?q=feature%20points" title=" feature points"> feature points</a>, <a href="https://publications.waset.org/abstracts/search?q=landmark%20selection" title=" landmark selection"> landmark selection</a>, <a href="https://publications.waset.org/abstracts/search?q=X-means" title=" X-means"> X-means</a> </p> <a href="https://publications.waset.org/abstracts/91173/automatic-landmark-selection-based-on-feature-clustering-for-visual-autonomous-unmanned-aerial-vehicle-navigation" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/91173.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">279</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">334</span> Comparing Emotion Recognition from Voice and Facial Data Using Time Invariant Features</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Vesna%20Kirandziska">Vesna Kirandziska</a>, <a href="https://publications.waset.org/abstracts/search?q=Nevena%20Ackovska"> Nevena Ackovska</a>, <a href="https://publications.waset.org/abstracts/search?q=Ana%20Madevska%20Bogdanova"> Ana Madevska Bogdanova</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The problem of emotion recognition is a challenging problem. It is still an open problem from the aspect of both intelligent systems and psychology. In this paper, both voice features and facial features are used for building an emotion recognition system. A Support Vector Machine classifiers are built by using raw data from video recordings. In this paper, the results obtained for the emotion recognition are given, and a discussion about the validity and the expressiveness of different emotions is presented. A comparison between the classifiers build from facial data only, voice data only and from the combination of both data is made here. The need for a better combination of the information from facial expression and voice data is argued. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=emotion%20recognition" title="emotion recognition">emotion recognition</a>, <a href="https://publications.waset.org/abstracts/search?q=facial%20recognition" title=" facial recognition"> facial recognition</a>, <a href="https://publications.waset.org/abstracts/search?q=signal%20processing" title=" signal processing"> signal processing</a>, <a href="https://publications.waset.org/abstracts/search?q=machine%20learning" title=" machine learning"> machine learning</a> </p> <a href="https://publications.waset.org/abstracts/42384/comparing-emotion-recognition-from-voice-and-facial-data-using-time-invariant-features" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/42384.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">315</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">333</span> Classifying Facial Expressions Based on a Motion Local Appearance Approach</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Fabiola%20M.%20Villalobos-Castaldi">Fabiola M. Villalobos-Castaldi</a>, <a href="https://publications.waset.org/abstracts/search?q=Nicol%C3%A1s%20C.%20Kemper"> Nicolás C. Kemper</a>, <a href="https://publications.waset.org/abstracts/search?q=Esther%20Rojas-Krugger"> Esther Rojas-Krugger</a>, <a href="https://publications.waset.org/abstracts/search?q=Laura%20G.%20Ram%C3%ADrez-S%C3%A1nchez"> Laura G. Ramírez-Sánchez</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This paper presents the classification results about exploring the combination of a motion based approach with a local appearance method to describe the facial motion caused by the muscle contractions and expansions that are presented in facial expressions. The proposed feature extraction method take advantage of the knowledge related to which parts of the face reflects the highest deformations, so we selected 4 specific facial regions at which the appearance descriptor were applied. The most common used approaches for feature extraction are the holistic and the local strategies. In this work we present the results of using a local appearance approach estimating the correlation coefficient to the 4 corresponding landmark-localized facial templates of the expression face related to the neutral face. The results let us to probe how the proposed motion estimation scheme based on the local appearance correlation computation can simply and intuitively measure the motion parameters for some of the most relevant facial regions and how these parameters can be used to recognize facial expressions automatically. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=facial%20expression%20recognition%20system" title="facial expression recognition system">facial expression recognition system</a>, <a href="https://publications.waset.org/abstracts/search?q=feature%20extraction" title=" feature extraction"> feature extraction</a>, <a href="https://publications.waset.org/abstracts/search?q=local-appearance%20method" title=" local-appearance method"> local-appearance method</a>, <a href="https://publications.waset.org/abstracts/search?q=motion-based%20approach" title=" motion-based approach"> motion-based approach</a> </p> <a href="https://publications.waset.org/abstracts/27632/classifying-facial-expressions-based-on-a-motion-local-appearance-approach" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/27632.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">413</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">332</span> Emotion Recognition Using Artificial Intelligence</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Rahul%20Mohite">Rahul Mohite</a>, <a href="https://publications.waset.org/abstracts/search?q=Lahcen%20Ouarbya"> Lahcen Ouarbya</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This paper focuses on the interplay between humans and computer systems and the ability of these systems to understand and respond to human emotions, including non-verbal communication. Current emotion recognition systems are based solely on either facial or verbal expressions. The limitation of these systems is that it requires large training data sets. The paper proposes a system for recognizing human emotions that combines both speech and emotion recognition. The system utilizes advanced techniques such as deep learning and image recognition to identify facial expressions and comprehend emotions. The results show that the proposed system, based on the combination of facial expression and speech, outperforms existing ones, which are based solely either on facial or verbal expressions. The proposed system detects human emotion with an accuracy of 86%, whereas the existing systems have an accuracy of 70% using verbal expression only and 76% using facial expression only. In this paper, the increasing significance and demand for facial recognition technology in emotion recognition are also discussed. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=facial%20reputation" title="facial reputation">facial reputation</a>, <a href="https://publications.waset.org/abstracts/search?q=expression%20reputation" title=" expression reputation"> expression reputation</a>, <a href="https://publications.waset.org/abstracts/search?q=deep%20gaining%20knowledge%20of" title=" deep gaining knowledge of"> deep gaining knowledge of</a>, <a href="https://publications.waset.org/abstracts/search?q=photo%20reputation" title=" photo reputation"> photo reputation</a>, <a href="https://publications.waset.org/abstracts/search?q=facial%20technology" title=" facial technology"> facial technology</a>, <a href="https://publications.waset.org/abstracts/search?q=sign%20processing" title=" sign processing"> sign processing</a>, <a href="https://publications.waset.org/abstracts/search?q=photo%20type" title=" photo type"> photo type</a> </p> <a href="https://publications.waset.org/abstracts/162386/emotion-recognition-using-artificial-intelligence" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/162386.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">121</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">331</span> Improving the Performance of Deep Learning in Facial Emotion Recognition with Image Sharpening</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Ksheeraj%20Sai%20Vepuri">Ksheeraj Sai Vepuri</a>, <a href="https://publications.waset.org/abstracts/search?q=Nada%20Attar"> Nada Attar</a> </p> <p class="card-text"><strong>Abstract:</strong></p> We as humans use words with accompanying visual and facial cues to communicate effectively. Classifying facial emotion using computer vision methodologies has been an active research area in the computer vision field. In this paper, we propose a simple method for facial expression recognition that enhances accuracy. We tested our method on the FER-2013 dataset that contains static images. Instead of using Histogram equalization to preprocess the dataset, we used Unsharp Mask to emphasize texture and details and sharpened the edges. We also used ImageDataGenerator from Keras library for data augmentation. Then we used Convolutional Neural Networks (CNN) model to classify the images into 7 different facial expressions, yielding an accuracy of 69.46% on the test set. Our results show that using image preprocessing such as the sharpening technique for a CNN model can improve the performance, even when the CNN model is relatively simple. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=facial%20expression%20recognittion" title="facial expression recognittion">facial expression recognittion</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20preprocessing" title=" image preprocessing"> image preprocessing</a>, <a href="https://publications.waset.org/abstracts/search?q=deep%20learning" title=" deep learning"> deep learning</a>, <a href="https://publications.waset.org/abstracts/search?q=CNN" title=" CNN"> CNN</a> </p> <a href="https://publications.waset.org/abstracts/130679/improving-the-performance-of-deep-learning-in-facial-emotion-recognition-with-image-sharpening" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/130679.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">143</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">330</span> Somatosensory-Evoked Blink Reflex in Peripheral Facial Palsy</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Sarah%20Sayed%20El-%20Tawab">Sarah Sayed El- Tawab</a>, <a href="https://publications.waset.org/abstracts/search?q=Emmanuel%20Kamal%20Azix%20Saba"> Emmanuel Kamal Azix Saba</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Objectives: Somatosensory blink reflex (SBR) is an eye blink response obtained from electrical stimulation of peripheral nerves or skin area of the body. It has been studied in various neurological diseases as well as among healthy subjects in different population. We designed this study to detect SBR positivity in patients with facial palsy and patients with post facial syndrome, to relate the facial palsy severity and the presence of SBR, and to associate between trigeminal BR changes and SBR positivity in peripheral facial palsy patients. Methods: 50 patients with peripheral facial palsy and post-facial syndrome 31 age and gender matched healthy volunteers were enrolled to this study. Facial motor conduction studies, trigeminal BR, and SBR were studied in all. Results: SBR was elicited in 67.7% of normal subjects, in 68% of PFS group, and in 32% of PFP group. On the non-paralytic side SBR was found in 28% by paralyzed side stimulation and in 24% by healthy side stimulation among PFP patients. For PFS group SBR was found on the non- paralytic side in 48%. Bilateral SBR elicitability was higher than its unilateral elicitability. Conclusion: Increased brainstem interneurons excitability is not essential to generate SBR. The hypothetical sensory-motor gating mechanism is responsible for SBR generation. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=somatosensory%20evoked%20blink%20reflex" title="somatosensory evoked blink reflex">somatosensory evoked blink reflex</a>, <a href="https://publications.waset.org/abstracts/search?q=post%20facial%20syndrome" title=" post facial syndrome"> post facial syndrome</a>, <a href="https://publications.waset.org/abstracts/search?q=blink%20reflex" title=" blink reflex"> blink reflex</a>, <a href="https://publications.waset.org/abstracts/search?q=enchanced%20gain" title=" enchanced gain"> enchanced gain</a> </p> <a href="https://publications.waset.org/abstracts/18913/somatosensory-evoked-blink-reflex-in-peripheral-facial-palsy" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/18913.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">619</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">329</span> Timescape-Based Panoramic View for Historic Landmarks</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=H.%20Ali">H. Ali</a>, <a href="https://publications.waset.org/abstracts/search?q=A.%20Whitehead"> A. Whitehead</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Providing a panoramic view of famous landmarks around the world offers artistic and historic value for historians, tourists, and researchers. Exploring the history of famous landmarks by presenting a comprehensive view of a temporal panorama merged with geographical and historical information presents a unique challenge of dealing with images that span a long period, from the 1800&rsquo;s up to the present. This work presents the concept of temporal panorama through a timeline display of aligned historic and modern images for many famous landmarks. Utilization of this panorama requires a collection of hundreds of thousands of landmark images from the Internet comprised of historic images and modern images of the digital age. These images have to be classified for subset selection to keep the more suitable images that chronologically document a landmark&rsquo;s history. Processing of historic images captured using older analog technology under various different capturing conditions represents a big challenge when they have to be used with modern digital images. Successful processing of historic images to prepare them for next steps of temporal panorama creation represents an active contribution in cultural heritage preservation through the fulfillment of one of UNESCO goals in preservation and displaying famous worldwide landmarks. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=cultural%20heritage" title="cultural heritage">cultural heritage</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20registration" title=" image registration"> image registration</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20subset%20selection" title=" image subset selection"> image subset selection</a>, <a href="https://publications.waset.org/abstracts/search?q=registered%20image%20similarity" title=" registered image similarity"> registered image similarity</a>, <a href="https://publications.waset.org/abstracts/search?q=temporal%20panorama" title=" temporal panorama"> temporal panorama</a>, <a href="https://publications.waset.org/abstracts/search?q=timescapes" title=" timescapes"> timescapes</a> </p> <a href="https://publications.waset.org/abstracts/101930/timescape-based-panoramic-view-for-historic-landmarks" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/101930.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">165</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">328</span> KSVD-SVM Approach for Spontaneous Facial Expression Recognition</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Dawood%20Al%20Chanti">Dawood Al Chanti</a>, <a href="https://publications.waset.org/abstracts/search?q=Alice%20Caplier"> Alice Caplier</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Sparse representations of signals have received a great deal of attention in recent years. In this paper, the interest of using sparse representation as a mean for performing sparse discriminative analysis between spontaneous facial expressions is demonstrated. An automatic facial expressions recognition system is presented. It uses a KSVD-SVM approach which is made of three main stages: A pre-processing and feature extraction stage, which solves the problem of shared subspace distribution based on the random projection theory, to obtain low dimensional discriminative and reconstructive features; A dictionary learning and sparse coding stage, which uses the KSVD model to learn discriminative under or over dictionaries for sparse coding; Finally a classification stage, which uses a SVM classifier for facial expressions recognition. Our main concern is to be able to recognize non-basic affective states and non-acted expressions. Extensive experiments on the JAFFE static acted facial expressions database but also on the DynEmo dynamic spontaneous facial expressions database exhibit very good recognition rates. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=dictionary%20learning" title="dictionary learning">dictionary learning</a>, <a href="https://publications.waset.org/abstracts/search?q=random%20projection" title=" random projection"> random projection</a>, <a href="https://publications.waset.org/abstracts/search?q=pose%20and%20spontaneous%20facial%20expression" title=" pose and spontaneous facial expression"> pose and spontaneous facial expression</a>, <a href="https://publications.waset.org/abstracts/search?q=sparse%20representation" title=" sparse representation"> sparse representation</a> </p> <a href="https://publications.waset.org/abstracts/51683/ksvd-svm-approach-for-spontaneous-facial-expression-recognition" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/51683.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">305</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">327</span> Strabismus Detection Using Eye Alignment Stability</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Anoop%20T.%20R.">Anoop T. R.</a>, <a href="https://publications.waset.org/abstracts/search?q=Otman%20Basir"> Otman Basir</a>, <a href="https://publications.waset.org/abstracts/search?q=Robert%20F.%20Hess"> Robert F. Hess</a>, <a href="https://publications.waset.org/abstracts/search?q=Ben%20Thompson"> Ben Thompson</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Strabismus refers to a misalignment of the eyes. Early detection and treatment of strabismus in childhood can prevent the development of permanent vision loss due to abnormal development of visual brain areas. Currently, many children with strabismus remain undiagnosed until school entry because current automated screening methods have limited success in the preschool age range. A method for strabismus detection using eye alignment stability (EAS) is proposed. This method starts with face detection, followed by facial landmark detection, eye region segmentation, eye gaze extraction, and eye alignment stability estimation. Binarization and morphological operations are performed for segmenting the pupil region from the eye. After finding the EAS, its absolute value is used to differentiate the strabismic eye from the non-strabismic eye. If the value of the eye alignment stability is greater than a particular threshold, then the eyes are misaligned, and if its value is less than the threshold, the eyes are aligned. The method was tested on 175 strabismic and non-strabismic images obtained from Kaggle and Google Photos. The strabismic eye is taken as a positive class, and the non-strabismic eye is taken as a negative class. The test produced a true positive rate of 100% and a false positive rate of 7.69%. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=strabismus" title="strabismus">strabismus</a>, <a href="https://publications.waset.org/abstracts/search?q=face%20detection" title=" face detection"> face detection</a>, <a href="https://publications.waset.org/abstracts/search?q=facial%20landmarks" title=" facial landmarks"> facial landmarks</a>, <a href="https://publications.waset.org/abstracts/search?q=eye%20segmentation" title=" eye segmentation"> eye segmentation</a>, <a href="https://publications.waset.org/abstracts/search?q=eye%20gaze" title=" eye gaze"> eye gaze</a>, <a href="https://publications.waset.org/abstracts/search?q=binarization" title=" binarization"> binarization</a> </p> <a href="https://publications.waset.org/abstracts/177646/strabismus-detection-using-eye-alignment-stability" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/177646.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">76</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">326</span> Individualized Emotion Recognition Through Dual-Representations and Ground-Established Ground Truth</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Valentina%20Zhang">Valentina Zhang</a> </p> <p class="card-text"><strong>Abstract:</strong></p> While facial expression is a complex and individualized behavior, all facial emotion recognition (FER) systems known to us rely on a single facial representation and are trained on universal data. We conjecture that: (i) different facial representations can provide different, sometimes complementing views of emotions; (ii) when employed collectively in a discussion group setting, they enable more accurate emotion reading which is highly desirable in autism care and other applications context sensitive to errors. In this paper, we first study FER using pixel-based DL vs semantics-based DL in the context of deepfake videos. Our experiment indicates that while the semantics-trained model performs better with articulated facial feature changes, the pixel-trained model outperforms on subtle or rare facial expressions. Armed with these findings, we have constructed an adaptive FER system learning from both types of models for dyadic or small interacting groups and further leveraging the synthesized group emotions as the ground truth for individualized FER training. Using a collection of group conversation videos, we demonstrate that FER accuracy and personalization can benefit from such an approach. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=neurodivergence%20care" title="neurodivergence care">neurodivergence care</a>, <a href="https://publications.waset.org/abstracts/search?q=facial%20emotion%20recognition" title=" facial emotion recognition"> facial emotion recognition</a>, <a href="https://publications.waset.org/abstracts/search?q=deep%20learning" title=" deep learning"> deep learning</a>, <a href="https://publications.waset.org/abstracts/search?q=ground%20truth%20for%20supervised%20learning" title=" ground truth for supervised learning"> ground truth for supervised learning</a> </p> <a href="https://publications.waset.org/abstracts/144009/individualized-emotion-recognition-through-dual-representations-and-ground-established-ground-truth" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/144009.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">147</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">325</span> Deep-Learning Based Approach to Facial Emotion Recognition through Convolutional Neural Network</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Nouha%20Khediri">Nouha Khediri</a>, <a href="https://publications.waset.org/abstracts/search?q=Mohammed%20Ben%20Ammar"> Mohammed Ben Ammar</a>, <a href="https://publications.waset.org/abstracts/search?q=Monji%20Kherallah"> Monji Kherallah</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Recently, facial emotion recognition (FER) has become increasingly essential to understand the state of the human mind. Accurately classifying emotion from the face is a challenging task. In this paper, we present a facial emotion recognition approach named CV-FER, benefiting from deep learning, especially CNN and VGG16. First, the data is pre-processed with data cleaning and data rotation. Then, we augment the data and proceed to our FER model, which contains five convolutions layers and five pooling layers. Finally, a softmax classifier is used in the output layer to recognize emotions. Based on the above contents, this paper reviews the works of facial emotion recognition based on deep learning. Experiments show that our model outperforms the other methods using the same FER2013 database and yields a recognition rate of 92%. We also put forward some suggestions for future work. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=CNN" title="CNN">CNN</a>, <a href="https://publications.waset.org/abstracts/search?q=deep-learning" title=" deep-learning"> deep-learning</a>, <a href="https://publications.waset.org/abstracts/search?q=facial%20emotion%20recognition" title=" facial emotion recognition"> facial emotion recognition</a>, <a href="https://publications.waset.org/abstracts/search?q=machine%20learning" title=" machine learning"> machine learning</a> </p> <a href="https://publications.waset.org/abstracts/150291/deep-learning-based-approach-to-facial-emotion-recognition-through-convolutional-neural-network" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/150291.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">95</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">324</span> Remembering Route in an Unfamiliar Homogenous Environment</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Ahmed%20Sameer">Ahmed Sameer</a>, <a href="https://publications.waset.org/abstracts/search?q=Braj%20Bhushan"> Braj Bhushan</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The objective of our study was to compare two techniques (no landmark vs imaginary landmark) of remembering route while traversing in an unfamiliar homogenous environment. We used two videos each having nine identical turns with no landmarks. In the first video participant was required to remember the sequence of turns. In the second video participant was required to imagine a landmark at each turn and associate the turn with it. In both the task the participant was asked to recall the sequence of turns as it appeared in the video. Results showed that performance in the first condition i.e. without use of landmarks was better than imaginary landmark condition. The difference, however, became significant when the participant were tested again about 30 minutes later though performance was still better in no-landmark condition. The finding is surprising given the past research in memory and is explained in terms of cognitive factors such as mental workload. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=wayfinding" title="wayfinding">wayfinding</a>, <a href="https://publications.waset.org/abstracts/search?q=landmarks" title=" landmarks"> landmarks</a>, <a href="https://publications.waset.org/abstracts/search?q=unfamiliar%20environment" title=" unfamiliar environment"> unfamiliar environment</a>, <a href="https://publications.waset.org/abstracts/search?q=cognitive%20psychology" title=" cognitive psychology"> cognitive psychology</a> </p> <a href="https://publications.waset.org/abstracts/25660/remembering-route-in-an-unfamiliar-homogenous-environment" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/25660.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">476</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">323</span> Localization of Mobile Robots with Omnidirectional Cameras</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Tatsuya%20Kato">Tatsuya Kato</a>, <a href="https://publications.waset.org/abstracts/search?q=Masanobu%20Nagata"> Masanobu Nagata</a>, <a href="https://publications.waset.org/abstracts/search?q=Hidetoshi%20Nakashima"> Hidetoshi Nakashima</a>, <a href="https://publications.waset.org/abstracts/search?q=Kazunori%20Matsuo"> Kazunori Matsuo</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Localization of mobile robots are important tasks for developing autonomous mobile robots. This paper proposes a method to estimate positions of a mobile robot using an omnidirectional camera on the robot. Landmarks for points of references are set up on a field where the robot works. The omnidirectional camera which can obtain 360 [deg] around images takes photographs of these landmarks. The positions of the robots are estimated from directions of these landmarks that are extracted from the images by image processing. This method can obtain the robot positions without accumulative position errors. Accuracy of the estimated robot positions by the proposed method are evaluated through some experiments. The results show that it can obtain the positions with small standard deviations. Therefore the method has possibilities of more accurate localization by tuning of appropriate offset parameters. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=mobile%20robots" title="mobile robots">mobile robots</a>, <a href="https://publications.waset.org/abstracts/search?q=localization" title=" localization"> localization</a>, <a href="https://publications.waset.org/abstracts/search?q=omnidirectional%20camera" title=" omnidirectional camera"> omnidirectional camera</a>, <a href="https://publications.waset.org/abstracts/search?q=estimating%20positions" title=" estimating positions"> estimating positions</a> </p> <a href="https://publications.waset.org/abstracts/11803/localization-of-mobile-robots-with-omnidirectional-cameras" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/11803.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">442</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">322</span> Noninvasive Evaluation of Acupuncture by Measuring Facial Temperature through Thermal Image</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=An%20Guo">An Guo</a>, <a href="https://publications.waset.org/abstracts/search?q=Hieyong%20Jeong"> Hieyong Jeong</a>, <a href="https://publications.waset.org/abstracts/search?q=Tianyi%20Wang"> Tianyi Wang</a>, <a href="https://publications.waset.org/abstracts/search?q=Na%20Li"> Na Li</a>, <a href="https://publications.waset.org/abstracts/search?q=Yuko%20Ohno"> Yuko Ohno</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Acupuncture, known as sensory simulation, has been used to treat various disorders for thousands of years. However, present studies had not addressed approaches for noninvasive measurement in order to evaluate therapeutic effect of acupuncture. The purpose of this study is to propose a noninvasive method to evaluate acupuncture by measuring facial temperature through thermal image. Three human subjects were recruited in this study. Each subject received acupuncture therapy for 30 mins. Acupuncture needles (Ø0.16 x 30 mm) were inserted into Baihui point (DU20), Neiguan points (PC6) and Taichong points (LR3), acupuncture needles (Ø0.18 x 39 mm) were inserted into Tanzhong point (RN17), Zusanli points (ST36) and Yinlingquan points (SP9). Facial temperature was recorded by an infrared thermometer. Acupuncture therapeutic effect was compared pre- and post-acupuncture. Experiment results demonstrated that facial temperature changed according to acupuncture therapeutic effect. It was concluded that proposed method showed high potential to evaluate acupuncture by noninvasive measurement of facial temperature. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=acupuncture" title="acupuncture">acupuncture</a>, <a href="https://publications.waset.org/abstracts/search?q=facial%20temperature" title=" facial temperature"> facial temperature</a>, <a href="https://publications.waset.org/abstracts/search?q=noninvasive%20evaluation" title=" noninvasive evaluation"> noninvasive evaluation</a>, <a href="https://publications.waset.org/abstracts/search?q=thermal%20image" title=" thermal image"> thermal image</a> </p> <a href="https://publications.waset.org/abstracts/95222/noninvasive-evaluation-of-acupuncture-by-measuring-facial-temperature-through-thermal-image" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/95222.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">187</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">321</span> Facial Emotion Recognition Using Deep Learning</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Ashutosh%20Mishra">Ashutosh Mishra</a>, <a href="https://publications.waset.org/abstracts/search?q=Nikhil%20Goyal"> Nikhil Goyal</a> </p> <p class="card-text"><strong>Abstract:</strong></p> A 3D facial emotion recognition model based on deep learning is proposed in this paper. Two convolution layers and a pooling layer are employed in the deep learning architecture. After the convolution process, the pooling is finished. The probabilities for various classes of human faces are calculated using the sigmoid activation function. To verify the efficiency of deep learning-based systems, a set of faces. The Kaggle dataset is used to verify the accuracy of a deep learning-based face recognition model. The model's accuracy is about 65 percent, which is lower than that of other facial expression recognition techniques. Despite significant gains in representation precision due to the nonlinearity of profound image representations. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=facial%20recognition" title="facial recognition">facial recognition</a>, <a href="https://publications.waset.org/abstracts/search?q=computational%20intelligence" title=" computational intelligence"> computational intelligence</a>, <a href="https://publications.waset.org/abstracts/search?q=convolutional%20neural%20network" title=" convolutional neural network"> convolutional neural network</a>, <a href="https://publications.waset.org/abstracts/search?q=depth%20map" title=" depth map"> depth map</a> </p> <a href="https://publications.waset.org/abstracts/139253/facial-emotion-recognition-using-deep-learning" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/139253.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">231</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">320</span> Exploring the Efficacy of Nitroglycerin in Filler-Induced Facial Skin Ischemia: A Narrative ‎Review</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Amir%20Feily">Amir Feily</a>, <a href="https://publications.waset.org/abstracts/search?q=Hazhir%20Shahmoradi%20Akram"> Hazhir Shahmoradi Akram</a>, <a href="https://publications.waset.org/abstracts/search?q=Mojtaba%20Ghaedi"> Mojtaba Ghaedi</a>, <a href="https://publications.waset.org/abstracts/search?q=Farshid%20Javdani"> Farshid Javdani</a>, <a href="https://publications.waset.org/abstracts/search?q=Naser%20Hatami"> Naser Hatami</a>, <a href="https://publications.waset.org/abstracts/search?q=Navid%20Kalani"> Navid Kalani</a>, <a href="https://publications.waset.org/abstracts/search?q=Mohammad%20Zarenezhad"> Mohammad Zarenezhad</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Background: Filler-induced facial skin ischemia is a potential complication of dermal filler injections that can result in tissue damage and necrosis. Nitroglycerin has been suggested as a treatment option due to its vasodilatory effects, but its efficacy in this context is unclear. Methods: A narrative review was conducted to examine the available evidence on the efficacy of nitroglycerin in filler-induced facial skin ischemia. Relevant studies were identified through a search of electronic databases and manual searching of reference lists. Results: The review found limited evidence supporting the efficacy of nitroglycerin in this context. While there were case reports where the combination of nitroglycerin and hyaluronidase was successful in treating filler-induced facial skin ischemia, there was only one case report where nitroglycerin alone was successful. Furthermore, a rat model did not demonstrate any benefits of nitroglycerin and showed harmful results. Conclusion: The evidence regarding the efficacy of nitroglycerin in filler-induced facial skin ischemia is inconclusive and seems to be against its application. Further research is needed to determine the effectiveness of nitroglycerin alone and in combination with other treatments for this condition. Clinicians should consider limited evidence bases when deciding on treatment options for patients with filler-induced facial skin ischemia. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=nitroglycerin" title="nitroglycerin">nitroglycerin</a>, <a href="https://publications.waset.org/abstracts/search?q=facial" title=" facial"> facial</a>, <a href="https://publications.waset.org/abstracts/search?q=skin%20ischemia" title=" skin ischemia"> skin ischemia</a>, <a href="https://publications.waset.org/abstracts/search?q=fillers" title=" fillers"> fillers</a>, <a href="https://publications.waset.org/abstracts/search?q=efficacy" title=" efficacy"> efficacy</a>, <a href="https://publications.waset.org/abstracts/search?q=narrative%20review" title=" narrative review"> narrative review</a> </p> <a href="https://publications.waset.org/abstracts/171621/exploring-the-efficacy-of-nitroglycerin-in-filler-induced-facial-skin-ischemia-a-narrative-review" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/171621.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">92</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">319</span> Highly Realistic Facial Expressions of Anthropomorphic Social Agent as a Factor in Solving the &#039;Uncanny Valley&#039; Problem</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Daniia%20Nigmatullina">Daniia Nigmatullina</a>, <a href="https://publications.waset.org/abstracts/search?q=Vlada%20Kugurakova"> Vlada Kugurakova</a>, <a href="https://publications.waset.org/abstracts/search?q=Maxim%20Talanov"> Maxim Talanov</a> </p> <p class="card-text"><strong>Abstract:</strong></p> We present a methodology and our plans of anthropomorphic social agent visualization. That includes creation of three-dimensional model of the virtual companion's head and its facial expressions. Talking Head is a cross-disciplinary project of developing of the human-machine interface with cognitive functions. During the creation of a realistic humanoid robot or a character, there might be the ‘uncanny valley’ problem. We think about this phenomenon and its possible causes. We are going to overcome the ‘uncanny valley’ by increasing of realism. This article discusses issues that should be considered when creating highly realistic characters (particularly the head), their facial expressions and speech visualization. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=anthropomorphic%20social%20agent" title="anthropomorphic social agent">anthropomorphic social agent</a>, <a href="https://publications.waset.org/abstracts/search?q=facial%20animation" title=" facial animation"> facial animation</a>, <a href="https://publications.waset.org/abstracts/search?q=uncanny%20valley" title=" uncanny valley"> uncanny valley</a>, <a href="https://publications.waset.org/abstracts/search?q=visualization" title=" visualization"> visualization</a>, <a href="https://publications.waset.org/abstracts/search?q=3D%20modeling" title=" 3D modeling"> 3D modeling</a> </p> <a href="https://publications.waset.org/abstracts/41558/highly-realistic-facial-expressions-of-anthropomorphic-social-agent-as-a-factor-in-solving-the-uncanny-valley-problem" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/41558.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">290</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">318</span> Emotion Recognition in Video and Images in the Wild</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Faizan%20Tariq">Faizan Tariq</a>, <a href="https://publications.waset.org/abstracts/search?q=Moayid%20Ali%20Zaidi"> Moayid Ali Zaidi</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Facial emotion recognition algorithms are expanding rapidly now a day. People are using different algorithms with different combinations to generate best results. There are six basic emotions which are being studied in this area. Author tried to recognize the facial expressions using object detector algorithms instead of traditional algorithms. Two object detection algorithms were chosen which are Faster R-CNN and YOLO. For pre-processing we used image rotation and batch normalization. The dataset I have chosen for the experiments is Static Facial Expression in Wild (SFEW). Our approach worked well but there is still a lot of room to improve it, which will be a future direction. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=face%20recognition" title="face recognition">face recognition</a>, <a href="https://publications.waset.org/abstracts/search?q=emotion%20recognition" title=" emotion recognition"> emotion recognition</a>, <a href="https://publications.waset.org/abstracts/search?q=deep%20learning" title=" deep learning"> deep learning</a>, <a href="https://publications.waset.org/abstracts/search?q=CNN" title=" CNN"> CNN</a> </p> <a href="https://publications.waset.org/abstracts/152635/emotion-recognition-in-video-and-images-in-the-wild" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/152635.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">187</span> </span> </div> </div> <ul class="pagination"> <li class="page-item disabled"><span class="page-link">&lsaquo;</span></li> <li class="page-item active"><span class="page-link">1</span></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=facial%20landmarks&amp;page=2">2</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=facial%20landmarks&amp;page=3">3</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=facial%20landmarks&amp;page=4">4</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=facial%20landmarks&amp;page=5">5</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=facial%20landmarks&amp;page=6">6</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=facial%20landmarks&amp;page=7">7</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=facial%20landmarks&amp;page=8">8</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=facial%20landmarks&amp;page=9">9</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=facial%20landmarks&amp;page=10">10</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=facial%20landmarks&amp;page=11">11</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=facial%20landmarks&amp;page=12">12</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=facial%20landmarks&amp;page=2" rel="next">&rsaquo;</a></li> </ul> </div> </main> <footer> <div id="infolinks" class="pt-3 pb-2"> <div class="container"> <div style="background-color:#f5f5f5;" class="p-3"> <div class="row"> <div class="col-md-2"> <ul class="list-unstyled"> About <li><a href="https://waset.org/page/support">About Us</a></li> <li><a href="https://waset.org/page/support#legal-information">Legal</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/WASET-16th-foundational-anniversary.pdf">WASET celebrates its 16th foundational anniversary</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Account <li><a href="https://waset.org/profile">My Account</a></li> </ul> </div> <div class="col-md-2"> <ul 
class="list-unstyled"> Explore <li><a href="https://waset.org/disciplines">Disciplines</a></li> <li><a href="https://waset.org/conferences">Conferences</a></li> <li><a href="https://waset.org/conference-programs">Conference Program</a></li> <li><a href="https://waset.org/committees">Committees</a></li> <li><a href="https://publications.waset.org">Publications</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Research <li><a href="https://publications.waset.org/abstracts">Abstracts</a></li> <li><a href="https://publications.waset.org">Periodicals</a></li> <li><a href="https://publications.waset.org/archive">Archive</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Open Science <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Science-Philosophy.pdf">Open Science Philosophy</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Science-Award.pdf">Open Science Award</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Society-Open-Science-and-Open-Innovation.pdf">Open Innovation</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Postdoctoral-Fellowship-Award.pdf">Postdoctoral Fellowship Award</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Scholarly-Research-Review.pdf">Scholarly Research Review</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Support <li><a href="https://waset.org/page/support">Support</a></li> <li><a href="https://waset.org/profile/messages/create">Contact Us</a></li> <li><a href="https://waset.org/profile/messages/create">Report Abuse</a></li> </ul> </div> </div> </div> </div> </div> <div class="container text-center"> <hr style="margin-top:0;margin-bottom:.3rem;"> <a href="https://creativecommons.org/licenses/by/4.0/" target="_blank" class="text-muted small">Creative Commons Attribution 4.0 International License</a> <div id="copy" class="mt-2">&copy; 2024 World Academy of Science, Engineering and Technology</div> </div> </footer> <a href="javascript:" id="return-to-top"><i class="fas fa-arrow-up"></i></a> <div class="modal" id="modal-template"> <div class="modal-dialog"> <div class="modal-content"> <div class="row m-0 mt-1"> <div class="col-md-12"> <button type="button" class="close" data-dismiss="modal" aria-label="Close"><span aria-hidden="true">&times;</span></button> </div> </div> <div class="modal-body"></div> </div> </div> </div> <script src="https://cdn.waset.org/static/plugins/jquery-3.3.1.min.js"></script> <script src="https://cdn.waset.org/static/plugins/bootstrap-4.2.1/js/bootstrap.bundle.min.js"></script> <script src="https://cdn.waset.org/static/js/site.js?v=150220211556"></script> <script> jQuery(document).ready(function() { /*jQuery.get("https://publications.waset.org/xhr/user-menu", function (response) { jQuery('#mainNavMenu').append(response); });*/ jQuery.get({ url: "https://publications.waset.org/xhr/user-menu", cache: false }).then(function(response){ jQuery('#mainNavMenu').append(response); }); }); </script> </body> </html>
