Search results for: facial swelling

Commenced in January 2007 | Frequency: Monthly | Edition: International | Paper Count: 587

557. Emotion Recognition in Video and Images in the Wild
Authors: Faizan Tariq, Moayid Ali Zaidi
Abstract: Facial emotion recognition algorithms are expanding rapidly nowadays, and researchers combine different algorithms to generate the best results. Six basic emotions are commonly studied in this area. The authors attempt to recognize facial expressions using object detection algorithms instead of traditional ones. Two object detection algorithms were chosen: Faster R-CNN and YOLO. Image rotation and batch normalization were used for pre-processing. The dataset chosen for the experiments is Static Facial Expressions in the Wild (SFEW). The approach worked well, but there is still considerable room for improvement, which will be a future direction.
Keywords: face recognition, emotion recognition, deep learning, CNN
Procedia: https://publications.waset.org/abstracts/152635/emotion-recognition-in-video-and-images-in-the-wild | PDF: https://publications.waset.org/abstracts/152635.pdf | Downloads: 187

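The object-detector formulation above can be illustrated with torchvision: a COCO-pretrained Faster R-CNN whose box-predictor head is replaced so it emits one of the basic emotion classes per detected face. This is a minimal sketch under assumed settings (seven emotion classes, a random tensor standing in for an SFEW frame), not the authors' implementation:

```python
import torch
from torchvision.models.detection import fasterrcnn_resnet50_fpn
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

# Treat each basic emotion as an object class, so the detector both finds
# the face and labels its expression (7 emotions + background).
NUM_CLASSES = 7 + 1

model = fasterrcnn_resnet50_fpn(weights="DEFAULT")  # COCO-pretrained backbone
in_features = model.roi_heads.box_predictor.cls_score.in_features
model.roi_heads.box_predictor = FastRCNNPredictor(in_features, NUM_CLASSES)

model.eval()
with torch.no_grad():
    image = torch.rand(3, 224, 224)       # stand-in for an SFEW frame
    detections = model([image])[0]        # dict: 'boxes', 'labels', 'scores'
print(detections["boxes"].shape, detections["labels"])
```

Fine-tuning on the SFEW annotations would then proceed with the usual torchvision detection training loop.
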
556. Curvelet Features with Mouth and Face Edge Ratios for Facial Expression Identification
Authors: S. Kherchaoui, A. Houacine
Abstract: This paper presents a facial expression recognition system that identifies and classifies the seven basic expressions: happiness, surprise, fear, disgust, sadness, anger, and the neutral state. It consists of three main parts. The first is the detection of the face and the corresponding facial features to extract the most expressive portion of the face, followed by normalization of the region of interest. Curvelet coefficients are then computed, with dimensionality reduction through principal component analysis. The resulting coefficients are combined with two ratios, the mouth ratio and the face edge ratio, to constitute the whole feature vector. The third step is classification of the emotional state using the SVM method in the feature space.
Keywords: facial expression identification, curvelet coefficient, support vector machine (SVM), recognition system
Procedia: https://publications.waset.org/abstracts/10311/curvelet-features-with-mouth-and-face-edge-ratios-for-facial-expression-identification | PDF: https://publications.waset.org/abstracts/10311.pdf | Downloads: 232

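The last two stages of this pipeline can be sketched with scikit-learn: PCA-reduced coefficients concatenated with the two ratios, then an SVM. The curvelet coefficients, ratios, and labels below are random placeholders, since the paper's actual feature extraction is not reproduced here:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# Placeholder feature matrix: rows = face images, columns = curvelet
# coefficients (a curvelet transform library would supply these in practice).
curvelet_feats = rng.normal(size=(200, 500))
mouth_ratio = rng.uniform(0.2, 0.8, size=(200, 1))      # hypothetical ratios
face_edge_ratio = rng.uniform(0.2, 0.8, size=(200, 1))
labels = rng.integers(0, 7, size=200)                   # 7 basic expressions

reduced = PCA(n_components=40).fit_transform(curvelet_feats)
X = np.hstack([reduced, mouth_ratio, face_edge_ratio])  # combined feature vector

clf = SVC(kernel="rbf").fit(X, labels)
print(clf.predict(X[:5]))
```
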
555. Three-Dimensional Measurement and Analysis of Facial Nerve Recess
Authors: Kang Shuo-Shuo, Li Jian-Nan, Yang Shiming
Abstract: Purpose: The three-dimensional anatomy of the facial nerve recess and its relationships were measured by high-resolution temporal bone CT to provide an imaging reference for cochlear implant surgery. Materials and Methods: High-resolution temporal bone CT scans of 160 cases (320 ears) were analyzed, and the following parameters were measured at the axial round window niche level: (1) the distance between the facial nerve and the chorda tympani, d1; (2) the distance between the facial nerve and the round window niche, d2; (3) the relative angle between the facial nerve and the round window niche, a; (4) the distance between the midpoint of the facial recess and the round window niche, d3; (5) the relative angle between the midpoint of the facial recess and the round window niche, b. Factors that might influence the anatomy of the facial recess were recorded, including the patient's sex, age, and anatomical variations (e.g., vestibular aqueduct dilation, mastoid pneumatization type, sigmoid sinus advancement, jugular bulb elevation), and the correlation between these factors and the measured facial recess parameters was analyzed. Results: The mean facial nerve-chorda tympani distance d1 was (3.92 ± 0.26) mm, the mean facial nerve-niche distance d2 was (5.95 ± 0.62) mm, the mean facial nerve-niche angle a was (94.61 ± 9.04)°, the mean recess-niche distance d3 was (6.46 ± 0.63) mm, and the mean recess-niche angle b was (113.47 ± 7.83)°. Sex, age, and an anteriorly positioned sigmoid sinus were the three factors affecting the facial recess width d1, the facial nerve angle a, and the facial recess angle b. Conclusion: High-resolution temporal bone CT before cochlear implantation can show the important anatomical relationships of the facial nerve recess, and the measurements have clinical reference value for cochlear implant surgery.
Keywords: cochlear implantation, recess of facial nerve, temporal bone CT, three-dimensional measurement
Procedia: https://publications.waset.org/abstracts/192591/three-dimensional-measurement-and-analysis-of-facial-nerve-recess | PDF: https://publications.waset.org/abstracts/192591.pdf | Downloads: 16

554. Peripheral Facial Nerve Palsy after Lip Augmentation
Authors: Sana Ilyas, Kishalaya Mukherjee, Suresh Shetty
Abstract: Lip augmentation has become more common in recent years. Patients do not expect to experience facial palsy after having lip augmentation. This poster presents the findings of such a case and discusses the possible pathophysiology and management. (This poster has been published as a paper in Dental Update, June 2022.) Aim: The aim of the study was to explore the link between facial nerve palsy and lip fillers, to review the literature surrounding facial nerve palsy, and to discuss the case of a patient who presented with facial nerve palsy of seemingly unknown cause. Methodology: A thorough assessment of the current literature on the topic was carried out, including papers retrieved through PubMed database searches and printed books on the topic. The case of a patient presenting with peripheral facial nerve palsy, which she associated with the lip augmentation she had undergone a day earlier, is discussed in detail. Results and Conclusion: Even though the pathophysiology of this presentation may not be clear, it is important to highlight uncommon presentations or complications that may occur after treatment. This can help with understanding and managing similar cases, should they arise. It is also important to differentiate cause from association in order to make an accurate diagnosis, which may be difficult when there is little scientific literature. Further research can therefore help improve understanding of the pathophysiology of similar presentations. As noted above, this poster has been published as a paper in Dental Update, June 2022, and therefore shares a similar conclusion.
Keywords: facial palsy, lip augmentation, causation and correlation, dental cosmetics
Procedia: https://publications.waset.org/abstracts/158439/peripheral-facial-nerve-palsy-after-lip-augmentation | PDF: https://publications.waset.org/abstracts/158439.pdf | Downloads: 148

553. DBN-Based Face Recognition System Using Light Field
Authors: Bing Gu
Abstract: Most conventional face recognition systems are based on image features such as LBP and SIFT. Recently, some DBN-based 2D face recognition systems have been proposed; however, there are few DBN-based 3D face recognition systems and related studies. 3D facial images carry all of an individual's biometric information, which can be used to build more accurate features. We therefore present a DBN-based face recognition system using light fields. A light field can be viewed as another representation of a 3D image, and a light field camera offers a way to capture one. We use a commercially available light field camera as the collector for our face recognition system, and the system achieves state-of-the-art performance while remaining as convenient as a conventional 2D face recognition system.
Keywords: DBN, face recognition, light field, Lytro
Procedia: https://publications.waset.org/abstracts/10821/dbn-based-face-recognition-system-using-light-field | PDF: https://publications.waset.org/abstracts/10821.pdf | Downloads: 464

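A DBN is conventionally built from stacked restricted Boltzmann machines trained layer by layer. The sketch below is a shallow scikit-learn stand-in for that idea, with random arrays in place of light-field-derived face images; it is not the paper's network:

```python
import numpy as np
from sklearn.neural_network import BernoulliRBM
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline

rng = np.random.default_rng(0)
X = rng.random((300, 64 * 64))   # flattened face images scaled to [0, 1]
y = rng.integers(0, 10, 300)     # subject identities (hypothetical)

# Two stacked RBMs play the role of the DBN's unsupervised feature layers;
# a logistic-regression head provides the supervised identity classifier.
dbn = Pipeline([
    ("rbm1", BernoulliRBM(n_components=256, learning_rate=0.05, n_iter=10)),
    ("rbm2", BernoulliRBM(n_components=64, learning_rate=0.05, n_iter=10)),
    ("clf", LogisticRegression(max_iter=1000)),
])
dbn.fit(X, y)
print(dbn.score(X, y))
```
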
552. Tick Induced Facial Nerve Paresis: A Narrative Review
Authors: Jemma Porrett
Abstract: Background: We present a literature review of research on tick paralysis resulting in facial nerve palsy, together with a case of an intra-aural paralysis tick bite resulting in unilateral facial nerve palsy. Methods: A novel case of otoacariasis with associated ipsilateral facial nerve involvement is presented. Additionally, we reviewed the literature, searching the MEDLINE and EMBASE databases for relevant studies published between 1915 and 2020. Using the keywords 'Ixodes', 'Facial paralysis', 'Tick bite', and 'Australia', 18 articles were deemed relevant to this study. Results: The eighteen articles included in the review comprised a total of 48 patients, whose ages ranged from one year to 84 years. Ten studies estimated the interval between the tick bite and facial nerve palsy, averaging 8.9 days. Forty-one patients presented with a single tick within the external auditory canal, three had a single tick located on the temple or forehead region, three had post-auricular ticks, and one patient had a remarkable 44 ticks removed from the face, scalp, neck, back, and limbs. Complete ipsilateral facial nerve palsy was present in 45 patients; notably, in 16 patients this occurred following tick removal. The House-Brackmann classification was utilised in 7 patients: four with grade 4, one with grade 3, and two with grade 2 facial nerve palsy. Thirty-eight patients had complete recovery of facial palsy. Thirteen studies were analysed for time to recovery, with an average of 19 days. Six patients had partial recovery at the time of follow-up. One article reported improvement in facial nerve palsy at 24 hours but no further follow-up. One patient was lost to follow-up, and one article failed to mention any resolution of the palsy. One patient died from respiratory arrest following generalized paralysis. Conclusions: Tick paralysis is a severe but preventable disease. Careful examination of the face, scalp, and external auditory canal should be conducted in patients presenting with otalgia and facial nerve palsy, particularly in tropical areas, to exclude the possibility of tick infestation.
Keywords: facial nerve palsy, tick bite, intra-aural, Australia
Procedia: https://publications.waset.org/abstracts/133035/tick-induced-facial-nerve-paresis-a-narrative-review | PDF: https://publications.waset.org/abstracts/133035.pdf | Downloads: 113

551. Improving Swelling Performance Using Industrial Waste Products
Authors: Mohieldin Elmashad, Salwa Yassin
Abstract: Expansive soils are regarded as among the most problematic unsaturated formations in the Egyptian arid zones and present a great challenge in civil engineering in general and geotechnical engineering in particular. Severe geotechnical complications and consequent structural damage arise from excessive and differential volumetric change upon wetting and change in water content. Different studies have examined the swelling performance of expansive soils using different additives, including phospho-gypsum as an industrial waste product. This paper describes the results of a comprehensive testing programme carried out to investigate the effect of phospho-gypsum (PG) and sodium chloride (NaCl), as an additive mixture, on the swelling performance of constituent samples of swelling soils. The constituent samples comprise commercial bentonite collected from a natural site, mixed with different percentages of the PG-NaCl mixture. The testing programme covered the physical and chemical properties of the constituent samples, and a mineralogical study using X-ray diffraction (XRD) was performed on the collected bentonite and on the bentonite mixed with the PG-NaCl mixture. The results showed significant improvement in the swelling performance of the tested samples with increasing PG-NaCl mixture content.
Keywords: expansive soils, industrial waste, mineralogical study, swelling performance, X-ray diffraction
Procedia: https://publications.waset.org/abstracts/60475/improving-swelling-performance-using-industrial-waste-products | PDF: https://publications.waset.org/abstracts/60475.pdf | Downloads: 270

550. The Effects of Affective Dimension of Face on Facial Attractiveness
Authors: Kyung-Ja Cho, Sun Jin Park
Abstract: This study examined which affective dimensions of the face affect facial attractiveness. Two orthogonal dimensions, sharp-soft and babyish-mature, were used to rate the facial attractiveness of women in their twenties. The research also investigated sex differences in the effect of these affective dimensions on attractiveness. The subjects comprised 15 males and 18 females, who viewed 330 photographs of women in their twenties. They rated the affective dimensions of the faces on sharp-soft and babyish-mature scales, and attractiveness on a charmless-charming scale. Responses were recorded on Likert scales scored from 1 to 9. Multiple regression analysis showed that subjects rated softer- and younger-looking faces as more attractive, with male and female subjects showing the same pattern of evaluation. This result means that the two affective dimensions have an effect on the estimation of attractiveness.
Keywords: affective dimension of faces, facial attractiveness, sharp-soft, babyish-mature
Procedia: https://publications.waset.org/abstracts/5442/the-effects-of-affective-dimension-of-face-on-facial-attractiveness | PDF: https://publications.waset.org/abstracts/5442.pdf | Downloads: 336

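The reported analysis is a multiple regression of attractiveness on the two dimension ratings. A minimal sketch with hypothetical ratings (the study's real data are not reproduced here) could look like this:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
# Hypothetical ratings on 9-point scales for 330 face photographs.
sharp_soft = rng.integers(1, 10, size=330)       # 1 = sharp ... 9 = soft
babyish_mature = rng.integers(1, 10, size=330)   # 1 = babyish ... 9 = mature
attractiveness = rng.integers(1, 10, size=330)   # 1 = charmless ... 9 = charming

X = np.column_stack([sharp_soft, babyish_mature])
model = LinearRegression().fit(X, attractiveness)
print("coefficients:", model.coef_, "intercept:", model.intercept_)
```
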
549. Facial Behavior Modifications Following the Diffusion of the Use of Protective Masks Due to COVID-19
Authors: Andreas Aceranti, Simonetta Vernocchi, Marco Colorato, Daniel Zaccariello
Abstract: Our study explores the usefulness of implementing facial expression recognition capabilities and of using the Facial Action Coding System (FACS) in contexts where the other person is wearing a mask. In the communication process, subjects use a plurality of distinct and autonomous reporting systems; among them, the system of facial movements is worthy of attention. Basic emotion theorists have identified specific and universal patterns of facial expressions related to seven basic emotions (anger, disgust, contempt, fear, sadness, surprise, and happiness) that distinguish one emotion from another. However, due to the COVID-19 pandemic, the lower half of the face is covered by masks and therefore cannot be investigated. Facial-emotional behavior is a good starting point for understanding: (1) affective state (such as emotions); (2) cognitive activity (perplexity, concentration, boredom); (3) temperament and personality traits (hostility, sociability, shyness); (4) psychopathology (such as diagnostic information relevant to depression, mania, schizophrenia, and less severe disorders); and (5) psychopathological processes that occur during social interactions between patient and analyst. There are numerous methods to measure facial movements resulting from muscle action, for example, coding systems for visible facial actions (non-intrusive systems that require an observer who encodes and categorizes behaviors) and the measurement of the electrical discharges of contracting muscles (facial electromyography, EMG). However, the measuring system devised by Ekman and Friesen (2002), the Facial Action Coding System (FACS), is the most comprehensive, complete, and versatile. Our study, carried out on about 1,500 subjects over three years of work, allowed us to highlight how movements of the hands and of the upper part of the face change depending on whether the subject wears a mask or not. We identified specific alterations in subjects' hand movement patterns and upper-face expressions while wearing masks compared to when not wearing them. We believe that finding correlations between how body language changes when facial expressions are impaired can provide a better understanding of the link between facial and bodily non-verbal language.
Keywords: facial action coding system, COVID-19, masks, facial analysis
Procedia: https://publications.waset.org/abstracts/160896/facial-behavior-modifications-following-the-diffusion-of-the-use-of-protective-masks-due-to-covid-19 | PDF: https://publications.waset.org/abstracts/160896.pdf | Downloads: 77

548. Dynamic Gabor Filter Facial Features-Based Recognition of Emotion in Video Sequences
Authors: T. Hari Prasath, P. Ithaya Rani
Abstract: In the world of visual technology, recognizing emotions from face images is a challenging task. Several related methods have not utilized dynamic facial features effectively for high performance. This paper proposes a method for emotion recognition using dynamic facial features with high performance. Initially, local features are captured by Gabor filters at different scales and orientations in each frame to locate the position and scale of the face part against varying backgrounds. The Gabor features are sent to an ensemble classifier for detecting Gabor facial features. The regions of dynamic features, which represent the dynamic variations of facial appearance, are captured from the Gabor facial features in consecutive frames. Each region of dynamic features is normalized using the Z-score method and further encoded into binary pattern features with the help of threshold values. The binary features are passed to a multi-class AdaBoost classifier, trained on a database containing happiness, sadness, surprise, fear, anger, disgust, and neutral expressions, to classify the discriminative dynamic features for emotion recognition. The developed method was deployed on the Ryerson Multimedia Research Lab and Cohn-Kanade databases, where it shows significant performance improvement over existing methods owing to its dynamic features.
Keywords: detecting face, Gabor filter, multi-class AdaBoost classifier, Z-score normalization
Procedia: https://publications.waset.org/abstracts/85005/dynamic-gabor-filter-facial-features-based-recognition-of-emotion-in-video-sequences | PDF: https://publications.waset.org/abstracts/85005.pdf | Downloads: 278

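Three of the stages named above (Gabor filtering at several scales and orientations, Z-score normalization, and multi-class AdaBoost) can be sketched as follows. The frames and labels are random placeholders, and the binary encoding step is omitted for brevity, so this is an illustration of the building blocks rather than the paper's method:

```python
import cv2
import numpy as np
from sklearn.ensemble import AdaBoostClassifier

def gabor_features(gray_frame):
    """Filter a frame with Gabor kernels at two scales, four orientations."""
    feats = []
    for sigma in (2.0, 4.0):
        for theta in np.arange(0, np.pi, np.pi / 4):
            kernel = cv2.getGaborKernel((21, 21), sigma, theta, 10.0, 0.5)
            response = cv2.filter2D(gray_frame, cv2.CV_32F, kernel)
            feats.append(response.mean())   # summary statistics per kernel
            feats.append(response.var())
    return np.array(feats)

def zscore(x):
    """Z-score normalization: zero mean, unit variance."""
    return (x - x.mean()) / (x.std() + 1e-8)

rng = np.random.default_rng(0)
frames = rng.integers(0, 256, size=(100, 64, 64)).astype(np.uint8)  # stand-ins
X = np.vstack([zscore(gabor_features(f)) for f in frames])
y = rng.integers(0, 7, size=100)  # 7 expression classes

clf = AdaBoostClassifier(n_estimators=100).fit(X, y)
print(clf.predict(X[:5]))
```
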
547. Improving the Dimensional Stability of Bamboo Woven Strand Board
Authors: Gulelat Gatew
Abstract: Bamboo woven strand board (WSB) products are manufactured from Ethiopian highland bamboo (Yushania alpina) as a multi-layer mat structure for enhanced mechanical performance, and they show mechanical properties similar to tropical hardwood products. WSB therefore constitutes a sustainable alternative to tropical hardwood products. The resin and wax ratios have a great influence on the determinant properties of product quality, such as internal bonding, water absorption, thickness swelling, bending, and stiffness. Because of the hygroscopic nature of bamboo, thickness swelling and water absorption are especially important to WSB performance in construction and outdoor facilities. When WSB is exposed to water or a moist environment, it tends to swell and absorb water in all directions; the degree of swelling and water absorption depends on the type of resin used, the resin formulation, the resin ratio, and the wax type and ratio. The objective of this research is to investigate the effects of phenol formaldehyde and wax on the thickness swelling and water absorption behavior of bamboo WSB for construction and outdoor facilities. Experiments measured the effects of wax and phenol-formaldehyde resin content on WSB thickness swelling and water absorption, and hence on dimensional stability and mechanical properties. Both experiments were performed with 2-hour and 24-hour water immersion tests, and a significant set of data on the influence of these parameters is presented. The addition of up to 2% wax with 10% phenol formaldehyde significantly reduced the thickness swelling and water absorption of WSB, making it more hydrophobic and less susceptible to moisture in high-humidity conditions compared to panels without wax.
Keywords: woven strand board (WSB), water absorption, thickness swelling, phenol formaldehyde resin
Procedia: https://publications.waset.org/abstracts/54164/improving-the-dimensional-stability-of-bamboo-woven-strand-board | PDF: https://publications.waset.org/abstracts/54164.pdf | Downloads: 211

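The two properties measured here are conventionally reported as percentage changes over the immersion period. A small sketch with hypothetical specimen readings, assuming the usual (soaked - initial) / initial definitions:

```python
def thickness_swelling(t_initial_mm, t_soaked_mm):
    """Thickness swelling (%) after water immersion."""
    return 100.0 * (t_soaked_mm - t_initial_mm) / t_initial_mm

def water_absorption(w_initial_g, w_soaked_g):
    """Water absorption (%) after water immersion."""
    return 100.0 * (w_soaked_g - w_initial_g) / w_initial_g

# Hypothetical WSB specimen measured before and after a 24-hour soak.
print(thickness_swelling(12.0, 12.9))  # -> 7.5 (%)
print(water_absorption(85.0, 102.0))   # -> 20.0 (%)
```
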
546. In vivo Mechanical Characterization of Facial Skin Combining Digital Image Correlation and Finite Element
Authors: Huixin Wei, Shibin Wang, Linan Li, Lei Zhou, Xinhao Tu
Abstract: Facial skin is a biomedical material with complex mechanical properties: anisotropy, viscoelasticity, and hyperelasticity. These properties are crucial for a number of applications, including facial plastic surgery, animation, dermatology, the cosmetic industry, and impact biomechanics. Skin is a complex multi-layered material which can be broadly divided into three main layers: the epidermis, the dermis, and the hypodermis. Collagen fibers account for 75% of the dry weight of dermal tissue, and these fibers are responsible for the mechanical properties of skin. Research on the anisotropic mechanical properties has mainly been conducted in vitro, but there are great differences between the in vivo and in vitro mechanical properties of skin. In this study, we present a method to measure the mechanical properties of facial skin in vivo. Digital image correlation (DIC) and indentation tests were used to obtain the experimental data, namely the deformation of the facial surface and the indentation force-displacement curve. The experiment was then simulated using a finite element (FE) model. Computed tomography (CT) and reconstruction techniques were applied to obtain the real tissue geometry, yielding a three-dimensional FE model of facial skin as a bi-layer system. As the epidermis is relatively thin, the epidermis and dermis were treated as a single layer, with the hypodermis below. The upper layer was modeled with a Gasser-Ogden-Holzapfel (GOH) model to describe the hyperelastic and anisotropic behavior of the dermis, and the lower layer was modeled as linear elastic. The material properties of the two layers were determined by minimizing the error between the FE predictions and the experimental data.
Keywords: facial skin, indentation test, finite element, digital image correlation, computed tomography
Procedia: https://publications.waset.org/abstracts/104687/in-vivo-mechanical-characterization-of-facial-skin-combining-digital-image-correlation-and-finite-element | PDF: https://publications.waset.org/abstracts/104687.pdf | Downloads: 112

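The final identification step is an inverse problem: choose material parameters so the FE model reproduces the measured indentation curve. The sketch below shows only that minimization structure; `fe_predicted_force` is a hypothetical toy stand-in for a real FE solve of the GOH bi-layer model, and the measured data are synthetic:

```python
import numpy as np
from scipy.optimize import minimize

# Placeholder indentation force-displacement data.
depth = np.linspace(0.0, 2.0, 20)     # mm
force_measured = 0.8 * depth**1.5     # N, stand-in for DIC/indentation data

def fe_predicted_force(params, depth):
    """Stand-in for a real FE solve: the force the bi-layer model would
    predict for the given material parameters (here a toy power law)."""
    stiffness, exponent = params
    return stiffness * depth**exponent

def objective(params):
    residual = fe_predicted_force(params, depth) - force_measured
    return np.sum(residual**2)   # squared error between FE and experiment

result = minimize(objective, x0=[1.0, 1.0], method="Nelder-Mead")
print("identified parameters:", result.x)
```
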
545. Analysis and Detection of Facial Expressions in Autism Spectrum Disorder People Using Machine Learning
Authors: Muhammad Maisam Abbas, Salman Tariq, Usama Riaz, Muhammad Tanveer, Humaira Abdul Ghafoor
Abstract: Autism Spectrum Disorder (ASD) refers to a developmental disorder that impairs an individual's communication and interaction ability. Affected individuals find it difficult to read facial expressions while communicating or interacting. Facial Expression Recognition (FER) is a method of classifying basic human expressions, i.e., happiness, fear, surprise, sadness, disgust, neutrality, and anger, from static and dynamic sources. This paper conducts a comprehensive comparison and proposes the optimal method for a continuing research project: a system that can assist people who have ASD in recognizing facial expressions. The comparison covers three supervised learning algorithms, EigenFace, FisherFace, and LBPH, trained and tested on the JAFFE, CK+, and TFEID (I & II) datasets, with results evaluated in terms of variance, standard deviation, and accuracy. The experiments showed that FisherFace has the highest accuracy on all datasets and is considered the best algorithm to implement in the system.
Keywords: autism spectrum disorder, ASD, EigenFace, facial expression recognition, FisherFace, local binary pattern histogram, LBPH
Procedia: https://publications.waset.org/abstracts/129718/analysis-and-detection-of-facial-expressions-in-autism-spectrum-disorder-people-using-machine-learning | PDF: https://publications.waset.org/abstracts/129718.pdf | Downloads: 174

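All three compared algorithms are available through OpenCV's contrib `cv2.face` module (package `opencv-contrib-python`), so the comparison loop can be sketched as follows; random arrays stand in for the JAFFE/CK+/TFEID face crops, and the variance/standard-deviation evaluation is omitted:

```python
import cv2
import numpy as np

rng = np.random.default_rng(0)
# Placeholder 64x64 grayscale face crops, all the same size as the
# Eigen/Fisher recognizers require.
faces = [rng.integers(0, 256, (64, 64)).astype(np.uint8) for _ in range(40)]
labels = np.array([i % 7 for i in range(40)], dtype=np.int32)  # 7 expressions

recognizers = {
    "EigenFace": cv2.face.EigenFaceRecognizer_create(),
    "FisherFace": cv2.face.FisherFaceRecognizer_create(),
    "LBPH": cv2.face.LBPHFaceRecognizer_create(),
}
for name, rec in recognizers.items():
    rec.train(faces, labels)
    pred, conf = rec.predict(faces[0])   # (predicted label, confidence)
    print(name, pred, conf)
```
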
544. Data Collection Techniques for Robotics to Identify the Facial Expressions of Traumatic Brain Injured Patients
Authors: Chaudhary Muhammad Aqdus Ilyas, Matthias Rehm, Kamal Nasrollahi, Thomas B. Moeslund
Abstract: This paper investigates data collection procedures associated with robots placed with traumatic brain injured (TBI) patients for rehabilitation through facial expression and mood analysis. Rehabilitation after TBI is crucial due to the nature of the injury and the variation in recovery time. It is advantageous to analyze these emotional signals in a contactless manner because of patients' non-supportive behavior, limited muscle movements, and increase in negative emotional expressions. This work aims at developing a framework in which robots recognize TBI patients' emotions through facial expressions in order to perform rehabilitation tasks involving physical, cognitive, or interactive activities. The results show that, with customized data collection strategies, the proposed framework identifies facial and emotional expressions more accurately, which can be utilized to enhance recovery treatment and social interaction in a robotic context.
Keywords: computer vision, convolutional neural network - long short-term memory network (CNN-LSTM), facial expression and mood recognition, multimodal (RGB-thermal) analysis, rehabilitation, robots, traumatic brain injured patients
Procedia: https://publications.waset.org/abstracts/98560/data-collection-techniques-for-robotics-to-identify-the-facial-expressions-of-traumatic-brain-injured-patients | PDF: https://publications.waset.org/abstracts/98560.pdf | Downloads: 155

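The CNN-LSTM named in the keywords pairs per-frame convolutional features with a recurrent layer over the video sequence. A minimal PyTorch sketch of that architecture, with assumed shapes and class count rather than the authors' configuration:

```python
import torch
import torch.nn as nn

class CNNLSTM(nn.Module):
    """Per-frame CNN features fed to an LSTM over the video sequence."""
    def __init__(self, n_classes=7):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(4),
            nn.Flatten(),                      # -> 32 * 4 * 4 = 512 features
        )
        self.lstm = nn.LSTM(input_size=512, hidden_size=128, batch_first=True)
        self.head = nn.Linear(128, n_classes)

    def forward(self, clips):                  # clips: (batch, time, 3, H, W)
        b, t = clips.shape[:2]
        feats = self.cnn(clips.flatten(0, 1)).view(b, t, -1)
        out, _ = self.lstm(feats)
        return self.head(out[:, -1])           # classify from the last step

logits = CNNLSTM()(torch.rand(2, 8, 3, 64, 64))
print(logits.shape)  # torch.Size([2, 7])
```
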
543. Characterization of a LiFePO₄ Battery Cell with Mechanical Responses
Authors: Ki-Yong Oh, Eunji Kwak, Due Su Son, Siheon Jung
Abstract: A 10 Ah pouch-type LiFePO₄ battery cell is characterized through two mechanical responses: swelling and bulk force. Both responses vary significantly with the state of charge, whereas the voltage response is flat, suggesting that mechanical responses can serve as a sensitive gauge of the microstructural transformation of a battery cell. The derivative of swelling s with respect to capacity Q, ds/dQ, and the derivative of force F with respect to capacity Q, dF/dQ, identify the phase transitions of the cathode and anode electrodes over the charge process more clearly than the derivative of voltage V with respect to capacity Q, dV/dQ. In particular, the force-versus-swelling curves over the state of charge clearly elucidate three different stiffnesses originating from phase transitions: the α-phase, the β-phase, and the metastable solid-solution phase. These observations suggest that the macro-scale mechanical responses of a battery cell are directly correlated with its microscopic transformations.
Keywords: force response, LiFePO₄ battery, strain response, stress response, swelling response
Procedia: https://publications.waset.org/abstracts/97098/characterization-of-a-lifeop4-battery-cell-with-mechanical-responses | PDF: https://publications.waset.org/abstracts/97098.pdf | Downloads: 170

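The derivative signals ds/dQ and dF/dQ can be formed numerically from sampled charge data. A sketch with synthetic curves standing in for the pouch-cell measurements:

```python
import numpy as np

# Placeholder charge data for a 10 Ah cell: capacity (Ah), swelling (mm),
# and bulk force (N) sampled over a full charge.
q = np.linspace(0.0, 10.0, 101)
swelling = 0.02 * q + 0.05 * np.sin(0.8 * q)   # stand-in measurements
force = 5.0 * q + 12.0 * np.sin(0.8 * q)

ds_dq = np.gradient(swelling, q)   # ds/dQ
df_dq = np.gradient(force, q)      # dF/dQ

# Peaks in these derivatives flag phase transitions that a flat voltage
# profile (typical of LiFePO4) would hide.
print(q[np.argmax(ds_dq)], q[np.argmax(df_dq)])
```
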
542. Assessment of Hygroscopic Characteristics of Hevea brasiliensis Wood
Authors: John Tosin Aladejana
Abstract: Wood behaves differently under different environmental conditions, so knowledge of its hygroscopic nature is a key factor in selecting wood for use and determining the required treatment. This study assessed the hygroscopic behaviour of Hevea brasiliensis (rubber) wood. Void volume, volumetric swelling in the tangential, radial, and longitudinal directions, and volumetric shrinkage were used to assess the response of the wood when losing or taking up moisture. Samples cut into 20 × 20 × 60 mm pieces, taken longitudinally and transversely, were used for the study and oven-dried at 103 ± 2 °C. The mean moisture contents of green Hevea brasiliensis wood were 49.74%, 51.14%, and 54.36% for the top, middle, and bottom portions, respectively, and 51.77%, 50.02%, and 53.45% for the outer, middle, and inner portions of the tree, respectively. The volumetric shrinkage and swelling values indicated that both were highest at the top part of H. brasiliensis, that longitudinal shrinkage was negligible, and that the tangential direction showed the highest shrinkage. Void volumes of 43.0%, 39.0%, and 38.0% were obtained at the top, middle, and bottom, respectively. The results clarify the density of Hevea brasiliensis wood by position and portion within the tree, as well as the variation in moisture content, void volume, volumetric shrinkage, and swelling. This provides information for drying Hevea brasiliensis wood to ensure better quality devoid of defects.
Keywords: moisture content, shrinkage, swelling, void volume
Procedia: https://publications.waset.org/abstracts/78996/assessment-of-hygroscopic-characteristics-of-hevea-brasiliensis-wood | PDF: https://publications.waset.org/abstracts/78996.pdf | Downloads: 274

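The quantities reported here follow standard wood-science definitions: moisture content on an oven-dry basis and volumetric shrinkage from the green to the oven-dry state. A small sketch with hypothetical sample readings (not the study's data):

```python
def moisture_content(green_g, oven_dry_g):
    """Moisture content (%) on an oven-dry basis."""
    return 100.0 * (green_g - oven_dry_g) / oven_dry_g

def volumetric_shrinkage(v_green_cm3, v_dry_cm3):
    """Volumetric shrinkage (%) from green to oven-dry."""
    return 100.0 * (v_green_cm3 - v_dry_cm3) / v_green_cm3

# Hypothetical 20 x 20 x 60 mm Hevea brasiliensis sample.
print(moisture_content(18.5, 12.3))      # ~50.4 %, near the reported range
print(volumetric_shrinkage(24.0, 21.9))  # ~8.8 %
```
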
541. Facial Expression Recognition Using Sparse Gaussian Conditional Random Field
Authors: Mohammadamin Abbasnejad
Abstract: The analysis of expression and detection of facial Action Units (AUs) are important tasks in computer vision and Human-Computer Interaction (HCI) due to their wide range of applications in human life. Much work has been done in recent years, each with its own advantages and disadvantages. In this work, we present a new model based on the Gaussian Conditional Random Field. We solve the objective problem using ADMM and show how well the proposed model works. We train and test our work on two facial expression datasets, CK+ and RU-FACS. Experimental evaluation shows that the proposed approach outperforms state-of-the-art expression recognition methods.
Keywords: Gaussian Conditional Random Field, ADMM, convergence, gradient descent
Procedia: https://publications.waset.org/abstracts/26245/facial-expression-recognition-using-sparse-gaussian-conditional-random-field | PDF: https://publications.waset.org/abstracts/26245.pdf | Downloads: 356

540. When and Why Unhappy People Avoid Enjoyable Experiences
Authors: Hao Shen, Aparna Labroo
Abstract: Across four studies, we show that people in a negative mood avoid anticipated enjoyable experiences because of the subjective difficulty of simulating those experiences, and they misattribute these feelings of difficulty to reduced pleasantness of the anticipated experience. We observe the avoidance of enjoyable experiences only for anticipated experiences that involve smile-like facial-muscular simulation. When the need for facial-muscular simulation is attenuated, or when the anticipated experience relies on facial-muscular simulation to a lesser extent, people in a negative mood no longer avoid enjoyable experiences, but rather seek such experiences because they fit better with their ongoing mood-repair goals.
Keywords: emotion regulation, mood repair, embodiment, anticipated experiences
Procedia: https://publications.waset.org/abstracts/3212/when-and-why-unhappy-people-avoid-enjoyable-experiences | PDF: https://publications.waset.org/abstracts/3212.pdf | Downloads: 429

<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=emotion%20regulation" title="emotion regulation">emotion regulation</a>, <a href="https://publications.waset.org/abstracts/search?q=mood%20repair" title=" mood repair"> mood repair</a>, <a href="https://publications.waset.org/abstracts/search?q=embodiment" title=" embodiment"> embodiment</a>, <a href="https://publications.waset.org/abstracts/search?q=anticipated%20experiences" title=" anticipated experiences"> anticipated experiences</a> </p> <a href="https://publications.waset.org/abstracts/3212/when-and-why-unhappy-people-avoid-enjoyable-experiences" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/3212.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">429</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">539</span> Synthesis of Crosslinked Konjac Glucomannan and Kappa Carrageenan Film with Glutaraldehyde</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Sperisa%20Distantina">Sperisa Distantina</a>, <a href="https://publications.waset.org/abstracts/search?q=Fadilah"> Fadilah</a>, <a href="https://publications.waset.org/abstracts/search?q=Mujtahid%20Kaavessina"> Mujtahid Kaavessina</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Crosslinked konjac glucomannan and kappa carrageenan film were prepared by chemical crosslinking using glutaraldehyde (GA) as the crosslinking agent. The effect crosslinking on the swelling degree was investigated. Konjac glucomanan and its mixture with kappa carragenan film was immersed in GA solution and then thermally cured. The obtained crosslinked film was washed and soaked in the ethanol to remove the unreacted GA. The obtained film was air dried at room temperature to a constant weight. The infrared spectra and the value of swelling degree of obtained crosslinked film showed that glucomannan and kappa carrageenan was able to be crosslinked using glutaraldehyde by film immersion and curing method without catalyst. The crosslinked films were found to be pH sensitive, indicating a potential to be used in drug delivery polymer system. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=crosslinking" title="crosslinking">crosslinking</a>, <a href="https://publications.waset.org/abstracts/search?q=glucomannan" title=" glucomannan"> glucomannan</a>, <a href="https://publications.waset.org/abstracts/search?q=carrageenan" title=" carrageenan"> carrageenan</a>, <a href="https://publications.waset.org/abstracts/search?q=swelling" title=" swelling"> swelling</a> </p> <a href="https://publications.waset.org/abstracts/28493/synthesis-of-crosslinked-konjac-glucomannan-and-kappa-carrageenan-film-with-glutaraldehyde" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/28493.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">279</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">538</span> A Theoretical Study on Pain Assessment through Human Facial Expresion</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Mrinal%20Kanti%20Bhowmik">Mrinal Kanti Bhowmik</a>, <a href="https://publications.waset.org/abstracts/search?q=Debanjana%20Debnath%20Jr."> Debanjana Debnath Jr.</a>, <a href="https://publications.waset.org/abstracts/search?q=Debotosh%20Bhattacharjee"> Debotosh Bhattacharjee</a> </p> <p class="card-text"><strong>Abstract:</strong></p> A facial expression is undeniably the human manners. It is a significant channel for human communication and can be applied to extract emotional features accurately. People in pain often show variations in facial expressions that are readily observable to others. A core of actions is likely to occur or to increase in intensity when people are in pain. To illustrate the changes in the facial appearance, a system known as Facial Action Coding System (FACS) is pioneered by Ekman and Friesen for human observers. According to Prkachin and Solomon, a set of such actions carries the bulk of information about pain. Thus, the Prkachin and Solomon pain intensity (PSPI) metric is defined. So, it is very important to notice that facial expressions, being a behavioral source in communication media, provide an important opening into the issues of non-verbal communication in pain. People express their pain in many ways, and this pain behavior is the basis on which most inferences about pain are drawn in clinical and research settings. Hence, to understand the roles of different pain behaviors, it is essential to study the properties. For the past several years, the studies are concentrated on the properties of one specific form of pain behavior i.e. facial expression. This paper represents a comprehensive study on pain assessment that can model and estimate the intensity of pain that the patient is suffering. It also reviews the historical background of different pain assessment techniques in the context of painful expressions. Different approaches incorporate FACS from psychological views and a pain intensity score using the PSPI metric in pain estimation. This paper investigates in depth analysis of different approaches used in pain estimation and presents different observations found from each technique. It also offers a brief study on different distinguishing features of real and fake pain. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=facial%20action%20coding%20system%20%28FACS%29" title="facial action coding system (FACS)">facial action coding system (FACS)</a>, <a href="https://publications.waset.org/abstracts/search?q=pain" title=" pain"> pain</a>, <a href="https://publications.waset.org/abstracts/search?q=pain%20behavior" title=" pain behavior"> pain behavior</a>, <a href="https://publications.waset.org/abstracts/search?q=Prkachin%20and%20Solomon%20pain%20intensity%20%28PSPI%29" title=" Prkachin and Solomon pain intensity (PSPI)"> Prkachin and Solomon pain intensity (PSPI)</a> </p> <a href="https://publications.waset.org/abstracts/22187/a-theoretical-study-on-pain-assessment-through-human-facial-expresion" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/22187.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">346</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">537</span> Conductive Clay Nanocomposite Using Smectite and Poly(O-Anisidine)</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=M.%20%C5%9Eahi%CC%87n">M. Şahi̇n</a>, <a href="https://publications.waset.org/abstracts/search?q=E.%20Erdem"> E. Erdem</a>, <a href="https://publications.waset.org/abstracts/search?q=M.%20Sa%C3%A7ak"> M. Saçak</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In this study, Na-smectite crystals purified from bentonite were used after being swollen with benzyltributylammonium bromide (BTBAB), an alkyl ammonium salt. The swelling process was carried out using 0.2 g of BTBAB per 0.8 g of smectite with 4 h of mixing, after conditions such as mixing time and swelling-agent amount had been investigated. A conductive poly(o-anisidine) (POA)/smectite nanocomposite was then prepared in the presence of the swollen Na-smectite using ammonium persulfate (APS) as the oxidant in an aqueous acidic medium. The POA content and conductivity of the prepared nanocomposite were systematically investigated as a function of polymerization conditions such as the treatment time of the swollen smectite in the monomer solution and the o-anisidine/APS mole ratio. The POA/smectite nanocomposite was characterized by XRD, FTIR, and SEM and compared with the individual components of the composite. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=clay" title="clay">clay</a>, <a href="https://publications.waset.org/abstracts/search?q=composite" title=" composite"> composite</a>, <a href="https://publications.waset.org/abstracts/search?q=conducting%20polymer" title=" conducting polymer"> conducting polymer</a>, <a href="https://publications.waset.org/abstracts/search?q=poly%28o-anisidine%29" title=" poly(o-anisidine) "> poly(o-anisidine) </a> </p> <a href="https://publications.waset.org/abstracts/37132/conductive-clay-nanocomposite-using-smectite-and-polyo-anisidine" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/37132.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">325</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">536</span> Thermo-Elastic and Self-Healing Polyacrylamide: 2D Polymer Composite Hydrogels for Water Shutoff Treatment</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Edreese%20H.%20Alsharaeh">Edreese H. Alsharaeh</a>, <a href="https://publications.waset.org/abstracts/search?q=Feven%20Mattews%20Michael"> Feven Mattews Michael</a>, <a href="https://publications.waset.org/abstracts/search?q=Ayman%20Almohsin"> Ayman Almohsin</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Self-healing hydrogels have many advantages since they can resist various types of stresses, including tension, compression, and shear, making them attractive for various applications. In this study, thermo-elastic and self-healing polymer composite hydrogels were prepared from polyacrylamide (PAM) and 2D fillers using in-situ method. In addition, the PAM and fillers were prepared in presence of organic crosslinkers, i.e., hydroquinone (HQ) and hexamethylenediamine (HMT). The swelling behavior of the prepared hydrogels was studied by hydrating the dried hydrogels. The thermal and rheological properties of the prepared hydrogels were evaluated before and after swelling study using thermogravimetric analysis, differential scanning calorimetric technique and dynamic mechanical analysis. From the results obtained, incorporating fillers into the PAM matrix enhanced the swelling degree of the hydrogels with satisfactory mechanical properties, attaining up to 77% self-healing efficiency compared to the neat-PAM (i.e., 29%). This, in turn, indicates addition of 2D fillers improved self-healing properties of the polymer hydrogel, thus, making the prepared hydrogels applicable for water shutoff treatments under high temperature. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=polymer%20hydrogels" title="polymer hydrogels">polymer hydrogels</a>, <a href="https://publications.waset.org/abstracts/search?q=2D%20fillers" title=" 2D fillers"> 2D fillers</a>, <a href="https://publications.waset.org/abstracts/search?q=elastic%20self-healing%20hydrogels" title=" elastic self-healing hydrogels"> elastic self-healing hydrogels</a>, <a href="https://publications.waset.org/abstracts/search?q=water%20shutoff" title=" water shutoff"> water shutoff</a>, <a href="https://publications.waset.org/abstracts/search?q=swelling%20properties" title=" swelling properties"> swelling properties</a> </p> <a href="https://publications.waset.org/abstracts/110794/thermo-elastic-and-self-healing-polyacrylamide-2d-polymer-composite-hydrogels-for-water-shutoff-treatment" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/110794.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">145</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">535</span> Gender Recognition with Deep Belief Networks</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Xiaoqi%20Jia">Xiaoqi Jia</a>, <a href="https://publications.waset.org/abstracts/search?q=Qing%20Zhu"> Qing Zhu</a>, <a href="https://publications.waset.org/abstracts/search?q=Hao%20Zhang"> Hao Zhang</a>, <a href="https://publications.waset.org/abstracts/search?q=Su%20Yang"> Su Yang</a> </p> <p class="card-text"><strong>Abstract:</strong></p> A gender recognition system is able to tell the gender of the given person through a few of frontal facial images. An effective gender recognition approach enables to improve the performance of many other applications, including security monitoring, human-computer interaction, image or video retrieval and so on. In this paper, we present an effective method for gender classification task in frontal facial images based on deep belief networks (DBNs), which can pre-train model and improve accuracy a little bit. Our experiments have shown that the pre-training method with DBNs for gender classification task is feasible and achieves a little improvement of accuracy on FERET and CAS-PEAL-R1 facial datasets. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=gender%20recognition" title="gender recognition">gender recognition</a>, <a href="https://publications.waset.org/abstracts/search?q=beep%20belief%20net-works" title=" beep belief net-works"> beep belief net-works</a>, <a href="https://publications.waset.org/abstracts/search?q=semi-supervised%20learning" title=" semi-supervised learning"> semi-supervised learning</a>, <a href="https://publications.waset.org/abstracts/search?q=greedy-layer%20wise%20RBMs" title=" greedy-layer wise RBMs"> greedy-layer wise RBMs</a> </p> <a href="https://publications.waset.org/abstracts/56147/gender-recognition-with-deep-belief-networks" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/56147.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">453</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">534</span> Improved Feature Extraction Technique for Handling Occlusion in Automatic Facial Expression Recognition</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Khadijat%20T.%20Bamigbade">Khadijat T. Bamigbade</a>, <a href="https://publications.waset.org/abstracts/search?q=Olufade%20F.%20W.%20Onifade"> Olufade F. W. Onifade</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The field of automatic facial expression analysis has been an active research area in the last two decades. Its vast applicability in various domains has drawn so much attention into developing techniques and dataset that mirror real life scenarios. Many techniques such as Local Binary Patterns and its variants (CLBP, LBP-TOP) and lately, deep learning techniques, have been used for facial expression recognition. However, the problem of occlusion has not been sufficiently handled, making their results not applicable in real life situations. This paper develops a simple, yet highly efficient method tagged Local Binary Pattern-Histogram of Gradient (LBP-HOG) with occlusion detection in face image, using a multi-class SVM for Action Unit and in turn expression recognition. Our method was evaluated on three publicly available datasets which are JAFFE, CK, SFEW. Experimental results showed that our approach performed considerably well when compared with state-of-the-art algorithms and gave insight to occlusion detection as a key step to handling expression in wild. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=automatic%20facial%20expression%20analysis" title="automatic facial expression analysis">automatic facial expression analysis</a>, <a href="https://publications.waset.org/abstracts/search?q=local%20binary%20pattern" title=" local binary pattern"> local binary pattern</a>, <a href="https://publications.waset.org/abstracts/search?q=LBP-HOG" title=" LBP-HOG"> LBP-HOG</a>, <a href="https://publications.waset.org/abstracts/search?q=occlusion%20detection" title=" occlusion detection"> occlusion detection</a> </p> <a href="https://publications.waset.org/abstracts/105048/improved-feature-extraction-technique-for-handling-occlusion-in-automatic-facial-expression-recognition" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/105048.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">169</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">533</span> Human-Machine Cooperation in Facial Comparison Based on Likelihood Scores</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Lanchi%20Xie">Lanchi Xie</a>, <a href="https://publications.waset.org/abstracts/search?q=Zhihui%20Li"> Zhihui Li</a>, <a href="https://publications.waset.org/abstracts/search?q=Zhigang%20Li"> Zhigang Li</a>, <a href="https://publications.waset.org/abstracts/search?q=Guiqiang%20Wang"> Guiqiang Wang</a>, <a href="https://publications.waset.org/abstracts/search?q=Lei%20Xu"> Lei Xu</a>, <a href="https://publications.waset.org/abstracts/search?q=Yuwen%20Yan"> Yuwen Yan</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Image-based facial features can be classified into category recognition features and individual recognition features. Current automated face recognition systems extract a specific feature vector of different dimensions from a facial image according to their pre-trained neural network. However, to improve the efficiency of parameter calculation, an algorithm generally reduces the image details by pooling. The operation will overlook the details concerned much by forensic experts. In our experiment, we adopted a variety of face recognition algorithms based on deep learning, compared a large number of naturally collected face images with the known data of the same person's frontal ID photos. Downscaling and manual handling were performed on the testing images. The results supported that the facial recognition algorithms based on deep learning detected structural and morphological information and rarely focused on specific markers such as stains and moles. Overall performance, distribution of genuine scores and impostor scores, and likelihood ratios were tested to evaluate the accuracy of biometric systems and forensic experts. Experiments showed that the biometric systems were skilled in distinguishing category features, and forensic experts were better at discovering the individual features of human faces. In the proposed approach, a fusion was performed at the score level. At the specified false accept rate, the framework achieved a lower false reject rate. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=likelihood%20%20ratio" title="likelihood ratio">likelihood ratio</a>, <a href="https://publications.waset.org/abstracts/search?q=automated%20facial%20recognition" title=" automated facial recognition"> automated facial recognition</a>, <a href="https://publications.waset.org/abstracts/search?q=facial%20comparison" title=" facial comparison"> facial comparison</a>, <a href="https://publications.waset.org/abstracts/search?q=biometrics" title=" biometrics"> biometrics</a> </p> <a href="https://publications.waset.org/abstracts/110802/human-machine-cooperation-in-facial-comparison-based-on-likelihood-scores" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/110802.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">130</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">532</span> The Effect of Experimentally Induced Stress on Facial Recognition Ability of Security Personnel</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Zunjarrao%20Kadam">Zunjarrao Kadam</a>, <a href="https://publications.waset.org/abstracts/search?q=Vikas%20Minchekar"> Vikas Minchekar</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Facial recognition is an important task in criminal investigation procedure. Security guards, who constantly watch people, can help to identify suspects, and forensic psychologists tackle such cases in the criminal justice system. Security personnel may, however, lose their ability to correctly identify persons owing to the constant stress of performing their duty. The present study aimed to identify the effect of experimentally induced stress on the facial recognition ability of security personnel. For this study, 50 security guards from the cities of Sangli, Miraj, and Jaysingpur in the Maharashtra state of India were recruited. A randomized two-group design was employed. In the initial condition, twenty identity-card-size photographs were shown to both groups. Afterward, stress was induced in the experimental group through a difficult puzzle-solving task within a limited period. In the second condition, both groups were presented with the earlier photographs along with thirty additional new photographs, and the subjects were asked to recognize the photographs shown earlier. The analyzed data revealed that the control group had a higher mean facial recognition score than the experimental group. The results are discussed in the present research. 
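<p class="card-text">The group comparison reported above, control versus stress-induced, is the kind of analysis an independent two-sample t-test supports. The following Python sketch uses hypothetical recognition scores; the paper's actual data and statistical test are not given in the abstract.</p> <pre><code>import numpy as np
from scipy.stats import ttest_ind

# Hypothetical hits (out of 20 target photos) per guard in each group
control = np.array([16, 15, 17, 14, 18, 16, 15, 17])       # no induced stress
experimental = np.array([12, 11, 14, 10, 13, 12, 11, 13])  # stress-induced

# Independent two-sample t-test for the randomized two-group design
t_stat, p_value = ttest_ind(control, experimental)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
</code></pre>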
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=experimentally%20induced%20stress" title="experimentally induced stress">experimentally induced stress</a>, <a href="https://publications.waset.org/abstracts/search?q=facial%20recognition" title=" facial recognition"> facial recognition</a>, <a href="https://publications.waset.org/abstracts/search?q=cognition" title=" cognition"> cognition</a>, <a href="https://publications.waset.org/abstracts/search?q=security%20personnel" title=" security personnel"> security personnel</a> </p> <a href="https://publications.waset.org/abstracts/60784/the-effect-of-experimentally-induced-stress-on-facial-recognition-ability-of-security-personnels" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/60784.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">261</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">531</span> Forensic Comparison of Facial Images for Human Identification </h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=D.%20P.%20Gangwar">D. P. Gangwar</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Identification of human through facial images has got great importance in forensic science. The video recordings, CCTV footage, passports, driver licenses and other related documents are invariably sent to the laboratory for comparison of the questioned photographs as well as video recordings with suspected photographs/recordings to prove the identity of a person. More than 300 questioned and 300 control photographs received in actual crime cases, received from various investigation agencies, have been compared by me so far using various familiar analysis and comparison techniques such as Holistic comparison, Morphological analysis, Photo-anthropometry and superimposition. On the basis of findings obtained during the examination huge photo exhibits, a realistic and comprehensive technique has been proposed which could be very useful for forensic. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=CCTV%20Images" title="CCTV Images">CCTV Images</a>, <a href="https://publications.waset.org/abstracts/search?q=facial%20features" title=" facial features"> facial features</a>, <a href="https://publications.waset.org/abstracts/search?q=photo-anthropometry" title=" photo-anthropometry"> photo-anthropometry</a>, <a href="https://publications.waset.org/abstracts/search?q=superimposition" title=" superimposition"> superimposition</a> </p> <a href="https://publications.waset.org/abstracts/31353/forensic-comparison-of-facial-images-for-human-identification" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/31353.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">529</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">530</span> Tensor Deep Stacking Neural Networks and Bilinear Mapping Based Speech Emotion Classification Using Facial Electromyography</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=P.%20S.%20Jagadeesh%20Kumar">P. S. Jagadeesh Kumar</a>, <a href="https://publications.waset.org/abstracts/search?q=Yang%20Yung"> Yang Yung</a>, <a href="https://publications.waset.org/abstracts/search?q=Wenli%20Hu"> Wenli Hu</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Speech emotion classification is a dominant research field in finding a sturdy and profligate classifier appropriate for different real-life applications. This effort accentuates on classifying different emotions from speech signal quarried from the features related to pitch, formants, energy contours, jitter, shimmer, spectral, perceptual and temporal features. Tensor deep stacking neural networks were supported to examine the factors that influence the classification success rate. Facial electromyography signals were composed of several forms of focuses in a controlled atmosphere by means of audio-visual stimuli. Proficient facial electromyography signals were pre-processed using moving average filter, and a set of arithmetical features were excavated. Extracted features were mapped into consistent emotions using bilinear mapping. With facial electromyography signals, a database comprising diverse emotions will be exposed with a suitable fine-tuning of features and training data. A success rate of 92% can be attained deprived of increasing the system connivance and the computation time for sorting diverse emotional states. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=speech%20emotion%20classification" title="speech emotion classification">speech emotion classification</a>, <a href="https://publications.waset.org/abstracts/search?q=tensor%20deep%20stacking%20neural%20networks" title=" tensor deep stacking neural networks"> tensor deep stacking neural networks</a>, <a href="https://publications.waset.org/abstracts/search?q=facial%20electromyography" title=" facial electromyography"> facial electromyography</a>, <a href="https://publications.waset.org/abstracts/search?q=bilinear%20mapping" title=" bilinear mapping"> bilinear mapping</a>, <a href="https://publications.waset.org/abstracts/search?q=audio-visual%20stimuli" title=" audio-visual stimuli"> audio-visual stimuli</a> </p> <a href="https://publications.waset.org/abstracts/78499/tensor-deep-stacking-neural-networks-and-bilinear-mapping-based-speech-emotion-classification-using-facial-electromyography" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/78499.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">254</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">529</span> A Neuron Model of Facial Recognition and Detection of an Authorized Entity Using Machine Learning System</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=J.%20K.%20Adedeji">J. K. Adedeji</a>, <a href="https://publications.waset.org/abstracts/search?q=M.%20O.%20Oyekanmi"> M. O. Oyekanmi</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This paper has critically examined the use of Machine Learning procedures in curbing unauthorized access into valuable areas of an organization. The use of passwords, pin codes, user&rsquo;s identification in recent times has been partially successful in curbing crimes involving identities, hence the need for the design of a system which incorporates biometric characteristics such as DNA and pattern recognition of variations in facial expressions. The facial model used is the OpenCV library which is based on the use of certain physiological features, the Raspberry Pi 3 module is used to compile the OpenCV library, which extracts and stores the detected faces into the datasets directory through the use of camera. The model is trained with 50 epoch run in the database and recognized by the Local Binary Pattern Histogram (LBPH) recognizer contained in the OpenCV. The training algorithm used by the neural network is back propagation coded using python algorithmic language with 200 epoch runs to identify specific resemblance in the exclusive OR (XOR) output neurons. The research however confirmed that physiological parameters are better effective measures to curb crimes relating to identities. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=biometric%20characters" title="biometric characters">biometric characters</a>, <a href="https://publications.waset.org/abstracts/search?q=facial%20recognition" title=" facial recognition"> facial recognition</a>, <a href="https://publications.waset.org/abstracts/search?q=neural%20network" title=" neural network"> neural network</a>, <a href="https://publications.waset.org/abstracts/search?q=OpenCV" title=" OpenCV"> OpenCV</a> </p> <a href="https://publications.waset.org/abstracts/93018/a-neuron-model-of-facial-recognition-and-detection-of-an-authorized-entity-using-machine-learning-system" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/93018.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">256</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">528</span> Facial Recognition on the Basis of Facial Fragments</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Tetyana%20Baydyk">Tetyana Baydyk</a>, <a href="https://publications.waset.org/abstracts/search?q=Ernst%20Kussul"> Ernst Kussul</a>, <a href="https://publications.waset.org/abstracts/search?q=Sandra%20Bonilla%20Meza"> Sandra Bonilla Meza</a> </p> <p class="card-text"><strong>Abstract:</strong></p> There are many articles that attempt to establish the role of different facial fragments in face recognition. Various approaches are used to estimate this role. Frequently, authors calculate the entropy corresponding to the fragment. This approach can only give approximate estimation. In this paper, we propose to use a more direct measure of the importance of different fragments for face recognition. We propose to select a recognition method and a face database and experimentally investigate the recognition rate using different fragments of faces. We present two such experiments in the paper. We selected the PCNC neural classifier as a method for face recognition and parts of the LFW (Labeled Faces in the Wild<em>) </em>face database as training and testing sets. The recognition rate of the best experiment is comparable with the recognition rate obtained using the whole face. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=face%20recognition" title="face recognition">face recognition</a>, <a href="https://publications.waset.org/abstracts/search?q=labeled%20faces%20in%20the%20wild%20%28LFW%29%20database" title=" labeled faces in the wild (LFW) database"> labeled faces in the wild (LFW) database</a>, <a href="https://publications.waset.org/abstracts/search?q=random%20local%20descriptor%20%28RLD%29" title=" random local descriptor (RLD)"> random local descriptor (RLD)</a>, <a href="https://publications.waset.org/abstracts/search?q=random%20features" title=" random features"> random features</a> </p> <a href="https://publications.waset.org/abstracts/50117/facial-recognition-on-the-basis-of-facial-fragments" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/50117.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">360</span> </span> </div> </div> <ul class="pagination"> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=facial%20swelling&amp;page=1" rel="prev">&lsaquo;</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=facial%20swelling&amp;page=1">1</a></li> <li class="page-item active"><span class="page-link">2</span></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=facial%20swelling&amp;page=3">3</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=facial%20swelling&amp;page=4">4</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=facial%20swelling&amp;page=5">5</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=facial%20swelling&amp;page=6">6</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=facial%20swelling&amp;page=7">7</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=facial%20swelling&amp;page=8">8</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=facial%20swelling&amp;page=9">9</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=facial%20swelling&amp;page=10">10</a></li> <li class="page-item disabled"><span class="page-link">...</span></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=facial%20swelling&amp;page=19">19</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=facial%20swelling&amp;page=20">20</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=facial%20swelling&amp;page=3" rel="next">&rsaquo;</a></li> </ul> </div> </main> <footer> <div id="infolinks" class="pt-3 pb-2"> <div class="container"> <div style="background-color:#f5f5f5;" class="p-3"> <div class="row"> <div class="col-md-2"> <ul class="list-unstyled"> About <li><a href="https://waset.org/page/support">About Us</a></li> <li><a href="https://waset.org/page/support#legal-information">Legal</a></li> <li><a target="_blank" rel="nofollow" 
href="https://publications.waset.org/static/files/WASET-16th-foundational-anniversary.pdf">WASET celebrates its 16th foundational anniversary</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Account <li><a href="https://waset.org/profile">My Account</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Explore <li><a href="https://waset.org/disciplines">Disciplines</a></li> <li><a href="https://waset.org/conferences">Conferences</a></li> <li><a href="https://waset.org/conference-programs">Conference Program</a></li> <li><a href="https://waset.org/committees">Committees</a></li> <li><a href="https://publications.waset.org">Publications</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Research <li><a href="https://publications.waset.org/abstracts">Abstracts</a></li> <li><a href="https://publications.waset.org">Periodicals</a></li> <li><a href="https://publications.waset.org/archive">Archive</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Open Science <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Science-Philosophy.pdf">Open Science Philosophy</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Science-Award.pdf">Open Science Award</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Society-Open-Science-and-Open-Innovation.pdf">Open Innovation</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Postdoctoral-Fellowship-Award.pdf">Postdoctoral Fellowship Award</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Scholarly-Research-Review.pdf">Scholarly Research Review</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Support <li><a href="https://waset.org/page/support">Support</a></li> <li><a href="https://waset.org/profile/messages/create">Contact Us</a></li> <li><a href="https://waset.org/profile/messages/create">Report Abuse</a></li> </ul> </div> </div> </div> </div> </div> <div class="container text-center"> <hr style="margin-top:0;margin-bottom:.3rem;"> <a href="https://creativecommons.org/licenses/by/4.0/" target="_blank" class="text-muted small">Creative Commons Attribution 4.0 International License</a> <div id="copy" class="mt-2">&copy; 2024 World Academy of Science, Engineering and Technology</div> </div> </footer> <a href="javascript:" id="return-to-top"><i class="fas fa-arrow-up"></i></a> <div class="modal" id="modal-template"> <div class="modal-dialog"> <div class="modal-content"> <div class="row m-0 mt-1"> <div class="col-md-12"> <button type="button" class="close" data-dismiss="modal" aria-label="Close"><span aria-hidden="true">&times;</span></button> </div> </div> <div class="modal-body"></div> </div> </div> </div> <script src="https://cdn.waset.org/static/plugins/jquery-3.3.1.min.js"></script> <script src="https://cdn.waset.org/static/plugins/bootstrap-4.2.1/js/bootstrap.bundle.min.js"></script> <script src="https://cdn.waset.org/static/js/site.js?v=150220211556"></script> <script> jQuery(document).ready(function() { /*jQuery.get("https://publications.waset.org/xhr/user-menu", function (response) { jQuery('#mainNavMenu').append(response); });*/ jQuery.get({ url: "https://publications.waset.org/xhr/user-menu", cache: false }).then(function(response){ jQuery('#mainNavMenu').append(response); }); }); </script> </body> </html>
