CINXE.COM
Search results for: face detection
<!DOCTYPE html> <html lang="en" dir="ltr"> <head> <!-- Google tag (gtag.js) --> <script async src="https://www.googletagmanager.com/gtag/js?id=G-P63WKM1TM1"></script> <script> window.dataLayer = window.dataLayer || []; function gtag(){dataLayer.push(arguments);} gtag('js', new Date()); gtag('config', 'G-P63WKM1TM1'); </script> <!-- Yandex.Metrika counter --> <script type="text/javascript" > (function(m,e,t,r,i,k,a){m[i]=m[i]||function(){(m[i].a=m[i].a||[]).push(arguments)}; m[i].l=1*new Date(); for (var j = 0; j < document.scripts.length; j++) {if (document.scripts[j].src === r) { return; }} k=e.createElement(t),a=e.getElementsByTagName(t)[0],k.async=1,k.src=r,a.parentNode.insertBefore(k,a)}) (window, document, "script", "https://mc.yandex.ru/metrika/tag.js", "ym"); ym(55165297, "init", { clickmap:false, trackLinks:true, accurateTrackBounce:true, webvisor:false }); </script> <noscript><div><img src="https://mc.yandex.ru/watch/55165297" style="position:absolute; left:-9999px;" alt="" /></div></noscript> <!-- /Yandex.Metrika counter --> <!-- Matomo --> <!-- End Matomo Code --> <title>Search results for: face detection</title> <meta name="description" content="Search results for: face detection"> <meta name="keywords" content="face detection"> <meta name="viewport" content="width=device-width, initial-scale=1, minimum-scale=1, maximum-scale=1, user-scalable=no"> <meta charset="utf-8"> <link href="https://cdn.waset.org/favicon.ico" type="image/x-icon" rel="shortcut icon"> <link href="https://cdn.waset.org/static/plugins/bootstrap-4.2.1/css/bootstrap.min.css" rel="stylesheet"> <link href="https://cdn.waset.org/static/plugins/fontawesome/css/all.min.css" rel="stylesheet"> <link href="https://cdn.waset.org/static/css/site.css?v=150220211555" rel="stylesheet"> </head> <body> <header> <div class="container"> <nav class="navbar navbar-expand-lg navbar-light"> <a class="navbar-brand" href="https://waset.org"> <img src="https://cdn.waset.org/static/images/wasetc.png" alt="Open Science Research Excellence" title="Open Science Research Excellence" /> </a> <button class="d-block d-lg-none navbar-toggler ml-auto" type="button" data-toggle="collapse" data-target="#navbarMenu" aria-controls="navbarMenu" aria-expanded="false" aria-label="Toggle navigation"> <span class="navbar-toggler-icon"></span> </button> <div class="w-100"> <div class="d-none d-lg-flex flex-row-reverse"> <form method="get" action="https://waset.org/search" class="form-inline my-2 my-lg-0"> <input class="form-control mr-sm-2" type="search" placeholder="Search Conferences" value="face detection" name="q" aria-label="Search"> <button class="btn btn-light my-2 my-sm-0" type="submit"><i class="fas fa-search"></i></button> </form> </div> <div class="collapse navbar-collapse mt-1" id="navbarMenu"> <ul class="navbar-nav ml-auto align-items-center" id="mainNavMenu"> <li class="nav-item"> <a class="nav-link" href="https://waset.org/conferences" title="Conferences in 2024/2025/2026">Conferences</a> </li> <li class="nav-item"> <a class="nav-link" href="https://waset.org/disciplines" title="Disciplines">Disciplines</a> </li> <li class="nav-item"> <a class="nav-link" href="https://waset.org/committees" rel="nofollow">Committees</a> </li> <li class="nav-item dropdown"> <a class="nav-link dropdown-toggle" href="#" id="navbarDropdownPublications" role="button" data-toggle="dropdown" aria-haspopup="true" aria-expanded="false"> Publications </a> <div class="dropdown-menu" aria-labelledby="navbarDropdownPublications"> <a class="dropdown-item" href="https://publications.waset.org/abstracts">Abstracts</a> <a class="dropdown-item" href="https://publications.waset.org">Periodicals</a> <a class="dropdown-item" href="https://publications.waset.org/archive">Archive</a> </div> </li> <li class="nav-item"> <a class="nav-link" href="https://waset.org/page/support" title="Support">Support</a> </li> </ul> </div> </div> </nav> </div> </header> <main> <div class="container mt-4"> <div class="row"> <div class="col-md-9 mx-auto"> <form method="get" action="https://publications.waset.org/abstracts/search"> <div id="custom-search-input"> <div class="input-group"> <i class="fas fa-search"></i> <input type="text" class="search-query" name="q" placeholder="Author, Title, Abstract, Keywords" value="face detection"> <input type="submit" class="btn_search" value="Search"> </div> </div> </form> </div> </div> <div class="row mt-3"> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Commenced</strong> in January 2007</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Frequency:</strong> Monthly</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Edition:</strong> International</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Paper Count:</strong> 6078</div> </div> </div> </div> <h1 class="mt-3 mb-3 text-center" style="font-size:1.6rem;">Search results for: face detection</h1> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">6078</span> The Effect of Pixelation on Face Detection: Evidence from Eye Movements </h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Kaewmart%20Pongakkasira">Kaewmart Pongakkasira</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This study investigated how different levels of pixelation affect face detection in natural scenes. Eye movements and reaction times, while observers searched for faces in natural scenes rendered in different ranges of pixels, were recorded. Detection performance for coarse visual detail at lower pixel size (3 x 3) was better than with very blurred detail carried by higher pixel size (9 x 9). The result is consistent with the notion that face detection relies on gross detail information of face-shape template, containing crude shape structure and features. In contrast, detection was impaired when face shape and features are obscured. However, it was considered that the degradation of scenic information might also contribute to the effect. In the next experiment, a more direct measurement of the effect of pixelation on face detection, only the embedded face photographs, but not the scene background, will be filtered. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=eye%20movements" title="eye movements">eye movements</a>, <a href="https://publications.waset.org/abstracts/search?q=face%20detection" title=" face detection"> face detection</a>, <a href="https://publications.waset.org/abstracts/search?q=face-shape%20information" title=" face-shape information"> face-shape information</a>, <a href="https://publications.waset.org/abstracts/search?q=pixelation" title=" pixelation"> pixelation</a> </p> <a href="https://publications.waset.org/abstracts/54704/the-effect-of-pixelation-on-face-detection-evidence-from-eye-movements" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/54704.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">317</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">6077</span> Improvements in OpenCV's Viola Jones Algorithm in Face Detection–Skin Detection</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Jyoti%20Bharti">Jyoti Bharti</a>, <a href="https://publications.waset.org/abstracts/search?q=M.%20K.%20Gupta"> M. K. Gupta</a>, <a href="https://publications.waset.org/abstracts/search?q=Astha%20Jain"> Astha Jain</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This paper proposes a new improved approach for false positives filtering of detected face images on OpenCV’s Viola Jones Algorithm In this approach, for Filtering of False Positives, Skin Detection in two colour spaces i.e. HSV (Hue, Saturation and Value) and YCrCb (Y is luma component and Cr- red difference, Cb- Blue difference) is used. As a result, it is found that false detection has been reduced. Our proposed method reaches the accuracy of about 98.7%. Thus, a better recognition rate is achieved. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=face%20detection" title="face detection">face detection</a>, <a href="https://publications.waset.org/abstracts/search?q=Viola%20Jones" title=" Viola Jones"> Viola Jones</a>, <a href="https://publications.waset.org/abstracts/search?q=false%20positives" title=" false positives"> false positives</a>, <a href="https://publications.waset.org/abstracts/search?q=OpenCV" title=" OpenCV"> OpenCV</a> </p> <a href="https://publications.waset.org/abstracts/48849/improvements-in-opencvs-viola-jones-algorithm-in-face-detection-skin-detection" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/48849.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">406</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">6076</span> An MrPPG Method for Face Anti-Spoofing</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Lan%20Zhang">Lan Zhang</a>, <a href="https://publications.waset.org/abstracts/search?q=Cailing%20Zhang"> Cailing Zhang</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In recent years, many face anti-spoofing algorithms have high detection accuracy when detecting 2D face anti-spoofing or 3D mask face anti-spoofing alone in the field of face anti-spoofing, but their detection performance is greatly reduced in multidimensional and cross-datasets tests. The rPPG method used for face anti-spoofing uses the unique vital information of real face to judge real faces and face anti-spoofing, so rPPG method has strong stability compared with other methods, but its detection rate of 2D face anti-spoofing needs to be improved. Therefore, in this paper, we improve an rPPG(Remote Photoplethysmography) method(MrPPG) for face anti-spoofing which through color space fusion, using the correlation of pulse signals between real face regions and background regions, and introducing the cyclic neural network (LSTM) method to improve accuracy in 2D face anti-spoofing. Meanwhile, the MrPPG also has high accuracy and good stability in face anti-spoofing of multi-dimensional and cross-data datasets. The improved method was validated on Replay-Attack, CASIA-FASD, Siw and HKBU_MARs_V2 datasets, the experimental results show that the performance and stability of the improved algorithm proposed in this paper is superior to many advanced algorithms. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=face%20anti-spoofing" title="face anti-spoofing">face anti-spoofing</a>, <a href="https://publications.waset.org/abstracts/search?q=face%20presentation%20attack%20detection" title=" face presentation attack detection"> face presentation attack detection</a>, <a href="https://publications.waset.org/abstracts/search?q=remote%20photoplethysmography" title=" remote photoplethysmography"> remote photoplethysmography</a>, <a href="https://publications.waset.org/abstracts/search?q=MrPPG" title=" MrPPG"> MrPPG</a> </p> <a href="https://publications.waset.org/abstracts/144563/an-mrppg-method-for-face-anti-spoofing" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/144563.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">178</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">6075</span> Design and Implementation of an Image Based System to Enhance the Security of ATM</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Seyed%20Nima%20Tayarani%20Bathaie">Seyed Nima Tayarani Bathaie</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In this paper, an image-receiving system was designed and implemented through optimization of object detection algorithms using Haar features. This optimized algorithm served as face and eye detection separately. Then, cascading them led to a clear image of the user. Utilization of this feature brought about higher security by preventing fraud. This attribute results from the fact that services will be given to the user on condition that a clear image of his face has already been captured which would exclude the inappropriate person. In order to expedite processing and eliminating unnecessary ones, the input image was compressed, a motion detection function was included in the program, and detection window size was confined. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=face%20detection%20algorithm" title="face detection algorithm">face detection algorithm</a>, <a href="https://publications.waset.org/abstracts/search?q=Haar%20features" title=" Haar features"> Haar features</a>, <a href="https://publications.waset.org/abstracts/search?q=security%20of%20ATM" title=" security of ATM"> security of ATM</a> </p> <a href="https://publications.waset.org/abstracts/3011/design-and-implementation-of-an-image-based-system-to-enhance-the-security-of-atm" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/3011.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">419</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">6074</span> An Erudite Technique for Face Detection and Recognition Using Curvature Analysis</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=S.%20Jagadeesh%20Kumar">S. Jagadeesh Kumar</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Face detection and recognition is an authoritative technology for image database management, video surveillance, and human computer interface (HCI). Face recognition is a rapidly nascent method, which has been extensively discarded in forensics such as felonious identification, tenable entree, and custodial security. This paper recommends an erudite technique using curvature analysis (CA) that has less false positives incidence, operative in different light environments and confiscates the artifacts that are introduced during image acquisition by ring correction in polar coordinate (RCP) method. This technique affronts mean and median filtering technique to remove the artifacts but it works in polar coordinate during image acquisition. Investigational fallouts for face detection and recognition confirms decent recitation even in diagonal orientation and stance variation. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=curvature%20analysis" title="curvature analysis">curvature analysis</a>, <a href="https://publications.waset.org/abstracts/search?q=ring%20correction%20in%20polar%20coordinate%20method" title=" ring correction in polar coordinate method"> ring correction in polar coordinate method</a>, <a href="https://publications.waset.org/abstracts/search?q=face%20detection" title=" face detection"> face detection</a>, <a href="https://publications.waset.org/abstracts/search?q=face%20recognition" title=" face recognition"> face recognition</a>, <a href="https://publications.waset.org/abstracts/search?q=human%20computer%20interaction" title=" human computer interaction"> human computer interaction</a> </p> <a href="https://publications.waset.org/abstracts/70748/an-erudite-technique-for-face-detection-and-recognition-using-curvature-analysis" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/70748.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">286</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">6073</span> Prevention of Road Accidents by Computerized Drowsiness Detection System </h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Ujjal%20Chattaraj">Ujjal Chattaraj</a>, <a href="https://publications.waset.org/abstracts/search?q=P.%20C.%20Dasbebartta"> P. C. Dasbebartta</a>, <a href="https://publications.waset.org/abstracts/search?q=S.%20Bhuyan"> S. Bhuyan</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This paper aims to propose a method to detect the action of the driver’s eyes, using the concept of face detection. There are three major key contributing methods which can rapidly process the framework of the facial image and hence produce results which further can program the reactions of the vehicles as pre-programmed for the traffic safety. This paper compares and analyses the methods on the basis of their reaction time and their ability to deal with fluctuating images of the driver. The program used in this study is simple and efficient, built using the AdaBoost learning algorithm. Through this program, the system would be able to discard background regions and focus on the face-like regions. The results are analyzed on a common computer which makes it feasible for the end users. The application domain of this experiment is quite wide, such as detection of drowsiness or influence of alcohols in drivers or detection for the case of identification. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=AdaBoost%20learning%20algorithm" title="AdaBoost learning algorithm">AdaBoost learning algorithm</a>, <a href="https://publications.waset.org/abstracts/search?q=face%20detection" title=" face detection"> face detection</a>, <a href="https://publications.waset.org/abstracts/search?q=framework" title=" framework"> framework</a>, <a href="https://publications.waset.org/abstracts/search?q=traffic%20safety" title=" traffic safety"> traffic safety</a> </p> <a href="https://publications.waset.org/abstracts/97960/prevention-of-road-accidents-by-computerized-drowsiness-detection-system" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/97960.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">157</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">6072</span> Analysis of Facial Expressions with Amazon Rekognition</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Kashika%20P.%20H.">Kashika P. H.</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The development of computer vision systems has been greatly aided by the efficient and precise detection of images and videos. Although the ability to recognize and comprehend images is a strength of the human brain, employing technology to tackle this issue is exceedingly challenging. In the past few years, the use of Deep Learning algorithms to treat object detection has dramatically expanded. One of the key issues in the realm of image recognition is the recognition and detection of certain notable people from randomly acquired photographs. Face recognition uses a way to identify, assess, and compare faces for a variety of purposes, including user identification, user counting, and classification. With the aid of an accessible deep learning-based API, this article intends to recognize various faces of people and their facial descriptors more accurately. The purpose of this study is to locate suitable individuals and deliver accurate information about them by using the Amazon Rekognition system to identify a specific human from a vast image dataset. We have chosen the Amazon Rekognition system, which allows for more accurate face analysis, face comparison, and face search, to tackle this difficulty. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=Amazon%20rekognition" title="Amazon rekognition">Amazon rekognition</a>, <a href="https://publications.waset.org/abstracts/search?q=API" title=" API"> API</a>, <a href="https://publications.waset.org/abstracts/search?q=deep%20learning" title=" deep learning"> deep learning</a>, <a href="https://publications.waset.org/abstracts/search?q=computer%20vision" title=" computer vision"> computer vision</a>, <a href="https://publications.waset.org/abstracts/search?q=face%20detection" title=" face detection"> face detection</a>, <a href="https://publications.waset.org/abstracts/search?q=text%20detection" title=" text detection"> text detection</a> </p> <a href="https://publications.waset.org/abstracts/174012/analysis-of-facial-expressions-with-amazon-rekognition" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/174012.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">104</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">6071</span> Burnout Recognition for Call Center Agents by Using Skin Color Detection with Hand Poses </h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=El%20Sayed%20A.%20Sharara">El Sayed A. Sharara</a>, <a href="https://publications.waset.org/abstracts/search?q=A.%20Tsuji"> A. Tsuji</a>, <a href="https://publications.waset.org/abstracts/search?q=K.%20Terada"> K. Terada</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Call centers have been expanding and they have influence on activation in various markets increasingly. A call center’s work is known as one of the most demanding and stressful jobs. In this paper, we propose the fatigue detection system in order to detect burnout of call center agents in the case of a neck pain and upper back pain. Our proposed system is based on the computer vision technique combined skin color detection with the Viola-Jones object detector. To recognize the gesture of hand poses caused by stress sign, the YCbCr color space is used to detect the skin color region including face and hand poses around the area related to neck ache and upper back pain. A cascade of clarifiers by Viola-Jones is used for face recognition to extract from the skin color region. The detection of hand poses is given by the evaluation of neck pain and upper back pain by using skin color detection and face recognition method. The system performance is evaluated using two groups of dataset created in the laboratory to simulate call center environment. Our call center agent burnout detection system has been implemented by using a web camera and has been processed by MATLAB. From the experimental results, our system achieved 96.3% for upper back pain detection and 94.2% for neck pain detection. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=call%20center%20agents" title="call center agents">call center agents</a>, <a href="https://publications.waset.org/abstracts/search?q=fatigue" title=" fatigue"> fatigue</a>, <a href="https://publications.waset.org/abstracts/search?q=skin%20color%20detection" title=" skin color detection"> skin color detection</a>, <a href="https://publications.waset.org/abstracts/search?q=face%20recognition" title=" face recognition"> face recognition</a> </p> <a href="https://publications.waset.org/abstracts/74913/burnout-recognition-for-call-center-agents-by-using-skin-color-detection-with-hand-poses" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/74913.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">293</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">6070</span> Training of Future Computer Science Teachers Based on Machine Learning Methods</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Meruert%20Serik">Meruert Serik</a>, <a href="https://publications.waset.org/abstracts/search?q=Nassipzhan%20Duisegaliyeva"> Nassipzhan Duisegaliyeva</a>, <a href="https://publications.waset.org/abstracts/search?q=Danara%20Tleumagambetova"> Danara Tleumagambetova</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The article highlights and describes the characteristic features of real-time face detection in images and videos using machine learning algorithms. Students of educational programs reviewed the research work "6B01511-Computer Science", "7M01511-Computer Science", "7M01525- STEM Education," and "8D01511-Computer Science" of Eurasian National University named after L.N. Gumilyov. As a result, the advantages and disadvantages of Haar Cascade (Haar Cascade OpenCV), HoG SVM (Histogram of Oriented Gradients, Support Vector Machine), and MMOD CNN Dlib (Max-Margin Object Detection, convolutional neural network) detectors used for face detection were determined. Dlib is a general-purpose cross-platform software library written in the programming language C++. It includes detectors used for determining face detection. The Cascade OpenCV algorithm is efficient for fast face detection. The considered work forms the basis for the development of machine learning methods by future computer science teachers. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=algorithm" title="algorithm">algorithm</a>, <a href="https://publications.waset.org/abstracts/search?q=artificial%20intelligence" title=" artificial intelligence"> artificial intelligence</a>, <a href="https://publications.waset.org/abstracts/search?q=education" title=" education"> education</a>, <a href="https://publications.waset.org/abstracts/search?q=machine%20learning" title=" machine learning"> machine learning</a> </p> <a href="https://publications.waset.org/abstracts/170539/training-of-future-computer-science-teachers-based-on-machine-learning-methods" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/170539.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">73</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">6069</span> A Dynamic Neural Network Model for Accurate Detection of Masked Faces</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Oladapo%20Tolulope%20Ibitoye">Oladapo Tolulope Ibitoye</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Neural networks have become prominent and widely engaged in algorithmic-based machine learning networks. They are perfect in solving day-to-day issues to a certain extent. Neural networks are computing systems with several interconnected nodes. One of the numerous areas of application of neural networks is object detection. This is a prominent area due to the coronavirus disease pandemic and the post-pandemic phases. Wearing a face mask in public slows the spread of the virus, according to experts’ submission. This calls for the development of a reliable and effective model for detecting face masks on people's faces during compliance checks. The existing neural network models for facemask detection are characterized by their black-box nature and large dataset requirement. The highlighted challenges have compromised the performance of the existing models. The proposed model utilized Faster R-CNN Model on Inception V3 backbone to reduce system complexity and dataset requirement. The model was trained and validated with very few datasets and evaluation results shows an overall accuracy of 96% regardless of skin tone. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=convolutional%20neural%20network" title="convolutional neural network">convolutional neural network</a>, <a href="https://publications.waset.org/abstracts/search?q=face%20detection" title=" face detection"> face detection</a>, <a href="https://publications.waset.org/abstracts/search?q=face%20mask" title=" face mask"> face mask</a>, <a href="https://publications.waset.org/abstracts/search?q=masked%20faces" title=" masked faces"> masked faces</a> </p> <a href="https://publications.waset.org/abstracts/163866/a-dynamic-neural-network-model-for-accurate-detection-of-masked-faces" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/163866.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">68</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">6068</span> Face Tracking and Recognition Using Deep Learning Approach</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Degale%20Desta">Degale Desta</a>, <a href="https://publications.waset.org/abstracts/search?q=Cheng%20Jian"> Cheng Jian</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The most important factor in identifying a person is their face. Even identical twins have their own distinct faces. As a result, identification and face recognition are needed to tell one person from another. A face recognition system is a verification tool used to establish a person's identity using biometrics. Nowadays, face recognition is a common technique used in a variety of applications, including home security systems, criminal identification, and phone unlock systems. This system is more secure because it only requires a facial image instead of other dependencies like a key or card. Face detection and face identification are the two phases that typically make up a human recognition system.The idea behind designing and creating a face recognition system using deep learning with Azure ML Python's OpenCV is explained in this paper. Face recognition is a task that can be accomplished using deep learning, and given the accuracy of this method, it appears to be a suitable approach. To show how accurate the suggested face recognition system is, experimental results are given in 98.46% accuracy using Fast-RCNN Performance of algorithms under different training conditions. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=deep%20learning" title="deep learning">deep learning</a>, <a href="https://publications.waset.org/abstracts/search?q=face%20recognition" title=" face recognition"> face recognition</a>, <a href="https://publications.waset.org/abstracts/search?q=identification" title=" identification"> identification</a>, <a href="https://publications.waset.org/abstracts/search?q=fast-RCNN" title=" fast-RCNN"> fast-RCNN</a> </p> <a href="https://publications.waset.org/abstracts/163134/face-tracking-and-recognition-using-deep-learning-approach" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/163134.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">140</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">6067</span> Detection and Classification Strabismus Using Convolutional Neural Network and Spatial Image Processing</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Anoop%20T.%20R.">Anoop T. R.</a>, <a href="https://publications.waset.org/abstracts/search?q=Otman%20Basir"> Otman Basir</a>, <a href="https://publications.waset.org/abstracts/search?q=Robert%20F.%20Hess"> Robert F. Hess</a>, <a href="https://publications.waset.org/abstracts/search?q=Eileen%20E.%20Birch"> Eileen E. Birch</a>, <a href="https://publications.waset.org/abstracts/search?q=Brooke%20A.%20Koritala"> Brooke A. Koritala</a>, <a href="https://publications.waset.org/abstracts/search?q=Reed%20M.%20Jost"> Reed M. Jost</a>, <a href="https://publications.waset.org/abstracts/search?q=Becky%20Luu"> Becky Luu</a>, <a href="https://publications.waset.org/abstracts/search?q=David%20Stager"> David Stager</a>, <a href="https://publications.waset.org/abstracts/search?q=Ben%20Thompson"> Ben Thompson</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Strabismus refers to a misalignment of the eyes. Early detection and treatment of strabismus in childhood can prevent the development of permanent vision loss due to abnormal development of visual brain areas. We developed a two-stage method for strabismus detection and classification based on photographs of the face. The first stage detects the presence or absence of strabismus, and the second stage classifies the type of strabismus. The first stage comprises face detection using Haar cascade, facial landmark estimation, face alignment, aligned face landmark detection, segmentation of the eye region, and detection of strabismus using VGG 16 convolution neural networks. Face alignment transforms the face to a canonical pose to ensure consistency in subsequent analysis. Using facial landmarks, the eye region is segmented from the aligned face and fed into a VGG 16 CNN model, which has been trained to classify strabismus. The CNN determines whether strabismus is present and classifies the type of strabismus (exotropia, esotropia, and vertical deviation). If stage 1 detects strabismus, the eye region image is fed into stage 2, which starts with the estimation of pupil center coordinates using mask R-CNN deep neural networks. Then, the distance between the pupil coordinates and eye landmarks is calculated along with the angle that the pupil coordinates make with the horizontal and vertical axis. The distance and angle information is used to characterize the degree and direction of the strabismic eye misalignment. This model was tested on 100 clinically labeled images of children with (n = 50) and without (n = 50) strabismus. The True Positive Rate (TPR) and False Positive Rate (FPR) of the first stage were 94% and 6% respectively. The classification stage has produced a TPR of 94.73%, 94.44%, and 100% for esotropia, exotropia, and vertical deviations, respectively. This method also had an FPR of 5.26%, 5.55%, and 0% for esotropia, exotropia, and vertical deviation, respectively. The addition of one more feature related to the location of corneal light reflections may reduce the FPR, which was primarily due to children with pseudo-strabismus (the appearance of strabismus due to a wide nasal bridge or skin folds on the nasal side of the eyes). <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=strabismus" title="strabismus">strabismus</a>, <a href="https://publications.waset.org/abstracts/search?q=deep%20neural%20networks" title=" deep neural networks"> deep neural networks</a>, <a href="https://publications.waset.org/abstracts/search?q=face%20detection" title=" face detection"> face detection</a>, <a href="https://publications.waset.org/abstracts/search?q=facial%20landmarks" title=" facial landmarks"> facial landmarks</a>, <a href="https://publications.waset.org/abstracts/search?q=face%20alignment" title=" face alignment"> face alignment</a>, <a href="https://publications.waset.org/abstracts/search?q=segmentation" title=" segmentation"> segmentation</a>, <a href="https://publications.waset.org/abstracts/search?q=VGG%2016" title=" VGG 16"> VGG 16</a>, <a href="https://publications.waset.org/abstracts/search?q=mask%20R-CNN" title=" mask R-CNN"> mask R-CNN</a>, <a href="https://publications.waset.org/abstracts/search?q=pupil%20coordinates" title=" pupil coordinates"> pupil coordinates</a>, <a href="https://publications.waset.org/abstracts/search?q=angle%20deviation" title=" angle deviation"> angle deviation</a>, <a href="https://publications.waset.org/abstracts/search?q=horizontal%20and%20vertical%20deviation" title=" horizontal and vertical deviation"> horizontal and vertical deviation</a> </p> <a href="https://publications.waset.org/abstracts/170835/detection-and-classification-strabismus-using-convolutional-neural-network-and-spatial-image-processing" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/170835.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">93</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">6066</span> Strabismus Detection Using Eye Alignment Stability</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Anoop%20T.%20R.">Anoop T. R.</a>, <a href="https://publications.waset.org/abstracts/search?q=Otman%20Basir"> Otman Basir</a>, <a href="https://publications.waset.org/abstracts/search?q=Robert%20F.%20Hess"> Robert F. Hess</a>, <a href="https://publications.waset.org/abstracts/search?q=Ben%20Thompson"> Ben Thompson</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Strabismus refers to a misalignment of the eyes. Early detection and treatment of strabismus in childhood can prevent the development of permanent vision loss due to abnormal development of visual brain areas. Currently, many children with strabismus remain undiagnosed until school entry because current automated screening methods have limited success in the preschool age range. A method for strabismus detection using eye alignment stability (EAS) is proposed. This method starts with face detection, followed by facial landmark detection, eye region segmentation, eye gaze extraction, and eye alignment stability estimation. Binarization and morphological operations are performed for segmenting the pupil region from the eye. After finding the EAS, its absolute value is used to differentiate the strabismic eye from the non-strabismic eye. If the value of the eye alignment stability is greater than a particular threshold, then the eyes are misaligned, and if its value is less than the threshold, the eyes are aligned. The method was tested on 175 strabismic and non-strabismic images obtained from Kaggle and Google Photos. The strabismic eye is taken as a positive class, and the non-strabismic eye is taken as a negative class. The test produced a true positive rate of 100% and a false positive rate of 7.69%. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=strabismus" title="strabismus">strabismus</a>, <a href="https://publications.waset.org/abstracts/search?q=face%20detection" title=" face detection"> face detection</a>, <a href="https://publications.waset.org/abstracts/search?q=facial%20landmarks" title=" facial landmarks"> facial landmarks</a>, <a href="https://publications.waset.org/abstracts/search?q=eye%20segmentation" title=" eye segmentation"> eye segmentation</a>, <a href="https://publications.waset.org/abstracts/search?q=eye%20gaze" title=" eye gaze"> eye gaze</a>, <a href="https://publications.waset.org/abstracts/search?q=binarization" title=" binarization"> binarization</a> </p> <a href="https://publications.waset.org/abstracts/177646/strabismus-detection-using-eye-alignment-stability" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/177646.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">76</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">6065</span> Iris Detection on RGB Image for Controlling Side Mirror</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Norzalina%20Othman">Norzalina Othman</a>, <a href="https://publications.waset.org/abstracts/search?q=Nurul%20Na%E2%80%99imy%20Wan"> Nurul Na’imy Wan</a>, <a href="https://publications.waset.org/abstracts/search?q=Azliza%20Mohd%20Rusli"> Azliza Mohd Rusli</a>, <a href="https://publications.waset.org/abstracts/search?q=Wan%20Noor%20Syahirah%20Meor%20Idris"> Wan Noor Syahirah Meor Idris</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Iris detection is a process where the position of the eyes is extracted from the face images. It is a current method used for many applications such as for security purpose and drowsiness detection. This paper proposes the use of eyes detection in controlling side mirror of motor vehicles. The eyes detection method aims to make driver easy to adjust the side mirrors automatically. The system will determine the midpoint coordinate of eyes detection on RGB (color) image and the input signal from y-coordinate will send it to controller in order to rotate the angle of side mirror on vehicle. The eye position was cropped and the coordinate of midpoint was successfully detected from the circle of iris detection using Viola Jones detection and circular Hough transform methods on RGB image. The coordinate of midpoint from the experiment are tested using controller to determine the angle of rotation on the side mirrors. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=iris%20detection" title="iris detection">iris detection</a>, <a href="https://publications.waset.org/abstracts/search?q=midpoint%20coordinates" title=" midpoint coordinates"> midpoint coordinates</a>, <a href="https://publications.waset.org/abstracts/search?q=RGB%20images" title=" RGB images"> RGB images</a>, <a href="https://publications.waset.org/abstracts/search?q=side%20mirror" title=" side mirror"> side mirror</a> </p> <a href="https://publications.waset.org/abstracts/8133/iris-detection-on-rgb-image-for-controlling-side-mirror" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/8133.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">423</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">6064</span> Towards Integrating Statistical Color Features for Human Skin Detection</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Mohd%20Zamri%20Osman">Mohd Zamri Osman</a>, <a href="https://publications.waset.org/abstracts/search?q=Mohd%20Aizaini%20Maarof"> Mohd Aizaini Maarof</a>, <a href="https://publications.waset.org/abstracts/search?q=Mohd%20Foad%20Rohani"> Mohd Foad Rohani</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Human skin detection recognized as the primary step in most of the applications such as face detection, illicit image filtering, hand recognition and video surveillance. The performance of any skin detection applications greatly relies on the two components: feature extraction and classification method. Skin color is the most vital information used for skin detection purpose. However, color feature alone sometimes could not handle images with having same color distribution with skin color. A color feature of pixel-based does not eliminate the skin-like color due to the intensity of skin and skin-like color fall under the same distribution. Hence, the statistical color analysis will be exploited such mean and standard deviation as an additional feature to increase the reliability of skin detector. In this paper, we studied the effectiveness of statistical color feature for human skin detection. Furthermore, the paper analyzed the integrated color and texture using eight classifiers with three color spaces of RGB, YCbCr, and HSV. The experimental results show that the integrating statistical feature using Random Forest classifier achieved a significant performance with an F1-score 0.969. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=color%20space" title="color space">color space</a>, <a href="https://publications.waset.org/abstracts/search?q=neural%20network" title=" neural network"> neural network</a>, <a href="https://publications.waset.org/abstracts/search?q=random%20forest" title=" random forest"> random forest</a>, <a href="https://publications.waset.org/abstracts/search?q=skin%20detection" title=" skin detection"> skin detection</a>, <a href="https://publications.waset.org/abstracts/search?q=statistical%20feature" title=" statistical feature"> statistical feature</a> </p> <a href="https://publications.waset.org/abstracts/43485/towards-integrating-statistical-color-features-for-human-skin-detection" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/43485.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">462</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">6063</span> Obstacle Detection and Path Tracking Application for Disables</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Aliya%20Ashraf">Aliya Ashraf</a>, <a href="https://publications.waset.org/abstracts/search?q=Mehreen%20Sirshar"> Mehreen Sirshar</a>, <a href="https://publications.waset.org/abstracts/search?q=Fatima%20Akhtar"> Fatima Akhtar</a>, <a href="https://publications.waset.org/abstracts/search?q=Farwa%20Kazmi"> Farwa Kazmi</a>, <a href="https://publications.waset.org/abstracts/search?q=Jawaria%20Wazir"> Jawaria Wazir</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Vision, the basis for performing navigational tasks, is absent or greatly reduced in visually impaired people due to which they face many hurdles. For increasing the navigational capabilities of visually impaired people a desktop application ODAPTA is presented in this paper. The application uses camera to capture video from surroundings, apply various image processing algorithms to get information about path and obstacles, tracks them and delivers that information to user through voice commands. Experimental results show that the application works effectively for straight paths in daylight. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=visually%20impaired" title="visually impaired">visually impaired</a>, <a href="https://publications.waset.org/abstracts/search?q=ODAPTA" title=" ODAPTA"> ODAPTA</a>, <a href="https://publications.waset.org/abstracts/search?q=Region%20of%20Interest%20%28ROI%29" title=" Region of Interest (ROI)"> Region of Interest (ROI)</a>, <a href="https://publications.waset.org/abstracts/search?q=driver%20fatigue" title=" driver fatigue"> driver fatigue</a>, <a href="https://publications.waset.org/abstracts/search?q=face%20detection" title=" face detection"> face detection</a>, <a href="https://publications.waset.org/abstracts/search?q=expression%20recognition" title=" expression recognition"> expression recognition</a>, <a href="https://publications.waset.org/abstracts/search?q=CCD%20camera" title=" CCD camera"> CCD camera</a>, <a href="https://publications.waset.org/abstracts/search?q=artificial%20intelligence" title=" artificial intelligence"> artificial intelligence</a> </p> <a href="https://publications.waset.org/abstracts/19807/obstacle-detection-and-path-tracking-application-for-disables" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/19807.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">549</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">6062</span> Curvelet Features with Mouth and Face Edge Ratios for Facial Expression Identification </h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=S.%20Kherchaoui">S. Kherchaoui</a>, <a href="https://publications.waset.org/abstracts/search?q=A.%20Houacine"> A. Houacine</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This paper presents a facial expression recognition system. It performs identification and classification of the seven basic expressions; happy, surprise, fear, disgust, sadness, anger, and neutral states. It consists of three main parts. The first one is the detection of a face and the corresponding facial features to extract the most expressive portion of the face, followed by a normalization of the region of interest. Then calculus of curvelet coefficients is performed with dimensionality reduction through principal component analysis. The resulting coefficients are combined with two ratios; mouth ratio and face edge ratio to constitute the whole feature vector. The third step is the classification of the emotional state using the SVM method in the feature space. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=facial%20expression%20identification" title="facial expression identification">facial expression identification</a>, <a href="https://publications.waset.org/abstracts/search?q=curvelet%20coefficient" title=" curvelet coefficient"> curvelet coefficient</a>, <a href="https://publications.waset.org/abstracts/search?q=support%20vector%20machine%20%28SVM%29" title=" support vector machine (SVM)"> support vector machine (SVM)</a>, <a href="https://publications.waset.org/abstracts/search?q=recognition%20system" title=" recognition system"> recognition system</a> </p> <a href="https://publications.waset.org/abstracts/10311/curvelet-features-with-mouth-and-face-edge-ratios-for-facial-expression-identification" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/10311.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">232</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">6061</span> Multimodal Employee Attendance Management System</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Khaled%20Mohammed">Khaled Mohammed</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This paper presents novel face recognition and identification approaches for the real-time attendance management problem in large companies/factories and government institutions. The proposed uses the Minimum Ratio (MR) approach for employee identification. Capturing the authentic face variability from a sequence of video frames has been considered for the recognition of faces and resulted in system robustness against the variability of facial features. Experimental results indicated an improvement in the performance of the proposed system compared to the Previous approaches at a rate between 2% to 5%. In addition, it decreased the time two times if compared with the Previous techniques, such as Extreme Learning Machine (ELM) & Multi-Scale Structural Similarity index (MS-SSIM). Finally, it achieved an accuracy of 99%. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=attendance%20management%20system" title="attendance management system">attendance management system</a>, <a href="https://publications.waset.org/abstracts/search?q=face%20detection%20and%20recognition" title=" face detection and recognition"> face detection and recognition</a>, <a href="https://publications.waset.org/abstracts/search?q=live%20face%20recognition" title=" live face recognition"> live face recognition</a>, <a href="https://publications.waset.org/abstracts/search?q=minimum%20ratio" title=" minimum ratio"> minimum ratio</a> </p> <a href="https://publications.waset.org/abstracts/154996/multimodal-employee-attendance-management-system" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/154996.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">155</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">6060</span> Using Machine Learning to Build a Real-Time COVID-19 Mask Safety Monitor</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Yash%20Jain">Yash Jain</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The US Center for Disease Control has recommended wearing masks to slow the spread of the virus. The research uses a video feed from a camera to conduct real-time classifications of whether or not a human is correctly wearing a mask, incorrectly wearing a mask, or not wearing a mask at all. Utilizing two distinct datasets from the open-source website Kaggle, a mask detection network had been trained. The first dataset that was used to train the model was titled 'Face Mask Detection' on Kaggle, where the dataset was retrieved from and the second dataset was titled 'Face Mask Dataset, which provided the data in a (YOLO Format)' so that the TinyYoloV3 model could be trained. Based on the data from Kaggle, two machine learning models were implemented and trained: a Tiny YoloV3 Real-time model and a two-stage neural network classifier. The two-stage neural network classifier had a first step of identifying distinct faces within the image, and the second step was a classifier to detect the state of the mask on the face and whether it was worn correctly, incorrectly, or no mask at all. The TinyYoloV3 was used for the live feed as well as for a comparison standpoint against the previous two-stage classifier and was trained using the darknet neural network framework. The two-stage classifier attained a mean average precision (MAP) of 80%, while the model trained using TinyYoloV3 real-time detection had a mean average precision (MAP) of 59%. Overall, both models were able to correctly classify stages/scenarios of no mask, mask, and incorrectly worn masks. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=datasets" title="datasets">datasets</a>, <a href="https://publications.waset.org/abstracts/search?q=classifier" title=" classifier"> classifier</a>, <a href="https://publications.waset.org/abstracts/search?q=mask-detection" title=" mask-detection"> mask-detection</a>, <a href="https://publications.waset.org/abstracts/search?q=real-time" title=" real-time"> real-time</a>, <a href="https://publications.waset.org/abstracts/search?q=TinyYoloV3" title=" TinyYoloV3"> TinyYoloV3</a>, <a href="https://publications.waset.org/abstracts/search?q=two-stage%20neural%20network%20classifier" title=" two-stage neural network classifier"> two-stage neural network classifier</a> </p> <a href="https://publications.waset.org/abstracts/137207/using-machine-learning-to-build-a-real-time-covid-19-mask-safety-monitor" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/137207.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">161</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">6059</span> Keyframe Extraction Using Face Quality Assessment and Convolution Neural Network</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Rahma%20Abed">Rahma Abed</a>, <a href="https://publications.waset.org/abstracts/search?q=Sahbi%20Bahroun"> Sahbi Bahroun</a>, <a href="https://publications.waset.org/abstracts/search?q=Ezzeddine%20Zagrouba"> Ezzeddine Zagrouba</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Due to the huge amount of data in videos, extracting the relevant frames became a necessity and an essential step prior to performing face recognition. In this context, we propose a method for extracting keyframes from videos based on face quality and deep learning for a face recognition task. This method has two steps. We start by generating face quality scores for each face image based on the use of three face feature extractors, including Gabor, LBP, and HOG. The second step consists in training a Deep Convolutional Neural Network in a supervised manner in order to select the frames that have the best face quality. The obtained results show the effectiveness of the proposed method compared to the methods of the state of the art. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=keyframe%20extraction" title="keyframe extraction">keyframe extraction</a>, <a href="https://publications.waset.org/abstracts/search?q=face%20quality%20assessment" title=" face quality assessment"> face quality assessment</a>, <a href="https://publications.waset.org/abstracts/search?q=face%20in%20video%20recognition" title=" face in video recognition"> face in video recognition</a>, <a href="https://publications.waset.org/abstracts/search?q=convolution%20neural%20network" title=" convolution neural network"> convolution neural network</a> </p> <a href="https://publications.waset.org/abstracts/111347/keyframe-extraction-using-face-quality-assessment-and-convolution-neural-network" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/111347.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">232</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">6058</span> ANAC-id - Facial Recognition to Detect Fraud</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Giovanna%20Borges%20Bottino">Giovanna Borges Bottino</a>, <a href="https://publications.waset.org/abstracts/search?q=Luis%20Felipe%20Freitas%20do%20Nascimento%20Alves%20Teixeira"> Luis Felipe Freitas do Nascimento Alves Teixeira</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This article aims to present a case study of the National Civil Aviation Agency (ANAC) in Brazil, ANAC-id. ANAC-id is the artificial intelligence algorithm developed for image analysis that recognizes standard images of unobstructed and uprighted face without sunglasses, allowing to identify potential inconsistencies. It combines YOLO architecture and 3 libraries in python - face recognition, face comparison, and deep face, providing robust analysis with high level of accuracy. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=artificial%20intelligence" title="artificial intelligence">artificial intelligence</a>, <a href="https://publications.waset.org/abstracts/search?q=deepface" title=" deepface"> deepface</a>, <a href="https://publications.waset.org/abstracts/search?q=face%20compare" title=" face compare"> face compare</a>, <a href="https://publications.waset.org/abstracts/search?q=face%20recognition" title=" face recognition"> face recognition</a>, <a href="https://publications.waset.org/abstracts/search?q=YOLO" title=" YOLO"> YOLO</a>, <a href="https://publications.waset.org/abstracts/search?q=computer%20vision" title=" computer vision"> computer vision</a> </p> <a href="https://publications.waset.org/abstracts/148459/anac-id-facial-recognition-to-detect-fraud" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/148459.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">156</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">6057</span> Adversarial Disentanglement Using Latent Classifier for Pose-Independent Representation</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Hamed%20Alqahtani">Hamed Alqahtani</a>, <a href="https://publications.waset.org/abstracts/search?q=Manolya%20Kavakli-Thorne"> Manolya Kavakli-Thorne</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The large pose discrepancy is one of the critical challenges in face recognition during video surveillance. Due to the entanglement of pose attributes with identity information, the conventional approaches for pose-independent representation lack in providing quality results in recognizing largely posed faces. In this paper, we propose a practical approach to disentangle the pose attribute from the identity information followed by synthesis of a face using a classifier network in latent space. The proposed approach employs a modified generative adversarial network framework consisting of an encoder-decoder structure embedded with a classifier in manifold space for carrying out factorization on the latent encoding. It can be further generalized to other face and non-face attributes for real-life video frames containing faces with significant attribute variations. Experimental results and comparison with state of the art in the field prove that the learned representation of the proposed approach synthesizes more compelling perceptual images through a combination of adversarial and classification losses. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=disentanglement" title="disentanglement">disentanglement</a>, <a href="https://publications.waset.org/abstracts/search?q=face%20detection" title=" face detection"> face detection</a>, <a href="https://publications.waset.org/abstracts/search?q=generative%20adversarial%20networks" title=" generative adversarial networks"> generative adversarial networks</a>, <a href="https://publications.waset.org/abstracts/search?q=video%20surveillance" title=" video surveillance"> video surveillance</a> </p> <a href="https://publications.waset.org/abstracts/108319/adversarial-disentanglement-using-latent-classifier-for-pose-independent-representation" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/108319.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">129</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">6056</span> The Effect of Computer-Mediated vs. Face-to-Face Instruction on L2 Pragmatics: A Meta-Analysis</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Marziyeh%20Yousefi">Marziyeh Yousefi</a>, <a href="https://publications.waset.org/abstracts/search?q=Hossein%20Nassaji"> Hossein Nassaji</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This paper reports the results of a meta-analysis of studies on the effects of instruction mode on learning second language pragmatics during the last decade (from 2006 to 2016). After establishing related inclusion/ exclusion criteria, 39 published studies were retrieved and included in the present meta-analysis. Studies were later coded for face-to-face and computer-assisted mode of instruction. Statistical procedures were applied to obtain effect sizes. It was found that Computer-Assisted-Language-Learning studies generated larger effects than Face-to-Face instruction. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=meta-analysis" title="meta-analysis">meta-analysis</a>, <a href="https://publications.waset.org/abstracts/search?q=effect%20size" title=" effect size"> effect size</a>, <a href="https://publications.waset.org/abstracts/search?q=L2%20pragmatics" title=" L2 pragmatics"> L2 pragmatics</a>, <a href="https://publications.waset.org/abstracts/search?q=comprehensive%20meta-analysis" title=" comprehensive meta-analysis"> comprehensive meta-analysis</a>, <a href="https://publications.waset.org/abstracts/search?q=face-to-face" title=" face-to-face"> face-to-face</a>, <a href="https://publications.waset.org/abstracts/search?q=computer-assisted%20language%20learning" title=" computer-assisted language learning"> computer-assisted language learning</a> </p> <a href="https://publications.waset.org/abstracts/86038/the-effect-of-computer-mediated-vs-face-to-face-instruction-on-l2-pragmatics-a-meta-analysis" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/86038.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">223</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">6055</span> Content Based Face Sketch Images Retrieval in WHT, DCT, and DWT Transform Domain</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=W.%20S.%20Besbas">W. S. Besbas</a>, <a href="https://publications.waset.org/abstracts/search?q=M.%20A.%20Artemi"> M. A. Artemi</a>, <a href="https://publications.waset.org/abstracts/search?q=R.%20M.%20Salman"> R. M. Salman</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Content based face sketch retrieval can be used to find images of criminals from their sketches for 'Crime Prevention'. This paper investigates the problem of CBIR of face sketch images in transform domain. Face sketch images that are similar to the query image are retrieved from the face sketch database. Features of the face sketch image are extracted in the spectrum domain of a selected transforms. These transforms are Discrete Cosine Transform (DCT), Discrete Wavelet Transform (DWT), and Walsh Hadamard Transform (WHT). For the performance analyses of features selection methods three face images databases are used. These are 'Sheffield face database', 'Olivetti Research Laboratory (ORL) face database', and 'Indian face database'. The City block distance measure is used to evaluate the performance of the retrieval process. The investigation concludes that, the retrieval rate is database dependent. But in general, the DCT is the best. On the other hand, the WHT is the best with respect to the speed of retrieving images. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=Content%20Based%20Image%20Retrieval%20%28CBIR%29" title="Content Based Image Retrieval (CBIR)">Content Based Image Retrieval (CBIR)</a>, <a href="https://publications.waset.org/abstracts/search?q=face%20sketch%20image%20retrieval" title=" face sketch image retrieval"> face sketch image retrieval</a>, <a href="https://publications.waset.org/abstracts/search?q=features%20selection%20for%20CBIR" title=" features selection for CBIR"> features selection for CBIR</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20retrieval%20in%20transform%20domain" title=" image retrieval in transform domain"> image retrieval in transform domain</a> </p> <a href="https://publications.waset.org/abstracts/8251/content-based-face-sketch-images-retrieval-in-wht-dct-and-dwt-transform-domain" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/8251.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">493</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">6054</span> Efficient Signal Detection Using QRD-M Based on Channel Condition in MIMO-OFDM System</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Jae-Jeong%20Kim">Jae-Jeong Kim</a>, <a href="https://publications.waset.org/abstracts/search?q=Ki-Ro%20Kim"> Ki-Ro Kim</a>, <a href="https://publications.waset.org/abstracts/search?q=Hyoung-Kyu%20Song"> Hyoung-Kyu Song</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In this paper, we propose an efficient signal detector that switches M parameter of QRD-M detection scheme is proposed for MIMO-OFDM system. The proposed detection scheme calculates the threshold by 1-norm condition number and then switches M parameter of QRD-M detection scheme according to channel information. If channel condition is bad, the parameter M is set to high value to increase the accuracy of detection. If channel condition is good, the parameter M is set to low value to reduce complexity of detection. Therefore, the proposed detection scheme has better trade off between BER performance and complexity than the conventional detection scheme. The simulation result shows that the complexity of proposed detection scheme is lower than QRD-M detection scheme with similar BER performance. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=MIMO-OFDM" title="MIMO-OFDM">MIMO-OFDM</a>, <a href="https://publications.waset.org/abstracts/search?q=QRD-M" title=" QRD-M"> QRD-M</a>, <a href="https://publications.waset.org/abstracts/search?q=channel%20condition" title=" channel condition"> channel condition</a>, <a href="https://publications.waset.org/abstracts/search?q=BER" title=" BER"> BER</a> </p> <a href="https://publications.waset.org/abstracts/3518/efficient-signal-detection-using-qrd-m-based-on-channel-condition-in-mimo-ofdm-system" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/3518.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">370</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">6053</span> Dynamic Log Parsing and Intelligent Anomaly Detection Method Combining Retrieval Augmented Generation and Prompt Engineering</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Liu%20Linxin">Liu Linxin</a> </p> <p class="card-text"><strong>Abstract:</strong></p> As system complexity increases, log parsing and anomaly detection become more and more important in ensuring system stability. However, traditional methods often face the problems of insufficient adaptability and decreasing accuracy when dealing with rapidly changing log contents and unknown domains. To this end, this paper proposes an approach LogRAG, which combines RAG (Retrieval Augmented Generation) technology with Prompt Engineering for Large Language Models, applied to log analysis tasks to achieve dynamic parsing of logs and intelligent anomaly detection. By combining real-time information retrieval and prompt optimisation, this study significantly improves the adaptive capability of log analysis and the interpretability of results. Experimental results show that the method performs well on several public datasets, especially in the absence of training data, and significantly outperforms traditional methods. This paper provides a technical path for log parsing and anomaly detection, demonstrating significant theoretical value and application potential. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=log%20parsing" title="log parsing">log parsing</a>, <a href="https://publications.waset.org/abstracts/search?q=anomaly%20detection" title=" anomaly detection"> anomaly detection</a>, <a href="https://publications.waset.org/abstracts/search?q=retrieval-augmented%20generation" title=" retrieval-augmented generation"> retrieval-augmented generation</a>, <a href="https://publications.waset.org/abstracts/search?q=prompt%20engineering" title=" prompt engineering"> prompt engineering</a>, <a href="https://publications.waset.org/abstracts/search?q=LLMs" title=" LLMs"> LLMs</a> </p> <a href="https://publications.waset.org/abstracts/191047/dynamic-log-parsing-and-intelligent-anomaly-detection-method-combining-retrieval-augmented-generation-and-prompt-engineering" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/191047.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">29</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">6052</span> A Geometric Based Hybrid Approach for Facial Feature Localization</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Priya%20Saha">Priya Saha</a>, <a href="https://publications.waset.org/abstracts/search?q=Sourav%20Dey%20Roy%20Jr."> Sourav Dey Roy Jr.</a>, <a href="https://publications.waset.org/abstracts/search?q=Debotosh%20Bhattacharjee"> Debotosh Bhattacharjee</a>, <a href="https://publications.waset.org/abstracts/search?q=Mita%20Nasipuri"> Mita Nasipuri</a>, <a href="https://publications.waset.org/abstracts/search?q=Barin%20Kumar%20De"> Barin Kumar De</a>, <a href="https://publications.waset.org/abstracts/search?q=Mrinal%20Kanti%20Bhowmik"> Mrinal Kanti Bhowmik</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Biometric face recognition technology (FRT) has gained a lot of attention due to its extensive variety of applications in both security and non-security perspectives. It has come into view to provide a secure solution in identification and verification of person identity. Although other biometric based methods like fingerprint scans, iris scans are available, FRT is verified as an efficient technology for its user-friendliness and contact freeness. Accurate facial feature localization plays an important role for many facial analysis applications including biometrics and emotion recognition. But, there are certain factors, which make facial feature localization a challenging task. On human face, expressions can be seen from the subtle movements of facial muscles and influenced by internal emotional states. These non-rigid facial movements cause noticeable alterations in locations of facial landmarks, their usual shapes, which sometimes create occlusions in facial feature areas making face recognition as a difficult problem. The paper proposes a new hybrid based technique for automatic landmark detection in both neutral and expressive frontal and near frontal face images. The method uses the concept of thresholding, sequential searching and other image processing techniques for locating the landmark points on the face. Also, a Graphical User Interface (GUI) based software is designed that could automatically detect 16 landmark points around eyes, nose and mouth that are mostly affected by the changes in facial muscles. The proposed system has been tested on widely used JAFFE and Cohn Kanade database. Also, the system is tested on DeitY-TU face database which is created in the Biometrics Laboratory of Tripura University under the research project funded by Department of Electronics & Information Technology, Govt. of India. The performance of the proposed method has been done in terms of error measure and accuracy. The method has detection rate of 98.82% on JAFFE database, 91.27% on Cohn Kanade database and 93.05% on DeitY-TU database. Also, we have done comparative study of our proposed method with other techniques developed by other researchers. This paper will put into focus emotion-oriented systems through AU detection in future based on the located features. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=biometrics" title="biometrics">biometrics</a>, <a href="https://publications.waset.org/abstracts/search?q=face%20recognition" title=" face recognition"> face recognition</a>, <a href="https://publications.waset.org/abstracts/search?q=facial%20landmarks" title=" facial landmarks"> facial landmarks</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20processing" title=" image processing"> image processing</a> </p> <a href="https://publications.waset.org/abstracts/22182/a-geometric-based-hybrid-approach-for-facial-feature-localization" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/22182.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">412</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">6051</span> Reduced Complexity of ML Detection Combined with DFE</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Jae-Hyun%20Ro">Jae-Hyun Ro</a>, <a href="https://publications.waset.org/abstracts/search?q=Yong-Jun%20Kim"> Yong-Jun Kim</a>, <a href="https://publications.waset.org/abstracts/search?q=Chang-Bin%20Ha"> Chang-Bin Ha</a>, <a href="https://publications.waset.org/abstracts/search?q=Hyoung-Kyu%20Song"> Hyoung-Kyu Song </a> </p> <p class="card-text"><strong>Abstract:</strong></p> In multiple input multiple output-orthogonal frequency division multiplexing (MIMO-OFDM) systems, many detection schemes have been developed to improve the error performance and to reduce the complexity. Maximum likelihood (ML) detection has optimal error performance but it has very high complexity. Thus, this paper proposes reduced complexity of ML detection combined with decision feedback equalizer (DFE). The error performance of the proposed detection scheme is higher than the conventional DFE. But the complexity of the proposed scheme is lower than the conventional ML detection. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=detection" title="detection">detection</a>, <a href="https://publications.waset.org/abstracts/search?q=DFE" title=" DFE"> DFE</a>, <a href="https://publications.waset.org/abstracts/search?q=MIMO-OFDM" title=" MIMO-OFDM"> MIMO-OFDM</a>, <a href="https://publications.waset.org/abstracts/search?q=ML" title=" ML"> ML</a> </p> <a href="https://publications.waset.org/abstracts/42215/reduced-complexity-of-ml-detection-combined-with-dfe" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/42215.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">610</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">6050</span> Theoretical Reflections on Metaphor and Cohesion and the Coherence of Face-To-Face Interactions</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Afef%20Badri">Afef Badri</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The role of metaphor in creating the coherence and the cohesion of discourse in online interactive talk has almost received no attention. This paper intends to provide some theoretical reflections on metaphorical coherence as a jointly constructed process that evolves in online, face-to-face interactions. It suggests that the presence of a global conceptual structure in a conversation makes it conceptually cohesive. Yet, coherence remains a process largely determined by other variables (shared goals, communicative intentions, and framework of understanding). Metaphorical coherence created by these variables can be useful in detecting bias in media reporting. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=coherence" title="coherence">coherence</a>, <a href="https://publications.waset.org/abstracts/search?q=cohesion" title=" cohesion"> cohesion</a>, <a href="https://publications.waset.org/abstracts/search?q=face-to-face%20interactions" title=" face-to-face interactions"> face-to-face interactions</a>, <a href="https://publications.waset.org/abstracts/search?q=metaphor" title=" metaphor"> metaphor</a> </p> <a href="https://publications.waset.org/abstracts/68682/theoretical-reflections-on-metaphor-and-cohesion-and-the-coherence-of-face-to-face-interactions" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/68682.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">247</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">6049</span> An Automatic Large Classroom Attendance Conceptual Model Using Face Counting</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Sirajdin%20Olagoke%20Adeshina">Sirajdin Olagoke Adeshina</a>, <a href="https://publications.waset.org/abstracts/search?q=Haidi%20Ibrahim"> Haidi Ibrahim</a>, <a href="https://publications.waset.org/abstracts/search?q=Akeem%20Salawu"> Akeem Salawu</a> </p> <p class="card-text"><strong>Abstract:</strong></p> large lecture theatres cannot be covered by a single camera but rather by a multicamera setup because of their size, shape, and seating arrangements. Although, classroom capture is achievable through a single camera. Therefore, a design and implementation of a multicamera setup for a large lecture hall were considered. Researchers have shown emphasis on the impact of class attendance taken on the academic performance of students. However, the traditional method of carrying out this exercise is below standard, especially for large lecture theatres, because of the student population, the time required, sophistication, exhaustiveness, and manipulative influence. An automated large classroom attendance system is, therefore, imperative. The common approach in this system is face detection and recognition, where known student faces are captured and stored for recognition purposes. This approach will require constant face database updates due to constant changes in the facial features. Alternatively, face counting can be performed by cropping the localized faces on the video or image into a folder and then count them. This research aims to develop a face localization-based approach to detect student faces in classroom images captured using a multicamera setup. A selected Haar-like feature cascade face detector trained with an asymmetric goal to minimize the False Rejection Rate (FRR) relative to the False Acceptance Rate (FAR) was applied on Raspberry Pi 4B. A relationship between the two factors (FRR and FAR) was established using a constant (λ) as a trade-off between the two factors for automatic adjustment during training. An evaluation of the proposed approach and the conventional AdaBoost on classroom datasets shows an improvement of 8% TPR (output result of low FRR) and 7% minimization of the FRR. The average learning speed of the proposed approach was improved with 1.19s execution time per image compared to 2.38s of the improved AdaBoost. Consequently, the proposed approach achieved 97% TPR with an overhead constraint time of 22.9s compared to 46.7s of the improved Adaboost when evaluated on images obtained from a large lecture hall (DK5) USM. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=automatic%20attendance" title="automatic attendance">automatic attendance</a>, <a href="https://publications.waset.org/abstracts/search?q=face%20detection" title=" face detection"> face detection</a>, <a href="https://publications.waset.org/abstracts/search?q=haar-like%20cascade" title=" haar-like cascade"> haar-like cascade</a>, <a href="https://publications.waset.org/abstracts/search?q=manual%20attendance" title=" manual attendance"> manual attendance</a> </p> <a href="https://publications.waset.org/abstracts/165576/an-automatic-large-classroom-attendance-conceptual-model-using-face-counting" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/165576.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">71</span> </span> </div> </div> <ul class="pagination"> <li class="page-item disabled"><span class="page-link">‹</span></li> <li class="page-item active"><span class="page-link">1</span></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=face%20detection&page=2">2</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=face%20detection&page=3">3</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=face%20detection&page=4">4</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=face%20detection&page=5">5</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=face%20detection&page=6">6</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=face%20detection&page=7">7</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=face%20detection&page=8">8</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=face%20detection&page=9">9</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=face%20detection&page=10">10</a></li> <li class="page-item disabled"><span class="page-link">...</span></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=face%20detection&page=202">202</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=face%20detection&page=203">203</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=face%20detection&page=2" rel="next">›</a></li> </ul> </div> </main> <footer> <div id="infolinks" class="pt-3 pb-2"> <div class="container"> <div style="background-color:#f5f5f5;" class="p-3"> <div class="row"> <div class="col-md-2"> <ul class="list-unstyled"> About <li><a href="https://waset.org/page/support">About Us</a></li> <li><a href="https://waset.org/page/support#legal-information">Legal</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/WASET-16th-foundational-anniversary.pdf">WASET celebrates its 16th foundational anniversary</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Account <li><a href="https://waset.org/profile">My Account</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Explore <li><a href="https://waset.org/disciplines">Disciplines</a></li> <li><a href="https://waset.org/conferences">Conferences</a></li> <li><a href="https://waset.org/conference-programs">Conference Program</a></li> <li><a href="https://waset.org/committees">Committees</a></li> <li><a href="https://publications.waset.org">Publications</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Research <li><a href="https://publications.waset.org/abstracts">Abstracts</a></li> <li><a href="https://publications.waset.org">Periodicals</a></li> <li><a href="https://publications.waset.org/archive">Archive</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Open Science <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Science-Philosophy.pdf">Open Science Philosophy</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Science-Award.pdf">Open Science Award</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Society-Open-Science-and-Open-Innovation.pdf">Open Innovation</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Postdoctoral-Fellowship-Award.pdf">Postdoctoral Fellowship Award</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Scholarly-Research-Review.pdf">Scholarly Research Review</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Support <li><a href="https://waset.org/page/support">Support</a></li> <li><a href="https://waset.org/profile/messages/create">Contact Us</a></li> <li><a href="https://waset.org/profile/messages/create">Report Abuse</a></li> </ul> </div> </div> </div> </div> </div> <div class="container text-center"> <hr style="margin-top:0;margin-bottom:.3rem;"> <a href="https://creativecommons.org/licenses/by/4.0/" target="_blank" class="text-muted small">Creative Commons Attribution 4.0 International License</a> <div id="copy" class="mt-2">© 2024 World Academy of Science, Engineering and Technology</div> </div> </footer> <a href="javascript:" id="return-to-top"><i class="fas fa-arrow-up"></i></a> <div class="modal" id="modal-template"> <div class="modal-dialog"> <div class="modal-content"> <div class="row m-0 mt-1"> <div class="col-md-12"> <button type="button" class="close" data-dismiss="modal" aria-label="Close"><span aria-hidden="true">×</span></button> </div> </div> <div class="modal-body"></div> </div> </div> </div> <script src="https://cdn.waset.org/static/plugins/jquery-3.3.1.min.js"></script> <script src="https://cdn.waset.org/static/plugins/bootstrap-4.2.1/js/bootstrap.bundle.min.js"></script> <script src="https://cdn.waset.org/static/js/site.js?v=150220211556"></script> <script> jQuery(document).ready(function() { /*jQuery.get("https://publications.waset.org/xhr/user-menu", function (response) { jQuery('#mainNavMenu').append(response); });*/ jQuery.get({ url: "https://publications.waset.org/xhr/user-menu", cache: false }).then(function(response){ jQuery('#mainNavMenu').append(response); }); }); </script> </body> </html>