
Search results for: human activity recognition

Commenced in January 2007 | Frequency: Monthly | Edition: International | Paper Count: 14883

14883. A Contribution to Human Activities Recognition Using Expert System Techniques
Authors: Malika Yaici, Soraya Aloui, Sara Semchaoui
Abstract: This paper deals with human activity recognition from sensor data, an active research area whose main objective is to obtain a high recognition rate. In this work, a recognition system based on expert systems is proposed; recognition is performed using objects, object states, and gestures, taking the context into account (the location of the objects and of the person performing the activity, and the duration of the elementary actions and of the activity). The system recognizes complex activities after decomposing them into simple, easy-to-recognize activities. The proposed method can be applied to any type of activity. Simulation results show the robustness of our system and its speed of decision.
Keywords: human activity recognition, ubiquitous computing, context-awareness, expert system
Procedia: https://publications.waset.org/abstracts/171721/a-contribution-to-human-activities-recognition-using-expert-system-techniques | PDF: https://publications.waset.org/abstracts/171721.pdf | Downloads: 118

14882. Human Activities Recognition Based on Expert System
Authors: Malika Yaici, Soraya Aloui, Sara Semchaoui
Abstract: Recognition of human activities from sensor data is an active research area, and the main objective is to obtain a high recognition rate. In this work, we propose a recognition system based on expert systems. The proposed system performs recognition based on objects, object states, and gestures, taking the context into account (the location of the objects and of the person performing the activity, and the duration of the elementary actions and of the activity). This work focuses on complex activities, which are decomposed into simple, easy-to-recognize activities. The proposed method can be applied to any type of activity. Simulation results show the robustness of our system and its speed of decision.
Keywords: human activity recognition, ubiquitous computing, context-awareness, expert system
Procedia: https://publications.waset.org/abstracts/151943/human-activities-recognition-based-on-expert-system | PDF: https://publications.waset.org/abstracts/151943.pdf | Downloads: 139

14881. Smartphone-Based Human Activity Recognition by Machine Learning Methods
Authors: Yanting Cao, Kazumitsu Nawata
Abstract: As smartphones are upgraded, their software and hardware become smarter, so smartphone-based human activity recognition can become more refined, complex, and detailed. In this context, we analyzed a set of experimental data obtained by observing and measuring 30 volunteers performing six activities of daily living (ADL). Due to the large sample size, and especially the 561-feature vector of time- and frequency-domain variables, cleaning these intractable features and training a proper model becomes extremely challenging. After a series of feature selection and parameter adjustment steps, a well-performing SVM classifier was trained.
Keywords: smart sensors, human activity recognition, artificial intelligence, SVM
Procedia: https://publications.waset.org/abstracts/142359/smartphone-based-human-activity-recognition-by-machine-learning-methods | PDF: https://publications.waset.org/abstracts/142359.pdf | Downloads: 143

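As a rough illustration of the pipeline described in the abstract above, the following Python sketch selects a subset of a 561-dimensional feature vector and tunes an SVM classifier. The file names and UCI-HAR-style text format are assumptions for illustration, not the authors' exact setup.

```python
# Sketch: feature selection + SVM for smartphone HAR (file names are hypothetical).
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# X: (n_samples, 561) feature matrix, y: activity labels (six ADLs)
X_train = np.loadtxt("X_train.txt")   # assumed UCI-HAR-style text dumps
y_train = np.loadtxt("y_train.txt")
X_test = np.loadtxt("X_test.txt")
y_test = np.loadtxt("y_test.txt")

pipe = Pipeline([
    ("scale", StandardScaler()),
    ("select", SelectKBest(f_classif, k=200)),  # prune the 561 features
    ("svm", SVC(kernel="rbf")),
])
# Small grid over the SVM hyper-parameters ("parameter adjustment")
grid = GridSearchCV(pipe, {"svm__C": [1, 10, 100], "svm__gamma": ["scale", 0.01]}, cv=5)
grid.fit(X_train, y_train)
print("test accuracy:", grid.score(X_test, y_test))
```
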
14880. Lightweight Hybrid Convolutional and Recurrent Neural Networks for Wearable Sensor Based Human Activity Recognition
Authors: Sonia Perez-Gamboa, Qingquan Sun, Yan Zhang
Abstract: Non-intrusive sensor-based human activity recognition (HAR) is utilized in a spectrum of applications, including fitness tracking devices, gaming, health care monitoring, and smartphone applications. Deep learning models such as convolutional neural networks (CNNs) and long short-term memory (LSTM) recurrent neural networks (RNNs) provide a way to achieve HAR accurately and effectively. In this paper, we design a multi-layer hybrid architecture with CNN and LSTM and explore a variety of multi-layer combinations. Based on this exploration, we present a lightweight, hybrid, multi-layer model that improves recognition performance by integrating local, scale-invariant features with the temporal dependencies of activities. The experimental results demonstrate the efficacy of the proposed model, which achieves a 94.7% activity recognition rate on a benchmark human activity dataset. This model outperforms traditional machine learning and other deep learning methods. Additionally, our implementation achieves a balance between recognition rate and training time.
Keywords: deep learning, LSTM, CNN, human activity recognition, inertial sensor
Procedia: https://publications.waset.org/abstracts/131782/lightweight-hybrid-convolutional-and-recurrent-neural-networks-for-wearable-sensor-based-human-activity-recognition | PDF: https://publications.waset.org/abstracts/131782.pdf | Downloads: 150

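The following is a minimal Keras sketch of a hybrid CNN + LSTM model for windowed inertial data, in the spirit of the architecture described above; the window length, channel count, and layer sizes are illustrative assumptions rather than the authors' configuration.

```python
# Sketch of a CNN + LSTM hybrid for windowed inertial data; sizes are illustrative.
import tensorflow as tf
from tensorflow.keras import layers, models

WINDOW, CHANNELS, N_CLASSES = 128, 6, 6   # e.g. 128 samples x (3-axis acc + 3-axis gyro)

model = models.Sequential([
    layers.Input(shape=(WINDOW, CHANNELS)),
    # Convolutional front-end extracts local motion patterns
    layers.Conv1D(64, 5, activation="relu"),
    layers.MaxPooling1D(2),
    layers.Conv1D(64, 5, activation="relu"),
    layers.MaxPooling1D(2),
    # Recurrent back-end models temporal dependencies across the window
    layers.LSTM(64),
    layers.Dropout(0.3),
    layers.Dense(N_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
model.summary()
# model.fit(X_windows, y_labels, epochs=30, validation_split=0.2)  # X_windows: (N, 128, 6)
```
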
14879. A Human Activity Recognition System Based on Sensory Data Related to Object Usage
Authors: M. Abdullah Al-Wadud
Abstract: Sensor-based activity recognition systems usually account only for which sensors have been activated to perform an activity. The system then combines the conditional probabilities of those sensors to represent different activities and makes its decision on that basis. However, information about the sensors that have not been activated may also be of great help in deciding which activity has been performed. This paper proposes an approach in which sensory data related to both the usage and non-usage of objects are utilized to classify activities. Experimental results show the promising performance of the proposed method.
Keywords: Naïve Bayesian based classification, activity recognition, sensor data, object-usage model
Procedia: https://publications.waset.org/abstracts/4112/a-human-activity-recognition-system-based-on-sensory-data-related-to-object-usage | PDF: https://publications.waset.org/abstracts/4112.pdf | Downloads: 322

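One simple way to make both usage and non-usage of objects informative is a Bernoulli Naïve Bayes model, which scores the zeros (non-activated sensors) as well as the ones. The sketch below uses made-up object sensors and labels and is only an illustration of that idea, not the paper's implementation.

```python
# Sketch: Bernoulli Naive Bayes over binary object-usage vectors.
# A Bernoulli model scores both activated (1) and non-activated (0) sensors,
# which is one simple way to exploit "non-usage" information. Data are made up.
import numpy as np
from sklearn.naive_bayes import BernoulliNB

# Columns: kettle, cup, toothbrush, faucet  (hypothetical object sensors)
X = np.array([
    [1, 1, 0, 0],   # making tea
    [1, 1, 0, 1],   # making tea
    [0, 0, 1, 1],   # brushing teeth
    [0, 0, 1, 0],   # brushing teeth
])
y = ["make_tea", "make_tea", "brush_teeth", "brush_teeth"]

clf = BernoulliNB().fit(X, y)
print(clf.predict([[0, 1, 0, 1]]))          # observation with some sensors silent
print(clf.predict_proba([[0, 1, 0, 1]]))    # class probabilities
```
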
14878. Real Time Multi Person Action Recognition Using Pose Estimates
Authors: Aishrith Rao
Abstract: Human activity recognition is an important aspect of video analytics, and many approaches have been proposed to enable action recognition. In this approach, the model identifies the actions of the multiple people in the frame and classifies them accordingly. A few approaches use RNNs and 3D CNNs, which are computationally expensive and cannot be trained with the small datasets that are currently available. Multi-person action recognition is performed in order to understand the positions and actions of the people present in the video frame. The size of the video frame can be adjusted as a hyper-parameter depending on the hardware resources available. OpenPose is used to calculate pose estimates, using a CNN to produce heat-maps from which skeleton features (essentially joint features) are obtained. The features are then extracted, and a classification algorithm can be applied to classify the action.
Keywords: human activity recognition, computer vision, pose estimates, convolutional neural networks
Procedia: https://publications.waset.org/abstracts/127872/real-time-multi-person-action-recognition-using-pose-estimates | PDF: https://publications.waset.org/abstracts/127872.pdf | Downloads: 139

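A possible realization of the final classification stage is sketched below: per-frame joint keypoints (assumed to be produced by OpenPose or a similar pose estimator) are normalized and fed to an off-the-shelf classifier. The normalization and the random-forest choice are assumptions for illustration.

```python
# Sketch: classify an action from per-frame 2D joint keypoints.
# Assumes keypoints have already been extracted (e.g. by OpenPose); the
# normalization and the random-forest classifier are illustrative choices.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def pose_features(keypoints):
    """keypoints: (n_joints, 2) array of (x, y). Normalize to be roughly
    translation- and scale-invariant, then flatten into a feature vector."""
    kp = np.asarray(keypoints, dtype=float)
    kp -= kp.mean(axis=0)                     # remove position
    scale = np.linalg.norm(kp, axis=1).max()  # remove overall body size
    return (kp / (scale + 1e-8)).ravel()

# X_poses: list of (n_joints, 2) arrays, y: action labels per person per frame
rng = np.random.default_rng(0)
X_poses = [rng.normal(size=(18, 2)) for _ in range(100)]   # placeholder data
y = rng.choice(["wave", "sit", "walk"], size=100)

X = np.stack([pose_features(p) for p in X_poses])
clf = RandomForestClassifier(n_estimators=200).fit(X, y)
print(clf.predict(X[:3]))
```
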
14877. Fitness Action Recognition Based on MediaPipe
Authors: Zixuan Xu, Yichun Lou, Yang Song, Zihuai Lin
Abstract: MediaPipe is an open-source machine learning and computer vision framework that can be ported to multiple platforms, which makes it easier to use for recognizing human activity. Many human recognition systems have been built on this framework, but the fundamental issue remains the recognition of human behavior and posture. In this paper, two methods for recognizing human gestures based on MediaPipe are proposed: the first uses the Adaptive Boosting algorithm to recognize a series of fitness gestures, and the second uses the Fast Dynamic Time Warping algorithm to recognize 413 continuous fitness actions. Both methods are also applicable to any human posture movement recognition.
Keywords: computer vision, MediaPipe, adaptive boosting, fast dynamic time warping
Procedia: https://publications.waset.org/abstracts/160758/fitness-action-recognition-based-on-mediapipe | PDF: https://publications.waset.org/abstracts/160758.pdf | Downloads: 118

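For the second method, a nearest-template matcher built on Fast Dynamic Time Warping might look like the sketch below; the angle trajectories are synthetic, and the fastdtw library call is an assumed stand-in for the authors' implementation.

```python
# Sketch: nearest-template action matching with Fast Dynamic Time Warping.
# Assumes each action is represented as a time series of joint angles (e.g.
# derived from MediaPipe pose landmarks); templates and query are synthetic here.
import numpy as np
from fastdtw import fastdtw                # pip install fastdtw
from scipy.spatial.distance import euclidean

def classify(query, templates):
    """templates: dict label -> (T, d) angle sequence. Returns the closest label."""
    best_label, best_dist = None, float("inf")
    for label, seq in templates.items():
        dist, _path = fastdtw(query, seq, dist=euclidean)
        if dist < best_dist:
            best_label, best_dist = label, dist
    return best_label, best_dist

t = np.linspace(0, 2 * np.pi, 60)
templates = {
    "squat": np.c_[np.sin(t), np.cos(t)],          # toy 2-D angle trajectories
    "jumping_jack": np.c_[np.sin(2 * t), np.cos(2 * t)],
}
query = np.c_[np.sin(t + 0.1), np.cos(t + 0.1)]    # slightly shifted squat
print(classify(query, templates))
```
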
14876. Improving Activity Recognition Classification of Repetitious Beginner Swimming Using a 2-Step Peak/Valley Segmentation Method with Smoothing and Resampling for Machine Learning
Authors: Larry Powell, Seth Polsley, Drew Casey, Tracy Hammond
Abstract: Human activity recognition (HAR) systems have shown positive performance when recognizing repetitive activities like walking, running, and sleeping. Water-based activities are a relatively new area for activity recognition, and work in this area has largely focused on supporting the elite and competitive swimming population, which already has excellent coordination and proper form. Beginner swimmers are not perfect, and activity recognition needs to support their individual motions to help them. Activity recognition algorithms are traditionally built around short segments of timed sensor data. Using a time-window input can cause performance issues in the machine learning model: the window can be too small or too large, requiring careful tuning and precise data segmentation. In this work, we present a method that uses a time window as the initial segmentation and then separates the data based on the change in the sensor value. Our system uses a multi-phase segmentation method that pulls all peaks and valleys for each axis of an accelerometer placed on the swimmer's lower back. This results in high recognition performance using leave-one-subject-out validation in our study with 20 beginner swimmers, with a model optimized on our final dataset achieving an F-score of 0.95.
Keywords: time window, peak/valley segmentation, feature extraction, beginner swimming, activity recognition
Procedia: https://publications.waset.org/abstracts/156773/improving-activity-recognition-classification-of-repetitious-beginner-swimming-using-a-2-step-peakvalley-segmentation-method-with-smoothing-and-resampling-for-machine-learning | PDF: https://publications.waset.org/abstracts/156773.pdf | Downloads: 123

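A minimal sketch of the peak/valley segmentation idea, using scipy's find_peaks on a smoothed accelerometer axis; the smoothing window and prominence threshold are illustrative values, not the paper's.

```python
# Sketch: peak/valley segmentation of one accelerometer axis.
# Smoothing window and prominence threshold are illustrative, not the paper's values.
import numpy as np
from scipy.signal import find_peaks

def segment_peaks_valleys(signal, smooth=9, prominence=0.3):
    """Return indices of peaks and valleys after simple moving-average smoothing."""
    kernel = np.ones(smooth) / smooth
    smoothed = np.convolve(signal, kernel, mode="same")
    peaks, _ = find_peaks(smoothed, prominence=prominence)
    valleys, _ = find_peaks(-smoothed, prominence=prominence)
    return peaks, valleys

t = np.linspace(0, 10, 500)
acc_z = np.sin(2 * np.pi * 1.2 * t) + 0.1 * np.random.randn(t.size)  # toy stroke signal
peaks, valleys = segment_peaks_valleys(acc_z)
# Each consecutive peak/valley pair bounds one sub-movement to featurize and classify.
marks = np.sort(np.r_[peaks, valleys])
segments = list(zip(marks[:-1], marks[1:]))
print(len(peaks), len(valleys), len(segments))
```
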
14875. Multimodal Deep Learning for Human Activity Recognition
Authors: Ons Slimene, Aroua Taamallah, Maha Khemaja
Abstract: In recent years, human activity recognition (HAR) has been a key area of research due to its diverse applications, and it has garnered increasing attention in the field of computer vision. HAR plays an important role in people's daily lives, as it makes it possible to learn advanced knowledge about human activities from data. In HAR, activities are usually represented by exploiting different types of sensors, such as embedded sensors or visual sensors. However, these sensors have limitations, such as local obstacles, image-related obstacles, sensor unreliability, and consumer concerns. Recently, several deep learning-based approaches have been proposed for HAR; these approaches fall into two categories based on the type of data used: vision-based approaches and sensor-based approaches. This paper highlights the importance of multimodal fusion of skeleton data obtained from videos and data generated by embedded sensors, using deep neural networks, to achieve HAR. We propose a deep multimodal fusion network based on a two-stream architecture. The two streams use a Convolutional Neural Network combined with a Bidirectional LSTM (CNN-BiLSTM) to process the skeleton data and the data generated by embedded sensors, and fusion at the feature level is considered. The proposed model was evaluated on the public OPPORTUNITY++ dataset and achieved an accuracy of 96.77%.
Keywords: human activity recognition, action recognition, sensors, vision, human-centric sensing, deep learning, context-awareness
Procedia: https://publications.waset.org/abstracts/162633/multimodal-deep-learning-for-human-activity-recognition | PDF: https://publications.waset.org/abstracts/162633.pdf | Downloads: 101

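A feature-level two-stream fusion of a CNN-BiLSTM skeleton stream and a CNN-BiLSTM sensor stream could be sketched in Keras as follows; the input shapes, layer sizes, and class count are assumptions, not the evaluated model.

```python
# Sketch of a two-stream CNN-BiLSTM with feature-level fusion; shapes illustrative.
import tensorflow as tf
from tensorflow.keras import layers, models

def stream(input_shape, name):
    """One modality stream: 1D convolution for local patterns, BiLSTM for dynamics."""
    inp = layers.Input(shape=input_shape, name=name)
    x = layers.Conv1D(64, 5, activation="relu")(inp)
    x = layers.MaxPooling1D(2)(x)
    x = layers.Bidirectional(layers.LSTM(64))(x)
    return inp, x

skel_in, skel_feat = stream((100, 75), "skeleton")    # e.g. 100 frames x 25 joints x 3
sens_in, sens_feat = stream((100, 12), "sensors")     # e.g. 100 samples x 12 IMU channels
fused = layers.concatenate([skel_feat, sens_feat])    # feature-level fusion
out = layers.Dense(17, activation="softmax")(layers.Dropout(0.3)(fused))  # class count illustrative

model = models.Model([skel_in, sens_in], out)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
model.summary()
```
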
14874. Human Action Recognition Using Wavelets of Derived Beta Distributions
Authors: Neziha Jaouedi, Noureddine Boujnah, Mohamed Salim Bouhlel
Abstract: In the framework of enhancing human-machine interaction systems, this paper focuses on human behavior analysis and action recognition. Human behavior is characterized by a duality of actions and reactions (movements, psychological changes, verbal and emotional expression). It is worth noting that much information is hidden behind gestures, sudden motions, point trajectories, and speeds; many research works have treated these as information retrieval problems. In our work, we focus on motion extraction, tracking, and action recognition using wavelet network approaches. Our contribution extracts the human silhouette using a Gaussian Mixture Model (GMM) and tracks body movement through trajectory models of motion constructed from a Kalman filter. These models allow the noise to be removed by extracting the main motion features and constitute a stable basis for identifying the evolution of human activity. Each modality is then used to recognize a human action using the wavelets of derived beta distributions approach. The proposed approach has been validated successfully on a subset of the KTH and UCF Sports databases.
Keywords: features extraction, human action classifier, wavelet neural network, beta wavelet
Procedia: https://publications.waset.org/abstracts/79396/human-action-recognition-using-wavelets-of-derived-beta-distributions | PDF: https://publications.waset.org/abstracts/79396.pdf | Downloads: 411

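The motion-extraction front end described above (GMM-based foreground extraction plus Kalman-filtered trajectories) can be approximated with OpenCV as in the sketch below; the video path, filter constants, and centroid tracking are illustrative simplifications, not the authors' implementation.

```python
# Sketch: GMM background subtraction plus Kalman-filter tracking of the
# foreground centroid (OpenCV). Video path and filter constants are illustrative.
import cv2
import numpy as np

cap = cv2.VideoCapture("action_clip.avi")          # hypothetical input clip
bg = cv2.createBackgroundSubtractorMOG2(history=200, varThreshold=25)

kf = cv2.KalmanFilter(4, 2)                        # state: x, y, vx, vy; measurement: x, y
kf.transitionMatrix = np.array([[1, 0, 1, 0], [0, 1, 0, 1],
                                [0, 0, 1, 0], [0, 0, 0, 1]], np.float32)
kf.measurementMatrix = np.eye(2, 4, dtype=np.float32)
kf.processNoiseCov = 1e-3 * np.eye(4, dtype=np.float32)

trajectory = []
while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = bg.apply(frame)                         # GMM foreground mask
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
    ys, xs = np.nonzero(mask > 200)
    prediction = kf.predict()
    if xs.size:                                    # correct with the observed centroid
        kf.correct(np.array([[np.float32(xs.mean())], [np.float32(ys.mean())]]))
    trajectory.append(prediction[:2].ravel())      # smoothed motion trajectory
cap.release()
print(len(trajectory), "tracked frames")
```
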
14873. Relevant LMA Features for Human Motion Recognition
Authors: Insaf Ajili, Malik Mallem, Jean-Yves Didier
Abstract: Motion recognition from videos is a very complex task due to the high variability of motions. This paper describes the challenges of human motion recognition, especially the motion representation step with relevant features. Our descriptor vector is inspired by the Laban Movement Analysis method. We propose discriminative features selected using the Random Forest algorithm in order to remove redundant features and make learning algorithms operate faster and more effectively. We validate our method on the MSRC-12 and UTKinect datasets.
Keywords: discriminative LMA features, features reduction, human motion recognition, random forest
Procedia: https://publications.waset.org/abstracts/96299/relevant-lma-features-for-human-motion-recognition | PDF: https://publications.waset.org/abstracts/96299.pdf | Downloads: 195

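Feature reduction by random-forest importance, as described above, can be done with scikit-learn roughly as follows; the synthetic descriptor matrix and median threshold are illustrative.

```python
# Sketch: pruning redundant motion descriptors by random-forest feature importance.
# Thresholding at the median importance is an illustrative choice.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectFromModel

# Stand-in for an LMA-style descriptor matrix: 40 features, some uninformative
X, y = make_classification(n_samples=400, n_features=40, n_informative=12, random_state=0)

forest = RandomForestClassifier(n_estimators=300, random_state=0).fit(X, y)
selector = SelectFromModel(forest, threshold="median", prefit=True)
X_reduced = selector.transform(X)

kept = np.flatnonzero(selector.get_support())
print(f"kept {X_reduced.shape[1]} of {X.shape[1]} features:", kept[:10], "...")
```
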
14872. On the Network Packet Loss Tolerance of SVM Based Activity Recognition
Authors: Gamze Uslu, Sebnem Baydere, Alper K. Demir
Abstract: In this study, the data loss tolerance of a Support Vector Machine (SVM) based activity recognition model and its multi-activity classification performance when data are received over a lossy wireless sensor network are examined. Initially, the classification algorithm we use is evaluated in terms of resilience to random data loss with 3D acceleration sensor data for sitting, lying, walking, and standing actions. The results show that the proposed classification method can recognize these activities successfully despite high data loss. Secondly, the effect of differentiated quality of service performance on activity recognition success is measured with activity data acquired from a multi-hop wireless sensor network, which introduces high data loss. The effect of the number of nodes on reliability and multi-activity classification success is demonstrated in a simulation environment. To the best of our knowledge, the effect of data loss in a wireless sensor network on the activity detection success rate of an SVM-based classification algorithm has not been studied before.
Keywords: activity recognition, support vector machines, acceleration sensor, wireless sensor networks, packet loss
Procedia: https://publications.waset.org/abstracts/14201/on-the-network-packet-loss-tolerance-of-svm-based-activity-recognition | PDF: https://publications.waset.org/abstracts/14201.pdf | Downloads: 475

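A toy experiment in the spirit of the study, measuring how an SVM's activity classification accuracy degrades as random data loss increases; the synthetic features and mean-imputation loss model are assumptions and do not reproduce the paper's network simulation.

```python
# Sketch: measuring SVM activity-recognition accuracy under simulated random data loss.
# Synthetic data stand in for 3-D acceleration features; the loss model is a simple
# uniform drop of feature samples followed by column-mean imputation.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(1)
X, y = rng.normal(size=(1000, 30)), rng.integers(0, 4, size=1000)  # 4 toy activities
X += y[:, None] * 0.8                                              # make classes separable
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

clf = SVC(kernel="rbf").fit(X_tr, y_tr)

def drop_and_impute(features, loss_rate, rng):
    """Randomly drop feature samples, then impute with the column mean."""
    lossy = features.copy()
    lossy[rng.random(lossy.shape) < loss_rate] = np.nan
    col_mean = np.nanmean(lossy, axis=0)
    return np.where(np.isnan(lossy), col_mean, lossy)

for loss in (0.0, 0.2, 0.4, 0.6):
    acc = clf.score(drop_and_impute(X_te, loss, rng), y_te)
    print(f"loss {loss:.0%}: accuracy {acc:.3f}")
```
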
14871. Specified Human Motion Recognition and Unknown Hand-Held Object Tracking
Authors: Jinsiang Shaw, Pik-Hoe Chen
Abstract: This paper aims to integrate human recognition, motion recognition, and object tracking technologies without requiring a pre-trained database model for motion recognition or for the unknown object itself. Furthermore, it can simultaneously track multiple users and multiple objects. Unlike other existing human motion recognition methods, our approach employs a rule-based condition method to determine whether a user's hand is approaching or departing from an object. It uses a background subtraction method to separate the human and the object from the background and employs behavior features to effectively interpret human object-grabbing actions. With an object's histogram characteristics, we are able to isolate and track it using back projection. Hence, a moving object's trajectory can be recorded and the object itself can be located. This technique can be used in a camera surveillance system in a shopping area to perform real-time intelligent surveillance, thus preventing theft. Experimental results verify the validity of the developed surveillance algorithm, with an accuracy of 83% for shoplifting detection.
Keywords: Automatic Tracking, Back Projection, Motion Recognition, Shoplifting
Procedia: https://publications.waset.org/abstracts/66866/specified-human-motion-recognition-and-unknown-hand-held-object-tracking | PDF: https://publications.waset.org/abstracts/66866.pdf | Downloads: 333

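The back-projection tracking step could be realized with OpenCV's histogram back projection and CamShift, roughly as below; the input clip and the initial object box (which in the paper comes from the grab-detection step) are hard-coded assumptions.

```python
# Sketch: tracking an unknown hand-held object by colour-histogram back projection
# and CamShift (OpenCV). The initial ROI is assumed to come from the grab-detection
# step; here it is hard-coded for illustration.
import cv2
import numpy as np

cap = cv2.VideoCapture("store_camera.avi")         # hypothetical surveillance clip
ok, frame = cap.read()
x, y, w, h = 200, 150, 60, 60                      # assumed object box at grab time
roi = cv2.cvtColor(frame[y:y + h, x:x + w], cv2.COLOR_BGR2HSV)
hist = cv2.calcHist([roi], [0], None, [180], [0, 180])   # hue histogram of the object
cv2.normalize(hist, hist, 0, 255, cv2.NORM_MINMAX)

track_window = (x, y, w, h)
term = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1)
path = []
while True:
    ok, frame = cap.read()
    if not ok:
        break
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    back = cv2.calcBackProject([hsv], [0], hist, [0, 180], 1)  # object likelihood map
    _box, track_window = cv2.CamShift(back, track_window, term)
    path.append(track_window[:2])                  # recorded object trajectory
cap.release()
print(len(path), "trajectory points")
```
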
14870. Vision-Based Hand Segmentation Techniques for Human-Computer Interaction
Authors: M. Jebali, M. Jemni
Abstract: This work is part of a vision-based hand gesture recognition system for a natural human-computer interface. Hand tracking and segmentation are the primary steps of any hand gesture recognition system. The aim of this paper is to develop a robust and efficient hand segmentation algorithm to serve as the input to another system, one that attempts to bring HCI performance close to human-human interaction by modeling an intelligent sign language recognition system based on prediction in the context of a dialogue between the system (avatar) and the interlocutor. For the purpose of hand segmentation, an occlusion-handling approach has been proposed that gives superior results for detecting the hand in an image.
Keywords: HCI, sign language recognition, object tracking, hand segmentation
Procedia: https://publications.waset.org/abstracts/26490/vision-based-hand-segmentation-techniques-for-human-computer-interaction | PDF: https://publications.waset.org/abstracts/26490.pdf | Downloads: 412

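A common skin-colour segmentation baseline in HSV and YCrCb, which this line of work builds on, is sketched below; the threshold ranges are illustrative and, as such models generally are, sensitive to lighting.

```python
# Sketch: skin-colour hand segmentation in HSV and YCrCb (OpenCV).
# The threshold ranges are common illustrative values and are sensitive to lighting.
import cv2
import numpy as np

frame = cv2.imread("hand.jpg")                     # hypothetical input frame

hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
ycrcb = cv2.cvtColor(frame, cv2.COLOR_BGR2YCrCb)

mask_hsv = cv2.inRange(hsv, (0, 30, 60), (25, 150, 255))
mask_ycrcb = cv2.inRange(ycrcb, (0, 135, 85), (255, 180, 135))
mask = cv2.bitwise_and(mask_hsv, mask_ycrcb)       # require agreement of both colour spaces

# Clean the mask and keep the largest blob as the hand candidate
mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
if contours:
    hand = max(contours, key=cv2.contourArea)
    print("hand bounding box:", cv2.boundingRect(hand))
```
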
14869. Intelligent Human Pose Recognition Based on EMG Signal Analysis and Machine 3D Model
Authors: Si Chen, Quanhong Jiang
Abstract: Posture recognition technology is increasingly mature, and human movement information is now widely used in sports rehabilitation, human-computer interaction, medical health, human posture assessment, and other fields. This project takes an original approach: it proposes collecting myoelectric (EMG) data with dedicated acquisition equipment, mapping changes in muscle posture to a degree of freedom through data processing, jointly adjusting the data with a three-dimensional muscle model, and thereby realizing basic pose recognition. On this basis, bionic aids or medical rehabilitation equipment can be further developed with the help of robotic arms and other cutting-edge technology, which has a bright future and considerable room for development.
Keywords: pose recognition, 3D animation, electromyography, machine learning, bionics
Procedia: https://publications.waset.org/abstracts/166827/intelligent-human-pose-recognition-based-on-emg-signal-analysis-and-machine-3d-model | PDF: https://publications.waset.org/abstracts/166827.pdf | Downloads: 79

14868. Job Satisfaction among Public and Private Universities in Egypt Related to Organizational and Personal Aspects
Authors: Reem Alkadeem
Abstract: This study aims at evaluating the overall job satisfaction of faculty members and relating it to organizational and personal aspects in Egyptian public and private universities. These aspects are identified through an extensive study of all factors that might affect job satisfaction. The most influential parameters selected are academics' demographics, human resource management, organizational profile, workload, teamwork skills, recognition, autonomy, teaching activity, research activity, and motivation. A questionnaire of 94 questions was used to assess job satisfaction and the previously mentioned parameters. It was distributed among seven hundred faculty members of different universities in Egypt, and two hundred and twenty-seven responded. This sample was gathered from twelve universities and the Supreme Council of Universities. ANOVA showed a significant relationship (p < 0.05) between eight of the selected parameters and job satisfaction: age, rank, human resource management, profile of organizational characteristics, workload, recognition, teaching activity, and motivation.
Keywords: job satisfaction, higher education, organizational profile, Egyptian universities
Procedia: https://publications.waset.org/abstracts/19286/job-satisfaction-among-public-and-private-universities-in-egypt-related-to-organizational-and-personal-aspects | PDF: https://publications.waset.org/abstracts/19286.pdf | Downloads: 484

14867. Preliminary Study of Human Reliability of Control in Case of Fire Based on the Decision Processes and Stress Model of Human in a Fire
Authors: Seung-Un Chae, Heung-Yul Kim, Sa-Kil Kim
Abstract: This paper presents the findings of a preliminary study on human control performance in case of fire. The relationship between human control and human decision-making is studied through a decision-process and stress model of humans in a fire, covering the behavioral aspects involved in the decision process during a fire incident. The decision process comprises six individual perceptual processes: recognition, validation, definition, evaluation, commitment, and reassessment. Humans may then be stressed while trying to reach an optimal decision for their activity. This paper explores problems in human control processes and stress in a catastrophic situation; the future approach will therefore be concerned with reducing stress and ambiguous, irrelevant information.
Keywords: human reliability, decision processes, stress model, fire
Procedia: https://publications.waset.org/abstracts/50470/preliminary-study-of-human-reliability-of-control-in-case-of-fire-based-on-the-decision-processes-and-stress-model-of-human-in-a-fire | PDF: https://publications.waset.org/abstracts/50470.pdf | Downloads: 986

14866. Automatic Speech Recognition Systems Performance Evaluation Using Word Error Rate Method
Authors: João Rato, Nuno Costa
Abstract: Human verbal communication is a two-way process that requires mutual understanding. This kind of communication, also called dialogue, can take place not only between human agents but also between humans and machines. The interaction between man and machine by means of natural language plays an important role in improving communication between the two. Aiming at determining the performance of several speech recognition systems, this document reports the results of tests carried out according to the Word Error Rate evaluation method. It also provides a set of information related to man-machine communication systems. From this work, conclusions were drawn regarding speech recognition systems, among them their poor performance when interpreting voice in noisy environments.
Keywords: automatic speech recognition, man-machine conversation, speech recognition, spoken dialogue systems, word error rate
Procedia: https://publications.waset.org/abstracts/62274/automatic-speech-recognition-systems-performance-evaluation-using-word-error-rate-method | PDF: https://publications.waset.org/abstracts/62274.pdf | Downloads: 322

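Word Error Rate itself is just a word-level edit distance normalized by the reference length; a self-contained sketch:

```python
# Sketch: Word Error Rate (WER) as word-level edit distance over reference length.
def wer(reference: str, hypothesis: str) -> float:
    ref, hyp = reference.split(), hypothesis.split()
    # Dynamic-programming edit distance (substitutions, insertions, deletions)
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + cost) # substitution / match
    return d[len(ref)][len(hyp)] / max(len(ref), 1)

print(wer("turn on the kitchen light", "turn of the kitchen lights"))  # 0.4
```
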
14865. Recognizing an Individual, Their Topic of Conversation and Cultural Background from 3D Body Movement
Authors: Gheida J. Shahrour, Martin J. Russell
Abstract: The 3D body movement signals captured during human-human conversation contain clues not only to the content of people's communication but also to their culture and personality. This paper is concerned with the automatic extraction of this information from body movement signals. For the purpose of this research, we collected a novel corpus from 27 subjects and arranged them into groups according to their culture. We arranged each group into pairs, and each pair communicated about different topics. A state-of-the-art recognition system is applied to the problems of person, culture, and topic recognition. We borrowed modeling, classification, and normalization techniques from speech recognition. We used Gaussian Mixture Modeling (GMM) as the main technique for building our three systems, obtaining 77.78%, 55.47%, and 39.06% accuracy for the person, culture, and topic recognition systems, respectively. In addition, we combined the above GMM systems with Support Vector Machines (SVM) to obtain 85.42%, 62.50%, and 40.63% accuracy for person, culture, and topic recognition, respectively. Although direct comparison among these three recognition systems is difficult, it appears that our person recognition system performs best for both GMM and GMM-SVM, suggesting that inter-subject differences (i.e., the subjects' personality traits) are a major source of variation. When removing these traits from the culture and topic recognition systems using the Nuisance Attribute Projection (NAP) and Intersession Variability Compensation (ISVC) techniques, we obtained 73.44% and 46.09% accuracy for culture and topic recognition, respectively.
Keywords: person recognition, topic recognition, culture recognition, 3D body movement signals, variability compensation
Procedia: https://publications.waset.org/abstracts/19473/recognizing-an-individual-their-topic-of-conversation-and-cultural-background-from-3d-body-movement | PDF: https://publications.waset.org/abstracts/19473.pdf | Downloads: 541

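The GMM classification scheme borrowed from speaker recognition (one mixture model per class, decision by highest likelihood) can be sketched with scikit-learn as follows; the body-movement features here are synthetic placeholders.

```python
# Sketch: GMM-based classification, one mixture per class, pick the highest
# average log-likelihood. Data are synthetic stand-ins for frame-level
# body-movement features.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
train = {                                      # label -> (n_frames, n_features)
    "subject_A": rng.normal(0.0, 1.0, size=(500, 12)),
    "subject_B": rng.normal(1.5, 1.0, size=(500, 12)),
}
models = {label: GaussianMixture(n_components=8, covariance_type="diag",
                                 random_state=0).fit(feats)
          for label, feats in train.items()}

def classify(sequence):
    """Average frame log-likelihood under each class GMM; return the best class."""
    scores = {label: gmm.score(sequence) for label, gmm in models.items()}
    return max(scores, key=scores.get), scores

test_seq = rng.normal(1.5, 1.0, size=(200, 12))
print(classify(test_seq)[0])                   # expected: subject_B
```
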
14864. Violence Detection and Tracking on Moving Surveillance Video Using Machine Learning Approach
Authors: Abe Degale D., Cheng Jian
Abstract: Violent action recognition is crucial when creating automated video surveillance systems. In recent years, hand-crafted feature detectors have been the primary method for achieving violence detection, such as the recognition of fighting activity. Researchers have also looked into learning-based representational models. On benchmark datasets created especially for the detection of violent sequences in sports and movies, these methods produced good accuracy results. However, the Hockey dataset's videos, with their surveillance camera motion, present challenges for these algorithms in learning discriminating features. Deep representation-based methods have shown success on image recognition and human activity detection challenges. For the purpose of detecting violent images and identifying aggressive human behaviours, this research proposes a deep representation-based model using the transfer learning idea. The results show that the suggested approach outperforms state-of-the-art accuracy levels by learning the most discriminating features, attaining 99.34% and 99.98% accuracy on the Hockey and Movies datasets, respectively.
Keywords: violence detection, faster RCNN, transfer learning, surveillance video
Procedia: https://publications.waset.org/abstracts/171296/violence-detection-and-tracking-on-moving-surveillance-video-using-machine-learning-approach | PDF: https://publications.waset.org/abstracts/171296.pdf | Downloads: 106

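A transfer-learning baseline for violent/non-violent classification might look like the sketch below; the MobileNetV2 backbone, image size, and directory layout are illustrative assumptions, not the paper's exact model.

```python
# Sketch: transfer learning for violent/non-violent frame classification with a
# pretrained backbone. Backbone and data layout are illustrative choices.
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.applications import MobileNetV2

base = MobileNetV2(weights="imagenet", include_top=False, input_shape=(224, 224, 3))
base.trainable = False                          # reuse pretrained features

model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dropout(0.3),
    layers.Dense(1, activation="sigmoid"),      # violent vs. non-violent
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# Hypothetical frame folders: data/violent/*.jpg, data/nonviolent/*.jpg
train_ds = tf.keras.utils.image_dataset_from_directory(
    "data", image_size=(224, 224), batch_size=32)
# model.fit(train_ds, epochs=5)
```
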
14863. Investigating Activity Recognition Using 9-Axis Sensors and Filters in Wearable Devices
Authors: Jun Gil Ahn, Jong Kang Park, Jong Tae Kim
Abstract: In this paper, we analyze the major components of activity recognition (AR) in wearable devices with 9-axis sensors and sensor fusion filters. 9-axis sensors commonly comprise a 3-axis accelerometer, a 3-axis gyroscope, and a 3-axis magnetometer. We chose the Kalman filter and the Direction Cosine Matrix (DCM) filter as sensor fusion filters. We also construct sensor fusion data from the sensor data of each activity and perform classification, assessing AR accuracy with Naïve Bayes and SVM. According to the classification results, we observed that the DCM filter and specific combinations of the sensing axes are more effective for AR in wearable devices when classifying walking, running, ascending, and descending.
Keywords: accelerometer, activity recognition, direction cosine matrix filter, gyroscope, Kalman filter, magnetometer
Procedia: https://publications.waset.org/abstracts/56198/investigating-activity-recognition-using-9-axis-sensors-and-filters-in-wearable-devices | PDF: https://publications.waset.org/abstracts/56198.pdf | Downloads: 333

<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=accelerometer" title="accelerometer">accelerometer</a>, <a href="https://publications.waset.org/abstracts/search?q=activity%20recognition" title=" activity recognition"> activity recognition</a>, <a href="https://publications.waset.org/abstracts/search?q=directiona%20cosine%20matrix%20filter" title=" directiona cosine matrix filter"> directiona cosine matrix filter</a>, <a href="https://publications.waset.org/abstracts/search?q=gyroscope" title=" gyroscope"> gyroscope</a>, <a href="https://publications.waset.org/abstracts/search?q=Kalman%20filter" title=" Kalman filter"> Kalman filter</a>, <a href="https://publications.waset.org/abstracts/search?q=magnetometer" title=" magnetometer"> magnetometer</a> </p> <a href="https://publications.waset.org/abstracts/56198/investigating-activity-recognition-using-9-axis-sensors-and-filters-in-wearable-devices" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/56198.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">333</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">14862</span> Hand Detection and Recognition for Malay Sign Language</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Mohd%20Noah%20A.%20Rahman">Mohd Noah A. Rahman</a>, <a href="https://publications.waset.org/abstracts/search?q=Afzaal%20H.%20Seyal"> Afzaal H. Seyal</a>, <a href="https://publications.waset.org/abstracts/search?q=Norhafilah%20Bara"> Norhafilah Bara</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Developing a software application using an interface with computers and peripheral devices using gestures of human body such as hand movements keeps growing in interest. A review on this hand gesture detection and recognition based on computer vision technique remains a very challenging task. This is to provide more natural, innovative and sophisticated way of non-verbal communication, such as sign language, in human computer interaction. Nevertheless, this paper explores hand detection and hand gesture recognition applying a vision based approach. The hand detection and recognition used skin color spaces such as HSV and YCrCb are applied. However, there are limitations that are needed to be considered. Almost all of skin color space models are sensitive to quickly changing or mixed lighting circumstances. There are certain restrictions in order for the hand recognition to give better results such as the distance of user’s hand to the webcam and the posture and size of the hand. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=hand%20detection" title="hand detection">hand detection</a>, <a href="https://publications.waset.org/abstracts/search?q=hand%20gesture" title=" hand gesture"> hand gesture</a>, <a href="https://publications.waset.org/abstracts/search?q=hand%20recognition" title=" hand recognition"> hand recognition</a>, <a href="https://publications.waset.org/abstracts/search?q=sign%20language" title=" sign language"> sign language</a> </p> <a href="https://publications.waset.org/abstracts/46765/hand-detection-and-recognition-for-malay-sign-language" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/46765.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">306</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">14861</span> Freedom of Information and Freedom of Expression</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Amin%20Pashaye%20Amiri">Amin Pashaye Amiri</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Freedom of information, according to which the public has a right to have access to government-held information, is largely considered as a tool for improving transparency and accountability in governments, and as a requirement of self-governance and good governance. So far, more than ninety countries have recognized citizens’ right to have access to public information. This recognition often took place through the adoption of an act referred to as “freedom of information act”, “access to public records act”, and so on. A freedom of information act typically imposes a positive obligation on a government to initially and regularly release certain public information, and also obliges it to provide individuals with information they request. Such an act usually allows governmental bodies to withhold information only when it falls within a limited number of exemptions enumerated in the act such as exemptions for protecting privacy of individuals and protecting national security. Some steps have been taken at the national and international level towards the recognition of freedom of information as a human right. Freedom of information was recognized in a few countries as a part of freedom of expression, and therefore, as a human right. Freedom of information was also recognized by some international bodies as a human right. The Inter-American Court of Human Rights ruled in 2006 that Article 13 of the American Convention on Human Rights, which concerns the human right to freedom of expression, protects the right of all people to request access to government information. The European Court of Human Rights has recently taken a considerable step towards recognizing freedom of information as a human right. However, in spite of the measures that have been taken, public access to government information is not yet widely accepted as an international human right. The paper will consider the degree to which freedom of information has been recognized as a human right, and study the possibility of widespread recognition of such a human right in the future. 
It will also examine the possible benefits of such recognition for the development of the human right to free expression. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=freedom%20of%20information" title="freedom of information">freedom of information</a>, <a href="https://publications.waset.org/abstracts/search?q=freedom%20of%20expression" title=" freedom of expression"> freedom of expression</a>, <a href="https://publications.waset.org/abstracts/search?q=human%20rights" title=" human rights"> human rights</a>, <a href="https://publications.waset.org/abstracts/search?q=government%20information" title=" government information"> government information</a> </p> <a href="https://publications.waset.org/abstracts/23647/freedom-of-information-and-freedom-of-expression" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/23647.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">547</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">14860</span> Offline Signature Verification in Punjabi Based On SURF Features and Critical Point Matching Using HMM</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Rajpal%20Kaur">Rajpal Kaur</a>, <a href="https://publications.waset.org/abstracts/search?q=Pooja%20Choudhary"> Pooja Choudhary</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Biometrics, which refers to identifying an individual based on his or her physiological or behavioral characteristics, has the capability to reliably distinguish between an authorized person and an imposter. Signature recognition systems can be categorized as offline (static) and online (dynamic). This paper presents a SURF-feature-based recognition system for offline signatures that is trained with low-resolution scanned signature images. A person's signature is an important biometric attribute that can be used to authenticate human identity; the signature can be handled as an image and recognized using computer vision and HMM techniques. With modern computers, fast algorithms for signature recognition are needed, and although multiple techniques have been defined for the task, there remains considerable scope for research. In this paper, offline (static) signature recognition and verification using SURF features with an HMM is proposed, where the signature is captured and presented to the user in image format. Signatures are verified based on parameters extracted from the signature image using various image processing techniques. The offline signature verification and recognition system is implemented on the MATLAB platform, and it has been tested and found suitable for its purpose. The proposed method performs better than other recently proposed methods. 
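<p class="card-text">An illustrative sketch of the keypoint-descriptor-plus-HMM idea, assuming OpenCV and hmmlearn; because SURF is only available in non-free opencv-contrib builds, ORB is used here as a freely available stand-in, and the file names and model sizes are hypothetical:</p> <pre><code class="language-python">
# Extract a left-to-right sequence of keypoint descriptors from each genuine
# signature and fit a Gaussian HMM; a query is later accepted when its
# log-likelihood under the model exceeds a chosen threshold.
import cv2
import numpy as np
from hmmlearn import hmm

def descriptor_sequence(gray: np.ndarray) -> np.ndarray:
    detector = cv2.ORB_create(nfeatures=200)
    keypoints, desc = detector.detectAndCompute(gray, None)
    order = np.argsort([kp.pt[0] for kp in keypoints])   # stroke-like order
    return desc[order].astype(float)

genuine = [cv2.imread(p, cv2.IMREAD_GRAYSCALE) for p in ["sig1.png", "sig2.png"]]
sequences = [descriptor_sequence(img) for img in genuine if img is not None]
if sequences:
    model = hmm.GaussianHMM(n_components=5, covariance_type="diag", n_iter=50)
    model.fit(np.vstack(sequences), lengths=[len(s) for s in sequences])
    score = model.score(sequences[0])     # log-likelihood of a query signature
</code></pre>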
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=offline%20signature%20verification" title="offline signature verification">offline signature verification</a>, <a href="https://publications.waset.org/abstracts/search?q=offline%20signature%20recognition" title=" offline signature recognition"> offline signature recognition</a>, <a href="https://publications.waset.org/abstracts/search?q=signatures" title=" signatures"> signatures</a>, <a href="https://publications.waset.org/abstracts/search?q=SURF%20features" title=" SURF features"> SURF features</a>, <a href="https://publications.waset.org/abstracts/search?q=HMM" title=" HMM "> HMM </a> </p> <a href="https://publications.waset.org/abstracts/20259/offline-signature-verification-in-punjabi-based-on-surf-features-and-critical-point-matching-using-hmm" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/20259.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">384</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">14859</span> Influence of Human Resource Management Practices on Agricultural Employees’ Behavior</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=B.%20G.%20Abiona">B. G. Abiona</a>, <a href="https://publications.waset.org/abstracts/search?q=O.%20E.%20Fapojuwo"> O. E. Fapojuwo</a>, <a href="https://publications.waset.org/abstracts/search?q=T.%20Akinlawon"> T. Akinlawon</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This study assessed the influence of human resource management practices on agricultural employees’ behavior. Data were collected from 75 randomly selected respondents using a well-structured questionnaire. The mean age of the employees’ was 43.2 years. Major human resource management practices that influence employees behaviors were: In-service training are given to employees on a regular basis (average value of x=3.44), management reward employees who are committed to their job (average value of x =3.41) and reward are designed to encourage wide participation and activity (average value of x=3.41). Also, major employees’ behavior include: Managers and employees’ wants to create better job performance (average value of x=3.13) and administrator provides praise and recognition for effective performance and show appreciation for special effort (average value of x=3.05). Major factors affecting employees’ behavior were: inadequate training (average value of x=2.93), inadequate local and international training (average value of x=2.87), inadequate grants for training programmes (average value of x= 2.81). A significant relationship was found between gender (χ2 = 37.204, P<0.05), educational qualification (χ2 = 59.093, P<0.05), income (r =0.122, P<0.05) and human resource management practices (r = 0.573, P< 0.05) of the respondents and employees’ behavior. Management should encourage employees who are committed to their job through awards and recognition. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=human%20resources%20management" title="human resources management">human resources management</a>, <a href="https://publications.waset.org/abstracts/search?q=agricultural%20employees" title=" agricultural employees"> agricultural employees</a>, <a href="https://publications.waset.org/abstracts/search?q=behaviour%20research%20institutes" title="behaviour research institutes">behaviour research institutes</a>, <a href="https://publications.waset.org/abstracts/search?q=Nigeria" title=" Nigeria"> Nigeria</a> </p> <a href="https://publications.waset.org/abstracts/45218/influence-of-human-resource-management-practices-on-agricultural-employees-behavior" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/45218.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">253</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">14858</span> Emotion Recognition Using Artificial Intelligence</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Rahul%20Mohite">Rahul Mohite</a>, <a href="https://publications.waset.org/abstracts/search?q=Lahcen%20Ouarbya"> Lahcen Ouarbya</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This paper focuses on the interplay between humans and computer systems and the ability of these systems to understand and respond to human emotions, including non-verbal communication. Current emotion recognition systems are based solely on either facial or verbal expressions. The limitation of these systems is that it requires large training data sets. The paper proposes a system for recognizing human emotions that combines both speech and emotion recognition. The system utilizes advanced techniques such as deep learning and image recognition to identify facial expressions and comprehend emotions. The results show that the proposed system, based on the combination of facial expression and speech, outperforms existing ones, which are based solely either on facial or verbal expressions. The proposed system detects human emotion with an accuracy of 86%, whereas the existing systems have an accuracy of 70% using verbal expression only and 76% using facial expression only. In this paper, the increasing significance and demand for facial recognition technology in emotion recognition are also discussed. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=facial%20reputation" title="facial reputation">facial reputation</a>, <a href="https://publications.waset.org/abstracts/search?q=expression%20reputation" title=" expression reputation"> expression reputation</a>, <a href="https://publications.waset.org/abstracts/search?q=deep%20gaining%20knowledge%20of" title=" deep gaining knowledge of"> deep gaining knowledge of</a>, <a href="https://publications.waset.org/abstracts/search?q=photo%20reputation" title=" photo reputation"> photo reputation</a>, <a href="https://publications.waset.org/abstracts/search?q=facial%20technology" title=" facial technology"> facial technology</a>, <a href="https://publications.waset.org/abstracts/search?q=sign%20processing" title=" sign processing"> sign processing</a>, <a href="https://publications.waset.org/abstracts/search?q=photo%20type" title=" photo type"> photo type</a> </p> <a href="https://publications.waset.org/abstracts/162386/emotion-recognition-using-artificial-intelligence" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/162386.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">121</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">14857</span> Fine Grained Action Recognition of Skateboarding Tricks</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Frederik%20Calsius">Frederik Calsius</a>, <a href="https://publications.waset.org/abstracts/search?q=Mirela%20Popa"> Mirela Popa</a>, <a href="https://publications.waset.org/abstracts/search?q=Alexia%20Briassouli"> Alexia Briassouli</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In the field of machine learning, it is common practice to use benchmark datasets to prove the working of a method. The domain of action recognition in videos often uses datasets like Kinet-ics, Something-Something, UCF-101 and HMDB-51 to report results. Considering the properties of the datasets, there are no datasets that focus solely on very short clips (2 to 3 seconds), and on highly-similar fine-grained actions within one specific domain. This paper researches how current state-of-the-art action recognition methods perform on a dataset that consists of highly similar, fine-grained actions. To do so, a dataset of skateboarding tricks was created. The performed analysis highlights both benefits and limitations of state-of-the-art methods, while proposing future research directions in the activity recognition domain. The conducted research shows that the best results are obtained by fusing RGB data with OpenPose data for the Temporal Shift Module. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=activity%20recognition" title="activity recognition">activity recognition</a>, <a href="https://publications.waset.org/abstracts/search?q=fused%20deep%20representations" title=" fused deep representations"> fused deep representations</a>, <a href="https://publications.waset.org/abstracts/search?q=fine-grained%20dataset" title=" fine-grained dataset"> fine-grained dataset</a>, <a href="https://publications.waset.org/abstracts/search?q=temporal%20modeling" title=" temporal modeling"> temporal modeling</a> </p> <a href="https://publications.waset.org/abstracts/138954/fine-grained-action-recognition-of-skateboarding-tricks" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/138954.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">231</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">14856</span> Face Tracking and Recognition Using Deep Learning Approach</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Degale%20Desta">Degale Desta</a>, <a href="https://publications.waset.org/abstracts/search?q=Cheng%20Jian"> Cheng Jian</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The most important factor in identifying a person is their face. Even identical twins have their own distinct faces. As a result, identification and face recognition are needed to tell one person from another. A face recognition system is a verification tool used to establish a person's identity using biometrics. Nowadays, face recognition is a common technique used in a variety of applications, including home security systems, criminal identification, and phone unlock systems. This system is more secure because it only requires a facial image instead of other dependencies like a key or card. Face detection and face identification are the two phases that typically make up a human recognition system.The idea behind designing and creating a face recognition system using deep learning with Azure ML Python's OpenCV is explained in this paper. Face recognition is a task that can be accomplished using deep learning, and given the accuracy of this method, it appears to be a suitable approach. To show how accurate the suggested face recognition system is, experimental results are given in 98.46% accuracy using Fast-RCNN Performance of algorithms under different training conditions. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=deep%20learning" title="deep learning">deep learning</a>, <a href="https://publications.waset.org/abstracts/search?q=face%20recognition" title=" face recognition"> face recognition</a>, <a href="https://publications.waset.org/abstracts/search?q=identification" title=" identification"> identification</a>, <a href="https://publications.waset.org/abstracts/search?q=fast-RCNN" title=" fast-RCNN"> fast-RCNN</a> </p> <a href="https://publications.waset.org/abstracts/163134/face-tracking-and-recognition-using-deep-learning-approach" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/163134.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">140</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">14855</span> Developing an AI-Driven Application for Real-Time Emotion Recognition from Human Vocal Patterns</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Sayor%20Ajfar%20Aaron">Sayor Ajfar Aaron</a>, <a href="https://publications.waset.org/abstracts/search?q=Mushfiqur%20Rahman"> Mushfiqur Rahman</a>, <a href="https://publications.waset.org/abstracts/search?q=Sajjat%20Hossain%20Abir"> Sajjat Hossain Abir</a>, <a href="https://publications.waset.org/abstracts/search?q=Ashif%20Newaz"> Ashif Newaz</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This study delves into the development of an artificial intelligence application designed for real-time emotion recognition from human vocal patterns. Utilizing advanced machine learning algorithms, including deep learning and neural networks, the paper highlights both the technical challenges and potential opportunities in accurately interpreting emotional cues from speech. Key findings demonstrate the critical role of diverse training datasets and the impact of ambient noise on recognition accuracy, offering insights into future directions for improving robustness and applicability in real-world scenarios. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=artificial%20intelligence" title="artificial intelligence">artificial intelligence</a>, <a href="https://publications.waset.org/abstracts/search?q=convolutional%20neural%20network" title=" convolutional neural network"> convolutional neural network</a>, <a href="https://publications.waset.org/abstracts/search?q=emotion%20recognition" title=" emotion recognition"> emotion recognition</a>, <a href="https://publications.waset.org/abstracts/search?q=vocal%20patterns" title=" vocal patterns"> vocal patterns</a> </p> <a href="https://publications.waset.org/abstracts/185846/developing-an-ai-driven-application-for-real-time-emotion-recognition-from-human-vocal-patterns" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/185846.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">52</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">14854</span> An Erudite Technique for Face Detection and Recognition Using Curvature Analysis</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=S.%20Jagadeesh%20Kumar">S. Jagadeesh Kumar</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Face detection and recognition is an authoritative technology for image database management, video surveillance, and human computer interface (HCI). Face recognition is a rapidly nascent method, which has been extensively discarded in forensics such as felonious identification, tenable entree, and custodial security. This paper recommends an erudite technique using curvature analysis (CA) that has less false positives incidence, operative in different light environments and confiscates the artifacts that are introduced during image acquisition by ring correction in polar coordinate (RCP) method. This technique affronts mean and median filtering technique to remove the artifacts but it works in polar coordinate during image acquisition. Investigational fallouts for face detection and recognition confirms decent recitation even in diagonal orientation and stance variation. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=curvature%20analysis" title="curvature analysis">curvature analysis</a>, <a href="https://publications.waset.org/abstracts/search?q=ring%20correction%20in%20polar%20coordinate%20method" title=" ring correction in polar coordinate method"> ring correction in polar coordinate method</a>, <a href="https://publications.waset.org/abstracts/search?q=face%20detection" title=" face detection"> face detection</a>, <a href="https://publications.waset.org/abstracts/search?q=face%20recognition" title=" face recognition"> face recognition</a>, <a href="https://publications.waset.org/abstracts/search?q=human%20computer%20interaction" title=" human computer interaction"> human computer interaction</a> </p> <a href="https://publications.waset.org/abstracts/70748/an-erudite-technique-for-face-detection-and-recognition-using-curvature-analysis" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/70748.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">286</span> </span> </div> </div> <ul class="pagination"> <li class="page-item disabled"><span class="page-link">&lsaquo;</span></li> <li class="page-item active"><span class="page-link">1</span></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=human%20activity%20recognition&amp;page=2">2</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=human%20activity%20recognition&amp;page=3">3</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=human%20activity%20recognition&amp;page=4">4</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=human%20activity%20recognition&amp;page=5">5</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=human%20activity%20recognition&amp;page=6">6</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=human%20activity%20recognition&amp;page=7">7</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=human%20activity%20recognition&amp;page=8">8</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=human%20activity%20recognition&amp;page=9">9</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=human%20activity%20recognition&amp;page=10">10</a></li> <li class="page-item disabled"><span class="page-link">...</span></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=human%20activity%20recognition&amp;page=496">496</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=human%20activity%20recognition&amp;page=497">497</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=human%20activity%20recognition&amp;page=2" rel="next">&rsaquo;</a></li> </ul> </div> </main> <footer> <div id="infolinks" class="pt-3 pb-2"> <div class="container"> <div style="background-color:#f5f5f5;" class="p-3"> <div class="row"> <div class="col-md-2"> <ul 
class="list-unstyled"> About <li><a href="https://waset.org/page/support">About Us</a></li> <li><a href="https://waset.org/page/support#legal-information">Legal</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/WASET-16th-foundational-anniversary.pdf">WASET celebrates its 16th foundational anniversary</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Account <li><a href="https://waset.org/profile">My Account</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Explore <li><a href="https://waset.org/disciplines">Disciplines</a></li> <li><a href="https://waset.org/conferences">Conferences</a></li> <li><a href="https://waset.org/conference-programs">Conference Program</a></li> <li><a href="https://waset.org/committees">Committees</a></li> <li><a href="https://publications.waset.org">Publications</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Research <li><a href="https://publications.waset.org/abstracts">Abstracts</a></li> <li><a href="https://publications.waset.org">Periodicals</a></li> <li><a href="https://publications.waset.org/archive">Archive</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Open Science <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Science-Philosophy.pdf">Open Science Philosophy</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Science-Award.pdf">Open Science Award</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Society-Open-Science-and-Open-Innovation.pdf">Open Innovation</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Postdoctoral-Fellowship-Award.pdf">Postdoctoral Fellowship Award</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Scholarly-Research-Review.pdf">Scholarly Research Review</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Support <li><a href="https://waset.org/page/support">Support</a></li> <li><a href="https://waset.org/profile/messages/create">Contact Us</a></li> <li><a href="https://waset.org/profile/messages/create">Report Abuse</a></li> </ul> </div> </div> </div> </div> </div> <div class="container text-center"> <hr style="margin-top:0;margin-bottom:.3rem;"> <a href="https://creativecommons.org/licenses/by/4.0/" target="_blank" class="text-muted small">Creative Commons Attribution 4.0 International License</a> <div id="copy" class="mt-2">&copy; 2024 World Academy of Science, Engineering and Technology</div> </div> </footer> <a href="javascript:" id="return-to-top"><i class="fas fa-arrow-up"></i></a> <div class="modal" id="modal-template"> <div class="modal-dialog"> <div class="modal-content"> <div class="row m-0 mt-1"> <div class="col-md-12"> <button type="button" class="close" data-dismiss="modal" aria-label="Close"><span aria-hidden="true">&times;</span></button> </div> </div> <div class="modal-body"></div> </div> </div> </div> <script src="https://cdn.waset.org/static/plugins/jquery-3.3.1.min.js"></script> <script src="https://cdn.waset.org/static/plugins/bootstrap-4.2.1/js/bootstrap.bundle.min.js"></script> <script src="https://cdn.waset.org/static/js/site.js?v=150220211556"></script> <script> jQuery(document).ready(function() { /*jQuery.get("https://publications.waset.org/xhr/user-menu", function (response) { 
jQuery('#mainNavMenu').append(response); });*/ jQuery.get({ url: "https://publications.waset.org/xhr/user-menu", cache: false }).then(function(response){ jQuery('#mainNavMenu').append(response); }); }); </script> </body> </html>
