<!DOCTYPE html> <html lang="en" dir="ltr"> <head> <!-- Google tag (gtag.js) --> <script async src="https://www.googletagmanager.com/gtag/js?id=G-P63WKM1TM1"></script> <script> window.dataLayer = window.dataLayer || []; function gtag(){dataLayer.push(arguments);} gtag('js', new Date()); gtag('config', 'G-P63WKM1TM1'); </script> <!-- Yandex.Metrika counter --> <script type="text/javascript" > (function(m,e,t,r,i,k,a){m[i]=m[i]||function(){(m[i].a=m[i].a||[]).push(arguments)}; m[i].l=1*new Date(); for (var j = 0; j < document.scripts.length; j++) {if (document.scripts[j].src === r) { return; }} k=e.createElement(t),a=e.getElementsByTagName(t)[0],k.async=1,k.src=r,a.parentNode.insertBefore(k,a)}) (window, document, "script", "https://mc.yandex.ru/metrika/tag.js", "ym"); ym(55165297, "init", { clickmap:false, trackLinks:true, accurateTrackBounce:true, webvisor:false }); </script> <noscript><div><img src="https://mc.yandex.ru/watch/55165297" style="position:absolute; left:-9999px;" alt="" /></div></noscript> <!-- /Yandex.Metrika counter --> <!-- Matomo --> <!-- End Matomo Code --> <title>Search results for: human action classifier</title> <meta name="description" content="Search results for: human action classifier"> <meta name="keywords" content="human action classifier"> <meta name="viewport" content="width=device-width, initial-scale=1, minimum-scale=1, maximum-scale=1, user-scalable=no"> <meta charset="utf-8"> <link href="https://cdn.waset.org/favicon.ico" type="image/x-icon" rel="shortcut icon"> <link href="https://cdn.waset.org/static/plugins/bootstrap-4.2.1/css/bootstrap.min.css" rel="stylesheet"> <link href="https://cdn.waset.org/static/plugins/fontawesome/css/all.min.css" rel="stylesheet"> <link href="https://cdn.waset.org/static/css/site.css?v=150220211555" rel="stylesheet"> </head> <body> <header> <div class="container"> <nav class="navbar navbar-expand-lg navbar-light"> <a class="navbar-brand" href="https://waset.org"> <img 
src="https://cdn.waset.org/static/images/wasetc.png" alt="Open Science Research Excellence" title="Open Science Research Excellence" /> </a> <button class="d-block d-lg-none navbar-toggler ml-auto" type="button" data-toggle="collapse" data-target="#navbarMenu" aria-controls="navbarMenu" aria-expanded="false" aria-label="Toggle navigation"> <span class="navbar-toggler-icon"></span> </button> <div class="w-100"> <div class="d-none d-lg-flex flex-row-reverse"> <form method="get" action="https://waset.org/search" class="form-inline my-2 my-lg-0"> <input class="form-control mr-sm-2" type="search" placeholder="Search Conferences" value="human action classifier" name="q" aria-label="Search"> <button class="btn btn-light my-2 my-sm-0" type="submit"><i class="fas fa-search"></i></button> </form> </div> <div class="collapse navbar-collapse mt-1" id="navbarMenu"> <ul class="navbar-nav ml-auto align-items-center" id="mainNavMenu"> <li class="nav-item"> <a class="nav-link" href="https://waset.org/conferences" title="Conferences in 2024/2025/2026">Conferences</a> </li> <li class="nav-item"> <a class="nav-link" href="https://waset.org/disciplines" title="Disciplines">Disciplines</a> </li> <li class="nav-item"> <a class="nav-link" href="https://waset.org/committees" rel="nofollow">Committees</a> </li> <li class="nav-item dropdown"> <a class="nav-link dropdown-toggle" href="#" id="navbarDropdownPublications" role="button" data-toggle="dropdown" aria-haspopup="true" aria-expanded="false"> Publications </a> <div class="dropdown-menu" aria-labelledby="navbarDropdownPublications"> <a class="dropdown-item" href="https://publications.waset.org/abstracts">Abstracts</a> <a class="dropdown-item" href="https://publications.waset.org">Periodicals</a> <a class="dropdown-item" href="https://publications.waset.org/archive">Archive</a> </div> </li> <li class="nav-item"> <a class="nav-link" href="https://waset.org/page/support" title="Support">Support</a> </li> </ul> </div> </div> </nav> </div> 
</header> <main> <div class="container mt-4"> <div class="row"> <div class="col-md-9 mx-auto"> <form method="get" action="https://publications.waset.org/abstracts/search"> <div id="custom-search-input"> <div class="input-group"> <i class="fas fa-search"></i> <input type="text" class="search-query" name="q" placeholder="Author, Title, Abstract, Keywords" value="human action classifier"> <input type="submit" class="btn_search" value="Search"> </div> </div> </form> </div> </div> <div class="row mt-3"> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Commenced</strong> in January 2007</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Frequency:</strong> Monthly</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Edition:</strong> International</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Paper Count:</strong> 10739</div> </div> </div> </div> <h1 class="mt-3 mb-3 text-center" style="font-size:1.6rem;">Search results for: human action classifier</h1> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">10739</span> Human Action Recognition Using Wavelets of Derived Beta Distributions</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Neziha%20Jaouedi">Neziha Jaouedi</a>, <a href="https://publications.waset.org/abstracts/search?q=Noureddine%20Boujnah"> Noureddine Boujnah</a>, <a href="https://publications.waset.org/abstracts/search?q=Mohamed%20Salim%20Bouhlel"> Mohamed Salim Bouhlel</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In the framework of enhancing human-machine interaction systems, this paper focuses on human behavior analysis and action recognition.
Human behavior is characterized by a duality of actions and reactions (movements, psychological changes, verbal and emotional expression). Much information is hidden in gestures and in the trajectories and speeds of sudden motion points, and many research works have treated its recovery as an information retrieval problem. In our work we focus on motion extraction, tracking, and action recognition using wavelet network approaches. Our contribution combines human subtraction by a Gaussian Mixture Model (GMM) with body-movement trajectory models constructed from a Kalman filter. These models remove noise by extracting the main motion features and constitute a stable basis for identifying the evolution of human activity. Each modality is then used to recognize a human action with the wavelets of derived beta distributions approach. The proposed approach has been validated successfully on subsets of the KTH and UCF Sports databases. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=feautures%20extraction" title="features extraction">features extraction</a>, <a href="https://publications.waset.org/abstracts/search?q=human%20action%20classifier" title=" human action classifier"> human action classifier</a>, <a href="https://publications.waset.org/abstracts/search?q=wavelet%20neural%20network" title=" wavelet neural network"> wavelet neural network</a>, <a href="https://publications.waset.org/abstracts/search?q=beta%20wavelet" title=" beta wavelet"> beta wavelet</a> </p> <a href="https://publications.waset.org/abstracts/79396/human-action-recognition-using-wavelets-of-derived-beta-distributions" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/79396.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">411</span> </span> </div>
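As a concrete illustration of the trajectory-model step mentioned in this abstract, the following is a minimal constant-velocity Kalman filter that smooths a tracked centroid. This is a sketch in NumPy under stated assumptions (state layout, noise settings, and the function name are ours), not the authors' implementation:

```python
import numpy as np

def kalman_smooth(measurements, dt=1.0, q=1e-3, r=1.0):
    """Smooth a sequence of (x, y) centroid measurements.

    State is [x, y, vx, vy] under a constant-velocity motion model;
    q and r are illustrative process/measurement noise levels.
    """
    F = np.array([[1, 0, dt, 0],
                  [0, 1, 0, dt],
                  [0, 0, 1, 0],
                  [0, 0, 0, 1]], dtype=float)   # state transition
    H = np.array([[1, 0, 0, 0],
                  [0, 1, 0, 0]], dtype=float)   # observe position only
    Q = q * np.eye(4)                           # process noise
    R = r * np.eye(2)                           # measurement noise
    x = np.array([measurements[0][0], measurements[0][1], 0.0, 0.0])
    P = np.eye(4)
    track = []
    for z in measurements:
        # predict
        x = F @ x
        P = F @ P @ F.T + Q
        # update with the new measurement
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ (np.asarray(z, dtype=float) - H @ x)
        P = (np.eye(4) - K @ H) @ P
        track.append(x[:2].copy())
    return np.array(track)
```

Tracking the smoothed trajectory rather than raw detections is what suppresses the frame-to-frame jitter the abstract refers to as noise.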
</div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">10738</span> Classification of Red, Green and Blue Values from Face Images Using k-NN Classifier to Predict the Skin or Non-Skin</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Kemal%20Polat">Kemal Polat</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In this study, we estimate whether a pixel is skin using the RGB values obtained from the camera and a k-nearest neighbor (k-NN) classifier. The dataset used in this study has an unbalanced distribution and a linearly non-separable structure; it can also be regarded as a big data problem. The Skin dataset was taken from the UCI machine learning repository. As the classifier, we used the k-NN method to handle this big data problem, with the k value set to 1. To train and test the k-NN classifier, a 50-50% training-testing partition was used. As performance metrics, the TP rate, FP rate, precision, recall, f-measure, and AUC values were used to evaluate the k-NN classifier. The obtained results are as follows: 0.999, 0.001, 0.999, 0.999, 0.999, and 1.00. As can be seen from these results, the proposed method can be used to predict whether an image is skin or not.
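The 1-NN setup this abstract describes is simple enough to sketch in plain Python. The toy training pixels below are invented stand-ins, not the UCI Skin Segmentation data, and the function name is ours:

```python
import math

def knn_predict(train, query, k=1):
    """Classify an (R, G, B) pixel by majority vote of its k nearest
    training pixels (Euclidean distance in RGB space, k = 1 as in the
    abstract above)."""
    neighbours = sorted(train, key=lambda item: math.dist(item[0], query))[:k]
    labels = [label for _, label in neighbours]
    return max(set(labels), key=labels.count)

# Invented toy pixels for illustration only.
train = [
    ((224, 172, 105), "skin"),
    ((198, 134, 66), "skin"),
    ((34, 177, 76), "non-skin"),
    ((63, 72, 204), "non-skin"),
]
print(knn_predict(train, (210, 150, 90)))   # classified by its nearest pixel
```

With k = 1 the decision boundary follows the training points exactly, which is why the method copes with the linearly non-separable structure the abstract mentions.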
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=k-NN%20classifier" title="k-NN classifier">k-NN classifier</a>, <a href="https://publications.waset.org/abstracts/search?q=skin%20or%20non-skin%20classification" title=" skin or non-skin classification"> skin or non-skin classification</a>, <a href="https://publications.waset.org/abstracts/search?q=RGB%20values" title=" RGB values"> RGB values</a>, <a href="https://publications.waset.org/abstracts/search?q=classification" title=" classification"> classification</a> </p> <a href="https://publications.waset.org/abstracts/86538/classification-of-red-green-and-blue-values-from-face-images-using-k-nn-classifier-to-predict-the-skin-or-non-skin" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/86538.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">248</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">10737</span> The Diminished Online Persona: A Semantic Change of Chinese Classifier Mei on Weibo</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Hui%20Shi">Hui Shi</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This study investigates a newly emerged usage of Chinese numeral classifier mei (枚) in the cyberspace. In modern Chinese grammar, mei as a classifier should occupy the pre-nominal position, and its valid accompanying nouns are restricted to small, flat, fragile inanimate objects rather than humans. To examine the semantic change of mei, two types of data from Weibo.com were collected. 
First, 500 Weibo posts containing mei formed a corpus for analyzing this classifier's word-order distribution (post-nominal or pre-nominal) as well as the semantics of its accompanying nouns (inanimate or human). Second, considering that mei accompanies a remarkable number of human nouns in the first corpus, the second corpus is composed of mei-involved Weibo IDs from users located in first- and third-tier cities (n=8 respectively). The findings show that in the cyber community, mei frequently classifies human-related neologisms at the archaic post-nominal position. Moreover, females aged 23 to 29 as well as Weibo users from third-tier cities are the major populations who adopt mei in their user IDs for self-description and identity expression. This paper argues that the creative usage of mei gains popularity on the Chinese internet due to a humor effect: the marked word-order switch and the semantic misapplication combine to trigger incongruity and jocularity. This study has significance for research on Chinese cyber neologisms. It may also lay a foundation for further studies on Chinese classifier change and Chinese internet communication.
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=Chinese%20classifier" title="Chinese classifier">Chinese classifier</a>, <a href="https://publications.waset.org/abstracts/search?q=humor" title=" humor"> humor</a>, <a href="https://publications.waset.org/abstracts/search?q=neologism" title=" neologism"> neologism</a>, <a href="https://publications.waset.org/abstracts/search?q=semantic%20change" title=" semantic change"> semantic change</a> </p> <a href="https://publications.waset.org/abstracts/95249/the-diminished-online-persona-a-semantic-change-of-chinese-classifier-mei-on-weibo" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/95249.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">253</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">10736</span> Iqbal&#039;s Philosophy of Action in the Light of Contemporary Philosophy of Action</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Sevcan%20Ozturk">Sevcan Ozturk</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The aim of this paper is to analyze the twentieth-century Muslim philosopher Muhammad Iqbal’s philosophy of action in the light of the main issues of contemporary philosophy of action. Iqbal is one of the most celebrated and eminent figures of modern Islamic thought. However, a review of the works on Iqbal shows that most of the central concepts of his philosophy have not received enough attention. His notion of ‘action’ in its philosophical context is one of these concepts. 
One of the main characteristics of Iqbal’s approach is that he develops his discussion around the main themes of contemporary philosophy of action, which includes ontological and conceptual questions regarding the nature of human actions. He also discusses that action is the only way to develop human personality, and that the human being can only achieve immortality promised by Islam through his actions. Therefore, while presenting an approach that can be read in the light of contemporary philosophy of action, which has become one of the significant parts of modern philosophical discussions in the west particularly since the nineteenth century, he, at the same time, develops his own philosophy of action in the light of Islamic resources. Consequently, these two main characteristics of his discussion of the notion of action make his philosophy of action an important contribution to contemporary philosophy of action, a field that ignores the discussions of Muslim philosophers on action. Therefore, this paper aims at highlighting Iqbal’s contribution to the modern debate of action by analysing Iqbal’s notion of action in the light of the contemporary issues of philosophy of action. This will, first of all, include an examination of contemporary action theory. Although the main discussions of contemporary philosophy of action will provide the methodology of this study, the main paradigms of Iqbal’s approach to the notion of action will also be considered during the examination of the discussions of philosophy of action. Then, Iqbal’s own philosophy of action will be established in the light of the contemporary philosophy of action. It is hoped that this paper will cultivate a dialogue between Iqbal scholars and those working in the field of philosophy of action, and that it will be a contribution to the fields of Iqbal studies, philosophy of action, and intercultural philosophy. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=action" title="action">action</a>, <a href="https://publications.waset.org/abstracts/search?q=development%20of%20personality" title=" development of personality"> development of personality</a>, <a href="https://publications.waset.org/abstracts/search?q=Muhammad%20Iqbal" title=" Muhammad Iqbal"> Muhammad Iqbal</a>, <a href="https://publications.waset.org/abstracts/search?q=philosophy%20of%20action" title=" philosophy of action"> philosophy of action</a> </p> <a href="https://publications.waset.org/abstracts/64781/iqbals-philosophy-of-action-in-the-light-of-contemporary-philosophy-of-action" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/64781.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">371</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">10735</span> Assessing the Validity of Human Intention for Action: Exploring Unintentional Actions</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Fakhrul%20Abedin%20Tanvir">Fakhrul Abedin Tanvir</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This paper examines the validity of human intention for action, specifically focusing on unintentional actions that are unaffected by bias. Through the observation of a substantial number of individuals, estimated to be over 100, we investigate the power of human actions and their corresponding intentions. Given the underlying similarities in general thought processes and intentions among humans, it becomes possible to establish common patterns by observing a significant sample size. 
While this research provides observational results indicating a one-second validity of human intentions, it is important to note that these findings have not been scientifically proven. Nevertheless, this study contributes to the ongoing discourse by shedding light on participant expressions and experiences, furthering our understanding of human intentionality and action. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=human%20intention" title="human intention">human intention</a>, <a href="https://publications.waset.org/abstracts/search?q=bias" title=" bias"> bias</a>, <a href="https://publications.waset.org/abstracts/search?q=observation" title=" observation"> observation</a>, <a href="https://publications.waset.org/abstracts/search?q=validity" title=" validity"> validity</a> </p> <a href="https://publications.waset.org/abstracts/169070/assessing-the-validity-of-human-intention-for-action-exploring-unintentional-actions" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/169070.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">81</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">10734</span> Parkinson’s Disease Detection Analysis through Machine Learning Approaches</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Muhtasim%20Shafi%20Kader">Muhtasim Shafi Kader</a>, <a href="https://publications.waset.org/abstracts/search?q=Fizar%20Ahmed"> Fizar Ahmed</a>, <a href="https://publications.waset.org/abstracts/search?q=Annesha%20Acharjee"> Annesha Acharjee</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Machine learning and data mining are crucial in health 
care, as well as in medical information retrieval and disease detection. Machine learning approaches are now being utilized to improve the detection of a variety of critical health issues, including diabetes, neural cell tumors, and COVID-19. Parkinson's disease mainly affects senior citizens, including those in Bangladesh. Its indications are progressive and worsen with time: as the condition advances, patients have trouble walking and communicating, and they can also experience psychological and social changes, sleep problems, depression, memory loss, and fatigue. Parkinson's disease occurs in both men and women, although men are affected at roughly twice the rate of women. In this research, we aim to identify the most accurate ML algorithm for detecting the disease on a suitable dataset. To that end, nine ML classifiers are compared: Naive Bayes, Adaptive Boosting, Bagging Classifier, Decision Tree Classifier, Random Forest Classifier, XGB Classifier, K Nearest Neighbor Classifier, Support Vector Machine Classifier, and Gradient Boosting Classifier.
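A comparison loop of the kind this abstract describes can be sketched with scikit-learn. The synthetic dataset below is a stand-in (not the authors' Parkinson's data), and the XGB classifier is omitted since it lives in the external xgboost package:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import (AdaBoostClassifier, BaggingClassifier,
                              GradientBoostingClassifier, RandomForestClassifier)
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in for a Parkinson's detection dataset.
X, y = make_classification(n_samples=400, n_features=20, random_state=42)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=42)

classifiers = {
    "Naive Bayes": GaussianNB(),
    "Adaptive Boosting": AdaBoostClassifier(),
    "Bagging": BaggingClassifier(),
    "Decision Tree": DecisionTreeClassifier(),
    "Random Forest": RandomForestClassifier(),
    "K Nearest Neighbor": KNeighborsClassifier(),
    "Support Vector Machine": SVC(),
    "Gradient Boosting": GradientBoostingClassifier(),
}

scores = {}
for name, clf in classifiers.items():
    clf.fit(X_tr, y_tr)                                  # train each model
    scores[name] = accuracy_score(y_te, clf.predict(X_te))  # held-out accuracy
    print(f"{name}: {scores[name]:.3f}")
```

In practice one would also report the precision/recall-style metrics the surrounding abstracts use, since accuracy alone is misleading on unbalanced medical data.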
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=naive%20bayes" title="naive bayes">naive bayes</a>, <a href="https://publications.waset.org/abstracts/search?q=adaptive%20boosting" title=" adaptive boosting"> adaptive boosting</a>, <a href="https://publications.waset.org/abstracts/search?q=bagging%20classifier" title=" bagging classifier"> bagging classifier</a>, <a href="https://publications.waset.org/abstracts/search?q=decision%20tree%20classifier" title=" decision tree classifier"> decision tree classifier</a>, <a href="https://publications.waset.org/abstracts/search?q=random%20forest%20classifier" title=" random forest classifier"> random forest classifier</a>, <a href="https://publications.waset.org/abstracts/search?q=XBG%20classifier" title=" XBG classifier"> XBG classifier</a>, <a href="https://publications.waset.org/abstracts/search?q=k%20nearest%20neighbor%20classifier" title=" k nearest neighbor classifier"> k nearest neighbor classifier</a>, <a href="https://publications.waset.org/abstracts/search?q=support%20vector%20classifier" title=" support vector classifier"> support vector classifier</a>, <a href="https://publications.waset.org/abstracts/search?q=gradient%20boosting%20classifier" title=" gradient boosting classifier"> gradient boosting classifier</a> </p> <a href="https://publications.waset.org/abstracts/148163/parkinsons-disease-detection-analysis-through-machine-learning-approaches" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/148163.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">129</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">10733</span> Use of Fractal Geometry in Machine Learning</h5> <div class="card-body"> <p 
class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Fuad%20M.%20Alkoot">Fuad M. Alkoot</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The main component of a machine learning system is the classifier. Classifiers are mathematical models that can perform classification tasks for a specific application area. Additionally, many classifiers are combined using any of the available methods to reduce the classifier error rate. The benefits gained from the combination of multiple classifier designs have motivated the development of diverse approaches to multiple classifiers. We aim to investigate using fractal geometry to develop an improved classifier combiner. Initially, we experiment with measuring the fractal dimension of data and use the results in the development of a combiner strategy. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=fractal%20geometry" title="fractal geometry">fractal geometry</a>, <a href="https://publications.waset.org/abstracts/search?q=machine%20learning" title=" machine learning"> machine learning</a>, <a href="https://publications.waset.org/abstracts/search?q=classifier" title=" classifier"> classifier</a>, <a href="https://publications.waset.org/abstracts/search?q=fractal%20dimension" title=" fractal dimension"> fractal dimension</a> </p> <a href="https://publications.waset.org/abstracts/141274/use-of-fractal-geometry-in-machine-learning" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/141274.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">216</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">10732</span> Real Time Multi Person Action Recognition Using Pose
Estimates</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Aishrith%20Rao">Aishrith Rao</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Human activity recognition is an important aspect of video analytics, and many approaches have been proposed for action recognition. In this approach, the model identifies the action of each of the multiple people in the frame and classifies them accordingly. A few approaches use RNNs and 3D CNNs, which are computationally expensive and cannot be trained with the small datasets currently available. Multi-person action recognition is performed in order to understand the positions and actions of the people present in the video frame. The size of the video frame can be adjusted as a hyper-parameter depending on the hardware resources available. OpenPose is used to compute pose estimates with a CNN that produces heat maps, one of which provides skeleton features, which are essentially joint features. The features are then extracted, and a classification algorithm can be applied to classify the action.
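The feature-extraction step this abstract describes (joint coordinates in, classifier-ready vector out) can be sketched as follows. The normalization scheme is an illustrative assumption, not the paper's exact pipeline; OpenPose-style keypoints are assumed to arrive as (x, y) pairs per joint:

```python
import numpy as np

def pose_features(keypoints):
    """Turn per-person joint coordinates into a translation- and
    scale-invariant feature vector for a downstream action classifier.

    keypoints: (N, 2) array of (x, y) joint positions (e.g. from a pose
    estimator such as OpenPose). Centre on the mean joint, divide by the
    joint spread, then flatten.
    """
    pts = np.asarray(keypoints, dtype=float)
    centred = pts - pts.mean(axis=0)          # remove person position
    scale = np.linalg.norm(centred) or 1.0    # remove person size
    return (centred / scale).ravel()
```

Because the vector is invariant to where the person stands and how large they appear, the same classifier can be applied to every person detected in the frame.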
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=human%20activity%20recognition" title="human activity recognition">human activity recognition</a>, <a href="https://publications.waset.org/abstracts/search?q=computer%20vision" title=" computer vision"> computer vision</a>, <a href="https://publications.waset.org/abstracts/search?q=pose%20estimates" title=" pose estimates"> pose estimates</a>, <a href="https://publications.waset.org/abstracts/search?q=convolutional%20neural%20networks" title=" convolutional neural networks"> convolutional neural networks</a> </p> <a href="https://publications.waset.org/abstracts/127872/real-time-multi-person-action-recognition-using-pose-estimates" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/127872.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">139</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">10731</span> Speaker Recognition Using LIRA Neural Networks</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Nestor%20A.%20Garcia%20Fragoso">Nestor A. Garcia Fragoso</a>, <a href="https://publications.waset.org/abstracts/search?q=Tetyana%20Baydyk"> Tetyana Baydyk</a>, <a href="https://publications.waset.org/abstracts/search?q=Ernst%20Kussul"> Ernst Kussul</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This article contains information from our investigation in the field of voice recognition. For this purpose, we created a voice database that contains different phrases in two languages, English and Spanish, for men and women. As a classifier, the LIRA (Limited Receptive Area) grayscale neural classifier was selected. 
The LIRA grayscale neural classifier was developed for image recognition tasks and demonstrated good results. Therefore, we decided to develop a voice recognition system using this classifier. From a specific set of speakers, the system can recognize the speaker&rsquo;s voice. For this purpose, the system takes spectrograms of the voice signals as input, extracts their characteristics, and identifies the speaker. The results are described and analyzed in this article. The classifier can be used for speaker identification in security systems or in smart buildings for different types of intelligent devices. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=extreme%20learning" title="extreme learning">extreme learning</a>, <a href="https://publications.waset.org/abstracts/search?q=LIRA%20neural%20classifier" title=" LIRA neural classifier"> LIRA neural classifier</a>, <a href="https://publications.waset.org/abstracts/search?q=speaker%20identification" title=" speaker identification"> speaker identification</a>, <a href="https://publications.waset.org/abstracts/search?q=voice%20recognition" title=" voice recognition"> voice recognition</a> </p> <a href="https://publications.waset.org/abstracts/112384/speaker-recognition-using-lira-neural-networks" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/112384.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">177</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">10730</span> Application of Smplify-X Algorithm with Enhanced Gender Classifier in 3D Human Pose Estimation</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a
href="https://publications.waset.org/abstracts/search?q=Jiahe%20Liu">Jiahe Liu</a>, <a href="https://publications.waset.org/abstracts/search?q=Hongyang%20Yu"> Hongyang Yu</a>, <a href="https://publications.waset.org/abstracts/search?q=Miao%20Luo"> Miao Luo</a>, <a href="https://publications.waset.org/abstracts/search?q=Feng%20Qian"> Feng Qian</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The widespread application of 3D human body reconstruction spans various fields. Smplify-X, an algorithm reliant on single-image input, employs three distinct body parameter templates, necessitating gender classification of individuals within the input image. Researchers employed a ResNet18 network to train a gender classifier within the Smplify-X framework, setting the threshold at 0.9, designating images falling below this threshold as having neutral gender. This model achieved 62.38% accurate predictions and 7.54% incorrect predictions. Our improvement involved refining the MobileNet network, resulting in a raised threshold of 0.97. Consequently, we attained 78.89% accurate predictions and a mere 0.2% incorrect predictions, markedly enhancing prediction precision and enabling more precise 3D human body reconstruction. 
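The confidence-threshold rule this abstract describes (trust the gender prediction only above 0.97, otherwise fall back to the neutral SMPL-X template) reduces to a few lines. The function name and the two-probability interface are illustrative assumptions, not the paper's code:

```python
def assign_body_template(prob_male, prob_female, threshold=0.97):
    """Pick the SMPL-X body template from gender-classifier probabilities.

    Only a prediction whose top probability reaches the threshold is
    trusted; anything less confident falls back to the neutral template,
    mirroring the thresholding scheme described in the abstract above.
    """
    top_label, top_prob = max(
        (("male", prob_male), ("female", prob_female)), key=lambda t: t[1]
    )
    return top_label if top_prob >= threshold else "neutral"
```

Raising the threshold trades coverage for precision: more images fall back to the neutral template, but the gendered templates are applied only when the classifier is nearly certain, which is how the reported error rate drops from 7.54% to 0.2%.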
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=SMPLX" title="SMPLX">SMPLX</a>, <a href="https://publications.waset.org/abstracts/search?q=mobileNet" title=" mobileNet"> mobileNet</a>, <a href="https://publications.waset.org/abstracts/search?q=gender%20classification" title=" gender classification"> gender classification</a>, <a href="https://publications.waset.org/abstracts/search?q=3D%20human%20reconstruction" title=" 3D human reconstruction"> 3D human reconstruction</a> </p> <a href="https://publications.waset.org/abstracts/183520/application-of-smplify-x-algorithm-with-enhanced-gender-classifier-in-3d-human-pose-estimation" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/183520.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">99</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">10729</span> Comparing SVM and Naïve Bayes Classifier for Automatic Microaneurysm Detections </h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=A.%20Sopharak">A. Sopharak</a>, <a href="https://publications.waset.org/abstracts/search?q=B.%20Uyyanonvara"> B. Uyyanonvara</a>, <a href="https://publications.waset.org/abstracts/search?q=S.%20Barman"> S. Barman </a> </p> <p class="card-text"><strong>Abstract:</strong></p> Diabetic retinopathy is characterized by the development of retinal microaneurysms. The damage can be prevented if the disease is treated in its early stages. In this paper, we compare Support Vector Machine (SVM) and Naïve Bayes (NB) classifiers for automatic microaneurysm detection in images acquired through non-dilated pupils.
The Nearest Neighbor classifier is used as a baseline for comparison. Detected microaneurysms are validated with expert ophthalmologists’ hand-drawn ground-truths. The sensitivity, specificity, precision and accuracy of each method are also compared. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=diabetic%20retinopathy" title="diabetic retinopathy">diabetic retinopathy</a>, <a href="https://publications.waset.org/abstracts/search?q=microaneurysm" title=" microaneurysm"> microaneurysm</a>, <a href="https://publications.waset.org/abstracts/search?q=naive%20Bayes%20classifier" title=" naive Bayes classifier"> naive Bayes classifier</a>, <a href="https://publications.waset.org/abstracts/search?q=SVM%20classifier" title=" SVM classifier"> SVM classifier</a> </p> <a href="https://publications.waset.org/abstracts/3939/comparing-svm-and-naive-bayes-classifier-for-automatic-microaneurysm-detections" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/3939.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">328</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">10728</span> Multi-Sensor Target Tracking Using Ensemble Learning</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Bhekisipho%20Twala">Bhekisipho Twala</a>, <a href="https://publications.waset.org/abstracts/search?q=Mantepu%20Masetshaba"> Mantepu Masetshaba</a>, <a href="https://publications.waset.org/abstracts/search?q=Ramapulana%20Nkoana"> Ramapulana Nkoana</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Multiple classifier systems combine several individual classifiers to deliver a final classification 
decision. However, an increasingly controversial question is whether such systems can outperform the single best classifier and, if so, what form of multiple classifier system yields the greatest benefit. Multi-target detection and tracking using multiple sensors is also an important research field in mobile techniques and military applications. In this paper, several multiple classifier systems are evaluated in terms of their ability to predict a system’s failure or success on multi-sensor target tracking tasks. The Bristol Eden project dataset is utilised for this task. Experimental and simulation results show that the human activity identification system can fulfill the requirements of target tracking: multiple classifier systems constructed using boosting improve sensor classification performance and achieve higher accuracy rates. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=single%20classifier" title="single classifier">single classifier</a>, <a href="https://publications.waset.org/abstracts/search?q=ensemble%20learning" title=" ensemble learning"> ensemble learning</a>, <a href="https://publications.waset.org/abstracts/search?q=multi-target%20tracking" title=" multi-target tracking"> multi-target tracking</a>, <a href="https://publications.waset.org/abstracts/search?q=multiple%20classifiers" title=" multiple classifiers"> multiple classifiers</a> </p> <a href="https://publications.waset.org/abstracts/140816/multi-sensor-target-tracking-using-ensemble-learning" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/140816.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">268</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge
badge-info">10727</span> Measuring Multi-Class Linear Classifier for Image Classification</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Fatma%20Susilawati%20Mohamad">Fatma Susilawati Mohamad</a>, <a href="https://publications.waset.org/abstracts/search?q=Azizah%20Abdul%20Manaf"> Azizah Abdul Manaf</a>, <a href="https://publications.waset.org/abstracts/search?q=Fadhillah%20Ahmad"> Fadhillah Ahmad</a>, <a href="https://publications.waset.org/abstracts/search?q=Zarina%20Mohamad"> Zarina Mohamad</a>, <a href="https://publications.waset.org/abstracts/search?q=Wan%20Suryani%20Wan%20Awang"> Wan Suryani Wan Awang</a> </p> <p class="card-text"><strong>Abstract:</strong></p> A simple and robust multi-class linear classifier is proposed and implemented. For each pair of classes, the linear boundary is a collection of hyperplane segments created as perpendicular bisectors of the line segments linking the centroids of the classes or of parts of classes. Nearest Neighbor and Linear Discriminant Analysis are compared in the experiments to assess the performance of each classifier in discriminating the ripeness of oil palm. This paper proposes a multi-class linear classifier using Linear Discriminant Analysis (LDA) for image identification. Results show that LDA is well capable of separating multi-class features for ripeness identification. 
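Labelling a point by its nearest class centroid realises exactly the boundary geometry described above, since the decision surface between two centroids is the perpendicular bisector of the segment joining them; the following minimal sketch (not the authors' implementation) illustrates this:

```python
import numpy as np

def nearest_centroid_predict(X_train, y_train, X_test):
    """Piecewise-linear classification: each test point gets the label
    of its nearest class centroid, so every pairwise boundary is the
    perpendicular bisector of the segment linking two centroids."""
    classes = np.unique(y_train)
    centroids = np.array([X_train[y_train == c].mean(axis=0) for c in classes])
    # Distance of every test point to every centroid, then argmin.
    d = np.linalg.norm(X_test[:, None, :] - centroids[None, :, :], axis=2)
    return classes[np.argmin(d, axis=1)]

X = np.array([[0.0, 0.0], [0.0, 1.0], [5.0, 5.0], [5.0, 6.0]])
y = np.array([0, 0, 1, 1])
print(nearest_centroid_predict(X, y, np.array([[0.0, 0.5], [5.0, 5.5]])))  # [0 1]
```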
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=multi-class" title="multi-class">multi-class</a>, <a href="https://publications.waset.org/abstracts/search?q=linear%20classifier" title=" linear classifier"> linear classifier</a>, <a href="https://publications.waset.org/abstracts/search?q=nearest%20neighbor" title=" nearest neighbor"> nearest neighbor</a>, <a href="https://publications.waset.org/abstracts/search?q=linear%20discriminant%20analysis" title=" linear discriminant analysis"> linear discriminant analysis</a> </p> <a href="https://publications.waset.org/abstracts/51310/measuring-multi-class-linear-classifier-for-image-classification" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/51310.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">538</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">10726</span> Using Machine Learning to Build a Real-Time COVID-19 Mask Safety Monitor</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Yash%20Jain">Yash Jain</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The US Center for Disease Control has recommended wearing masks to slow the spread of the virus. The research uses a video feed from a camera to conduct real-time classifications of whether or not a human is correctly wearing a mask, incorrectly wearing a mask, or not wearing a mask at all. Utilizing two distinct datasets from the open-source website Kaggle, a mask detection network had been trained. 
The first dataset, titled 'Face Mask Detection', was retrieved from Kaggle, and the second, titled 'Face Mask Dataset (YOLO Format)', provided its data in the YOLO format so that the TinyYoloV3 model could be trained. Based on the data from Kaggle, two machine learning models were implemented and trained: a TinyYoloV3 real-time model and a two-stage neural network classifier. The two-stage classifier first identifies the distinct faces within the image, then classifies the state of the mask on each face: worn correctly, worn incorrectly, or no mask at all. The TinyYoloV3 model was used for the live feed as well as for comparison against the two-stage classifier, and was trained using the Darknet neural network framework. The two-stage classifier attained a mean average precision (mAP) of 80%, while the TinyYoloV3 real-time detector attained an mAP of 59%. Overall, both models were able to correctly classify the no-mask, mask, and incorrectly worn mask scenarios. 
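The two-stage control flow can be sketched independently of any particular networks; the stubs below stand in for the face detector and the mask-state classifier, and all names are our own:

```python
def two_stage_mask_pipeline(frame, detect_faces, classify_mask):
    """Stage 1: locate face boxes in the frame. Stage 2: classify each
    cropped face as 'mask', 'incorrect', or 'no_mask'. A real system
    would plug in a detection network and a CNN classifier here."""
    results = []
    for (x0, y0, x1, y1) in detect_faces(frame):
        crop = [row[x0:x1] for row in frame[y0:y1]]  # crop the face region
        results.append(((x0, y0, x1, y1), classify_mask(crop)))
    return results

# Stub stages so the control flow can be exercised without a camera.
frame = [[0] * 4 for _ in range(4)]
detect = lambda f: [(0, 0, 2, 2)]
classify = lambda crop: "mask"
print(two_stage_mask_pipeline(frame, detect, classify))  # [((0, 0, 2, 2), 'mask')]
```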
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=datasets" title="datasets">datasets</a>, <a href="https://publications.waset.org/abstracts/search?q=classifier" title=" classifier"> classifier</a>, <a href="https://publications.waset.org/abstracts/search?q=mask-detection" title=" mask-detection"> mask-detection</a>, <a href="https://publications.waset.org/abstracts/search?q=real-time" title=" real-time"> real-time</a>, <a href="https://publications.waset.org/abstracts/search?q=TinyYoloV3" title=" TinyYoloV3"> TinyYoloV3</a>, <a href="https://publications.waset.org/abstracts/search?q=two-stage%20neural%20network%20classifier" title=" two-stage neural network classifier"> two-stage neural network classifier</a> </p> <a href="https://publications.waset.org/abstracts/137207/using-machine-learning-to-build-a-real-time-covid-19-mask-safety-monitor" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/137207.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">161</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">10725</span> Human Action Recognition Using Variational Bayesian HMM with Dirichlet Process Mixture of Gaussian Wishart Emission Model</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Wanhyun%20Cho">Wanhyun Cho</a>, <a href="https://publications.waset.org/abstracts/search?q=Soonja%20Kang"> Soonja Kang</a>, <a href="https://publications.waset.org/abstracts/search?q=Sangkyoon%20Kim"> Sangkyoon Kim</a>, <a href="https://publications.waset.org/abstracts/search?q=Soonyoung%20Park"> Soonyoung Park</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In this paper, we 
present a human action recognition method using a variational Bayesian HMM with a Dirichlet process mixture (DPM) of Gaussian-Wishart emission models (GWEM). First, we define a Bayesian HMM based on the Dirichlet process, which allows an infinite number of Gaussian-Wishart components to support continuous emission observations. Second, we consider an efficient variational Bayesian inference method that can be applied to derive the posterior distribution of the hidden variables and model parameters for the proposed model from training data, and we then derive the predictive distribution that may be used to classify new actions. Third, the paper proposes a process for extracting appropriate spatio-temporal feature vectors that can be used to recognize a wide range of human behaviors from input video. Finally, we conduct experiments to evaluate the performance of the proposed method. The experimental results show that the presented method is more efficient at human action recognition than existing methods. 
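A concrete (and much simplified) view of the classification step: score a sequence under each action's HMM and pick the argmax. The discrete emissions below stand in for the paper's DPM of Gaussian-Wishart emissions, and all parameter values are illustrative:

```python
import numpy as np

def hmm_loglik(obs, pi, A, B):
    """Log-likelihood of a discrete observation sequence under an HMM,
    computed with the forward algorithm in log space. Classification
    picks the action model with the highest score."""
    logalpha = np.log(pi) + np.log(B[:, obs[0]])
    for o in obs[1:]:
        logalpha = (np.logaddexp.reduce(logalpha[:, None] + np.log(A), axis=0)
                    + np.log(B[:, o]))
    return np.logaddexp.reduce(logalpha)

pi = np.array([0.5, 0.5])
A = np.array([[0.9, 0.1], [0.1, 0.9]])          # sticky state transitions
B_walk = np.array([[0.9, 0.1], [0.8, 0.2]])     # 'walk' mostly emits symbol 0
B_wave = np.array([[0.2, 0.8], [0.1, 0.9]])     # 'wave' mostly emits symbol 1
obs = [0, 0, 1, 0]
scores = {"walk": hmm_loglik(obs, pi, A, B_walk),
          "wave": hmm_loglik(obs, pi, A, B_wave)}
print(max(scores, key=scores.get))  # -> walk
```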
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=human%20action%20recognition" title="human action recognition">human action recognition</a>, <a href="https://publications.waset.org/abstracts/search?q=Bayesian%20HMM" title=" Bayesian HMM"> Bayesian HMM</a>, <a href="https://publications.waset.org/abstracts/search?q=Dirichlet%20process%20mixture%20model" title=" Dirichlet process mixture model"> Dirichlet process mixture model</a>, <a href="https://publications.waset.org/abstracts/search?q=Gaussian-Wishart%20emission%20model" title=" Gaussian-Wishart emission model"> Gaussian-Wishart emission model</a>, <a href="https://publications.waset.org/abstracts/search?q=Variational%20Bayesian%20inference" title=" Variational Bayesian inference"> Variational Bayesian inference</a>, <a href="https://publications.waset.org/abstracts/search?q=prior%20distribution%20and%20approximate%20posterior%20distribution" title=" prior distribution and approximate posterior distribution"> prior distribution and approximate posterior distribution</a>, <a href="https://publications.waset.org/abstracts/search?q=KTH%20dataset" title=" KTH dataset"> KTH dataset</a> </p> <a href="https://publications.waset.org/abstracts/49713/human-action-recognition-using-variational-bayesian-hmm-with-dirichlet-process-mixture-of-gaussian-wishart-emission-model" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/49713.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">353</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">10724</span> Aristotle&#039;s Notion of Akratic Action through the Prism of Moral Psychology</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a 
href="https://publications.waset.org/abstracts/search?q=Manik%20Konch">Manik Konch</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Actions are generally evaluated from a moral point of view. Whether an action is praised or condemned, in all cases it involves the agent who performs it, and the agent is held morally responsible for bringing about the action. This paper attempts to explore Aristotle’s notion of action and its relation to moral development in response to modern philosophical moral psychology, particularly the distinction between voluntary, involuntary, and non-voluntary action in the Nicomachean Ethics, together with some basic problems from the perspective of moral psychology: the role of choice, moral responsibility, desire, and akrasia in an action. How does one perform a morally right action? Do virtue and character play any role in moral action? These problems are analyzed and interpreted in order to show that the Aristotelian theory of action contributes significantly to the philosophical study of moral psychology. In this connection, the paper juxtaposes Aristotle’s theory of action with responses from David Charles’s, John R. Searle’s, and Alfred Mele’s theorizations of action in the mechanism of human moral behavior. To address this problem, we consider how recent research in philosophical moral psychology can shed light on Aristotle’s ethics by focusing on his theory of action. We argue that desire alone is responsible for akratic action. According to Aristotle, desire is the primary source of action: it is both the starting point and the end point of an action. We therefore examine how desire can make a person incontinent and motivate such irrational actions, and whether there are any grounds on which such actions can be judged right or wrong; to evaluate an action, we need to examine its consequences. 
Thus, we discuss the relationship between akrasia and action from the perspective of contemporary moral psychologists and philosophers who are currently working on it. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=action" title="action">action</a>, <a href="https://publications.waset.org/abstracts/search?q=desire" title=" desire"> desire</a>, <a href="https://publications.waset.org/abstracts/search?q=moral%20psychology" title=" moral psychology"> moral psychology</a>, <a href="https://publications.waset.org/abstracts/search?q=Aristotle" title=" Aristotle"> Aristotle</a> </p> <a href="https://publications.waset.org/abstracts/55260/aristotles-notion-of-akratic-action-through-the-prism-of-moral-psychology" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/55260.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">260</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">10723</span> HRV Analysis Based Arrhythmic Beat Detection Using kNN Classifier</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Onder%20Yakut">Onder Yakut</a>, <a href="https://publications.waset.org/abstracts/search?q=Oguzhan%20Timus"> Oguzhan Timus</a>, <a href="https://publications.waset.org/abstracts/search?q=Emine%20Dogru%20Bolat"> Emine Dogru Bolat</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Heart diseases have a vital significance for human life and quality of life. Sudden death events can be prevented through early diagnosis and treatment. 
The electrical signal, acquired from the human body using non-invasive methods and showing the heart's activity, is called the electrocardiogram (ECG). The ECG signal is used by clinicians to follow the daily activity of the heart. Heart Rate Variability (HRV) is a physiological parameter describing the variation between heart beats. ECG data taken from the MIT-BIH Arrhythmia Database are used in the model employed in this study. The aim is to detect arrhythmic heart beats using features extracted from the HRV time-domain parameters. The developed model provides satisfactory performance, with ~89% accuracy, 91.7% sensitivity, and 85% specificity for the detection of arrhythmic beats. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=arrhythmic%20beat%20detection" title="arrhythmic beat detection">arrhythmic beat detection</a>, <a href="https://publications.waset.org/abstracts/search?q=ECG" title=" ECG"> ECG</a>, <a href="https://publications.waset.org/abstracts/search?q=HRV" title=" HRV"> HRV</a>, <a href="https://publications.waset.org/abstracts/search?q=kNN%20classifier" title=" kNN classifier"> kNN classifier</a> </p> <a href="https://publications.waset.org/abstracts/41219/hrv-analysis-based-arrhythmic-beat-detection-using-knn-classifier" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/41219.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">352</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">10722</span> Human Action Retrieval System Using Features Weight Updating Based Relevance Feedback Approach</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a
href="https://publications.waset.org/abstracts/search?q=Munaf%20Rashid">Munaf Rashid</a> </p> <p class="card-text"><strong>Abstract:</strong></p> For content-based human action retrieval systems, search accuracy is often inferior for two reasons: 1) global information pertaining to videos is totally ignored, and only low-level motion descriptors are considered significant features when matching the similarity between query and database videos, and 2) there is a semantic gap between the high-level user concept and the low-level visual features. Hence, in this paper, we propose a method that addresses these two issues, and in doing so this paper contributes in two ways. Firstly, we introduce a method that uses both global and local information in one framework for the action retrieval task. Secondly, to minimize the semantic gap, the user's concept is brought in through a features weight updating (FWU) relevance feedback (RF) approach. We use statistical characteristics to dynamically update the weights of the feature descriptors so that after every RF iteration the feature space is modified accordingly. For testing and validation purposes, two human action recognition datasets have been utilized, namely Weizmann and UCF. Results show that, even with a number of visual challenges, the proposed approach performs well. 
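One standard statistical weighting rule of this kind (our illustrative choice; the paper's exact update may differ) re-weights each descriptor dimension by the inverse standard deviation over the results the user marked relevant:

```python
import numpy as np

def update_feature_weights(relevant_feats, eps=1e-6):
    """Dimensions on which the relevant videos agree (low spread) get
    large weights, so they dominate the next retrieval round's metric."""
    w = 1.0 / (relevant_feats.std(axis=0) + eps)
    return w / w.sum()  # normalise to a convex combination

def weighted_distance(query, item, w):
    """Weighted Euclidean distance used for the re-ranked search."""
    return float(np.sqrt(np.sum(w * (query - item) ** 2)))

# Relevant results agree on dimension 0 but disagree on dimension 1 ...
rel = np.array([[1.0, 0.2], [1.0, 0.9], [1.0, 0.5]])
w = update_feature_weights(rel)
print(w[0] > w[1])  # -> True: dimension 0 now dominates the metric
```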
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=relevance%20feedback%20%28RF%29" title="relevance feedback (RF)">relevance feedback (RF)</a>, <a href="https://publications.waset.org/abstracts/search?q=action%20retrieval" title=" action retrieval"> action retrieval</a>, <a href="https://publications.waset.org/abstracts/search?q=semantic%20gap" title=" semantic gap"> semantic gap</a>, <a href="https://publications.waset.org/abstracts/search?q=feature%20descriptor" title=" feature descriptor"> feature descriptor</a>, <a href="https://publications.waset.org/abstracts/search?q=codebook" title=" codebook"> codebook</a> </p> <a href="https://publications.waset.org/abstracts/41740/human-action-retrieval-system-using-features-weight-updating-based-relevance-feedback-approach" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/41740.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">472</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">10721</span> Classifier for Liver Ultrasound Images</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Soumya%20Sajjan">Soumya Sajjan</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Liver cancer is the most common cancer disease worldwide in men and women, and is one of the few cancers still on the rise. Liver disease is the 4th leading cause of death. According to new NHS (National Health Service) figures, deaths from liver diseases have reached record levels, rising by 25% in less than a decade; heavy drinking, obesity, and hepatitis are believed to be behind the rise. 
In this study, we focus on the development of a diagnostic classifier for ultrasound liver lesions. Ultrasound (US) sonography is an easy-to-use and widely popular imaging modality because of its ability to visualize many human soft tissues and organs without any harmful effects. This paper provides an overview of the underlying concepts, along with algorithms for processing liver ultrasound images. Naturally, ultrasound liver lesion images contain considerable speckle noise, so developing a classifier for them is a challenging task. We approach it with a fully automatic machine learning system. First, we segment the liver image and calculate textural features from the co-occurrence matrix and the run-length method. For classification, a Support Vector Machine is used, based on the risk bounds of statistical learning theory. The textural features from the different feature methods are given as input to the SVM individually. Performance analysis on the training and test datasets is carried out separately using the SVM model. Whenever an ultrasonic liver lesion image is given to the SVM classifier system, its features are calculated and the image is classified as a normal or diseased liver lesion. We hope the result will help physicians identify liver cancer non-invasively. 
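To make the texture step concrete, here is a minimal grey-level co-occurrence matrix and one classic statistic derived from it (a sketch of the general technique, not the authors' feature set):

```python
import numpy as np

def glcm(img, dx=1, dy=0, levels=8):
    """Normalised grey-level co-occurrence matrix for one pixel offset;
    statistics of this matrix (contrast, homogeneity, ...) become the
    texture features fed to the SVM."""
    g = np.zeros((levels, levels))
    h, w = img.shape
    for y in range(h - dy):
        for x in range(w - dx):
            g[img[y, x], img[y + dy, x + dx]] += 1
    return g / g.sum()

def contrast(g):
    """Sum of squared grey-level differences, weighted by co-occurrence."""
    i, j = np.indices(g.shape)
    return float(np.sum(g * (i - j) ** 2))

flat = np.zeros((4, 4), dtype=int)   # uniform region: no texture
stripes = np.tile([0, 7], (4, 2))    # alternating columns: strong texture
print(contrast(glcm(flat)))          # a flat image has zero contrast
print(contrast(glcm(stripes)))       # stripes give a large contrast value
```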
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=segmentation" title="segmentation">segmentation</a>, <a href="https://publications.waset.org/abstracts/search?q=Support%20Vector%20Machine" title=" Support Vector Machine"> Support Vector Machine</a>, <a href="https://publications.waset.org/abstracts/search?q=ultrasound%20liver%20lesion" title=" ultrasound liver lesion"> ultrasound liver lesion</a>, <a href="https://publications.waset.org/abstracts/search?q=co-occurance%20Matrix" title=" co-occurance Matrix"> co-occurance Matrix</a> </p> <a href="https://publications.waset.org/abstracts/10244/classifier-for-liver-ultrasound-images" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/10244.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">411</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">10720</span> A Comparative Study of k-NN and MLP-NN Classifiers Using GA-kNN Based Feature Selection Method for Wood Recognition System</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Uswah%20Khairuddin">Uswah Khairuddin</a>, <a href="https://publications.waset.org/abstracts/search?q=Rubiyah%20Yusof"> Rubiyah Yusof</a>, <a href="https://publications.waset.org/abstracts/search?q=Nenny%20Ruthfalydia%20Rosli"> Nenny Ruthfalydia Rosli</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This paper presents a comparative study between k-Nearest Neighbour (k-NN) and Multi-Layer Perceptron Neural Network (MLP-NN) classifier using Genetic Algorithm (GA) as feature selector for wood recognition system. 
The features have been extracted from the images using the Grey Level Co-Occurrence Matrix (GLCM). The GA-based feature selection mainly ensures that the database used for training the wood species pattern classifier consists of only optimized features. The feature selection process aims to select only the most discriminating features of the wood species, reducing confusion for the pattern classifier; it maintains the ‘good’ features, those that minimize the intra-class distance and maximize the inter-class distance. A wrapper GA is used with the k-NN classifier as fitness evaluator (GA-kNN). The results show that k-NN is the best choice of classifier because it uses a very simple distance-calculation algorithm, and classification tasks can be done in a short time with good classification accuracy. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=feature%20selection" title="feature selection">feature selection</a>, <a href="https://publications.waset.org/abstracts/search?q=genetic%20algorithm" title=" genetic algorithm"> genetic algorithm</a>, <a href="https://publications.waset.org/abstracts/search?q=optimization" title=" optimization"> optimization</a>, <a href="https://publications.waset.org/abstracts/search?q=wood%20recognition%20system" title=" wood recognition system "> wood recognition system </a> </p> <a href="https://publications.waset.org/abstracts/25573/a-comparative-study-of-k-nn-and-mlp-nn-classifiers-using-ga-knn-based-feature-selection-method-for-wood-recognition-system" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/25573.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">545</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header"
style="font-size:.9rem"><span class="badge badge-info">10719</span> Novel Inference Algorithm for Gaussian Process Classification Model with Multiclass and Its Application to Human Action Classification</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Wanhyun%20Cho">Wanhyun Cho</a>, <a href="https://publications.waset.org/abstracts/search?q=Soonja%20Kang"> Soonja Kang</a>, <a href="https://publications.waset.org/abstracts/search?q=Sangkyoon%20Kim"> Sangkyoon Kim</a>, <a href="https://publications.waset.org/abstracts/search?q=Soonyoung%20Park"> Soonyoung Park</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In this paper, we propose a novel inference algorithm for the multi-class Gaussian process classification model that can be used in the field of human behavior recognition. This algorithm can simultaneously derive both the posterior distribution of a latent function and estimators of the hyper-parameters in a multi-class Gaussian process classification model. Our algorithm is based on the Laplace approximation (LA) technique and the variational EM framework, and is performed in two steps, called the expectation and maximization steps. First, in the expectation step, using Bayes' formula and the LA technique, we approximately derive the posterior distribution of the latent function indicating the possibility that each observation belongs to a certain class. Second, in the maximization step, using the derived posterior distribution of the latent function, we compute the maximum likelihood estimator of the hyper-parameters of the covariance matrix needed to define the prior distribution of the latent function. These two steps are repeated iteratively until a convergence condition is satisfied. Moreover, we apply the proposed algorithm to a human action classification problem using a public database, namely, the KTH human action data set. 
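For intuition, the mode-finding at the heart of the Laplace approximation can be shown for the binary case with a logistic link, as a standard stable Newton iteration (a sketch of the general technique, not the paper's multi-class variational EM):

```python
import numpy as np

def laplace_mode(K, y, iters=20):
    """Newton iterations for the posterior mode of a binary GP
    classifier with logistic likelihood; y has entries in {-1, +1}
    and K is the covariance (Gram) matrix of the inputs."""
    f = np.zeros(len(y))
    for _ in range(iters):
        pi = 1.0 / (1.0 + np.exp(-f))   # predicted P(y = +1)
        t = (y + 1) / 2.0               # targets mapped to {0, 1}
        W = pi * (1.0 - pi)             # diagonal of the likelihood Hessian
        b = W * f + (t - pi)            # Newton right-hand side
        B = np.eye(len(y)) + np.sqrt(W)[:, None] * K * np.sqrt(W)[None, :]
        a = b - np.sqrt(W) * np.linalg.solve(B, np.sqrt(W) * (K @ b))
        f = K @ a                       # updated mode estimate
    return f

K = np.eye(3)                  # toy covariance: independent points
y = np.array([1, 1, -1])
f_hat = laplace_mode(K, y)
print(np.sign(f_hat))          # the mode agrees in sign with the labels
```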
Experimental results reveal that the proposed algorithm shows good performance on this data set. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=bayesian%20rule" title="bayesian rule">bayesian rule</a>, <a href="https://publications.waset.org/abstracts/search?q=gaussian%20process%20classification%20model%20with%20multiclass" title=" gaussian process classification model with multiclass"> gaussian process classification model with multiclass</a>, <a href="https://publications.waset.org/abstracts/search?q=gaussian%20process%20prior" title=" gaussian process prior"> gaussian process prior</a>, <a href="https://publications.waset.org/abstracts/search?q=human%20action%20classification" title=" human action classification"> human action classification</a>, <a href="https://publications.waset.org/abstracts/search?q=laplace%20approximation" title=" laplace approximation"> laplace approximation</a>, <a href="https://publications.waset.org/abstracts/search?q=variational%20EM%20algorithm" title=" variational EM algorithm"> variational EM algorithm</a> </p> <a href="https://publications.waset.org/abstracts/34103/novel-inference-algorithm-for-gaussian-process-classification-model-with-multiclass-and-its-application-to-human-action-classification" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/34103.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">334</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">10718</span> Technology Impact on the Challenge between Human Rights and Cyber Terrorism</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a 
href="https://publications.waset.org/abstracts/search?q=Abanoub%20Zare%20Zakaria%20Herzalla">Abanoub Zare Zakaria Herzalla</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The link between terrorism and human rights has become a major challenge in the fight against terrorism around the world. This is based on the fact that terrorism and human rights are so closely linked that when the former starts, the latter are violated. This direct connection was recognized in the Vienna Declaration and Program of Action adopted by the World Conference on Human Rights in Vienna on June 25, 1993, which recognizes that acts of terrorism in all their forms and manifestations aim to destroy the human rights of people. Terrorism therefore represents an attack on our most basic human rights. To this end, the first part of this article focuses on the connections between terrorism and human rights and seeks to highlight the interdependence between these two concepts. The second part discusses the emerging concept of cyberterrorism and its manifestations. An analysis of the fight against cyberterrorism in the context of human rights is also carried out. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=sustainable%20development" title="sustainable development">sustainable development</a>, <a href="https://publications.waset.org/abstracts/search?q=human%20rights" title=" human rights"> human rights</a>, <a href="https://publications.waset.org/abstracts/search?q=the%20right%20to%20development" title=" the right to development"> the right to development</a>, <a href="https://publications.waset.org/abstracts/search?q=the%20human%20rights-based%20approach%20to%20development" title=" the human rights-based approach to development"> the human rights-based approach to development</a>, <a href="https://publications.waset.org/abstracts/search?q=environmental%20rights" title=" environmental rights"> environmental rights</a>, <a href="https://publications.waset.org/abstracts/search?q=economic%20development" title=" economic development"> economic development</a>, <a href="https://publications.waset.org/abstracts/search?q=social%20sustainability%20human%20rights%20protection" title=" social sustainability human rights protection"> social sustainability human rights protection</a>, <a href="https://publications.waset.org/abstracts/search?q=human%20rights%20violations" title=" human rights violations"> human rights violations</a>, <a href="https://publications.waset.org/abstracts/search?q=workers%E2%80%99%20rights" title=" workers’ rights"> workers’ rights</a>, <a href="https://publications.waset.org/abstracts/search?q=justice" title=" justice"> justice</a>, <a href="https://publications.waset.org/abstracts/search?q=security." 
title=" security."> security.</a> </p> <a href="https://publications.waset.org/abstracts/186036/technology-impact-on-the-challenge-between-human-rights-and-cyber-terrorism" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/186036.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">47</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">10717</span> The Impact of Artificial Intelligence on Human Rights Legislations and Evolution</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Shenouda%20Farag%20Aziz%20Ibrahim">Shenouda Farag Aziz Ibrahim</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The relationship between terrorism and human rights has become an important issue in the fight against terrorism worldwide. This is based on the fact that terrorism and human rights are closely linked, so that when the former begins, the latter suffers. This direct link was recognized in the Vienna Declaration and Programme of Action adopted by the World Conference on Human Rights held in Vienna on 25 June 1993, which recognized that terrorist acts aim to violate human rights in all their forms and manifestations. Therefore, terrorism represents an attack on fundamental human rights. For this purpose, the first part of this article focuses on the relationship between terrorism and human rights and aims to show the interdependence of these two concepts. In the second part, the concept of cyber threat and its manifestations are discussed. An analysis of the fight against terrorism in the context of human rights is also provided. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=sustainable%20development" title="sustainable development">sustainable development</a>, <a href="https://publications.waset.org/abstracts/search?q=human%20rights" title=" human rights"> human rights</a>, <a href="https://publications.waset.org/abstracts/search?q=the%20right%20to%20development" title=" the right to development"> the right to development</a>, <a href="https://publications.waset.org/abstracts/search?q=the%20human%20rights-based%20approach%20to%20development" title=" the human rights-based approach to development"> the human rights-based approach to development</a>, <a href="https://publications.waset.org/abstracts/search?q=environmental%20rights" title=" environmental rights"> environmental rights</a>, <a href="https://publications.waset.org/abstracts/search?q=economic%20development" title=" economic development"> economic development</a>, <a href="https://publications.waset.org/abstracts/search?q=social%20sustainability%20human%20rights%20protection" title=" social sustainability human rights protection"> social sustainability human rights protection</a>, <a href="https://publications.waset.org/abstracts/search?q=human%20rights%20violations" title=" human rights violations"> human rights violations</a>, <a href="https://publications.waset.org/abstracts/search?q=workers%E2%80%99%20rights" title=" workers’ rights"> workers’ rights</a>, <a href="https://publications.waset.org/abstracts/search?q=justice" title=" justice"> justice</a>, <a href="https://publications.waset.org/abstracts/search?q=security." 
title=" security."> security.</a> </p> <a href="https://publications.waset.org/abstracts/186496/the-impact-of-artificial-intelligence-on-human-rights-legislations-and-evolution" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/186496.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">38</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">10716</span> Smartphone-Based Human Activity Recognition by Machine Learning Methods</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Yanting%20Cao">Yanting Cao</a>, <a href="https://publications.waset.org/abstracts/search?q=Kazumitsu%20Nawata"> Kazumitsu Nawata</a> </p> <p class="card-text"><strong>Abstract:</strong></p> As smartphones are upgraded, their software and hardware become smarter, so smartphone-based human activity recognition can be described in more refined, complex, and detailed terms. In this context, we analyzed a set of experimental data obtained by observing and measuring 30 volunteers performing six activities of daily living (ADL). Due to the large data size, especially the 561-feature vector with time- and frequency-domain variables, cleaning these intractable features and training a proper model become extremely challenging. After a series of feature selection and parameter adjustments, a well-performing SVM classifier was trained. 
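The pipeline sketched in the abstract above, feature selection followed by an SVM classifier, might look as follows in scikit-learn. The authors' exact settings are not given, so the synthetic data (standing in for the 561-feature smartphone vectors), the ANOVA-based selector, and all hyperparameters below are illustrative assumptions, not the paper's method.

```python
# Minimal sketch: feature selection + SVM for activity recognition.
# Synthetic data stands in for the 561-feature, 6-class smartphone dataset;
# the selector and SVM hyperparameters are assumptions for illustration.
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.svm import SVC

# Synthetic stand-in: 6 activity classes, 561 features per sample.
X, y = make_classification(n_samples=600, n_features=561, n_informative=40,
                           n_classes=6, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

pipe = Pipeline([
    ("select", SelectKBest(f_classif, k=60)),  # prune the feature vector
    ("svm", SVC(kernel="rbf", C=10, gamma="scale")),
])
pipe.fit(X_tr, y_tr)
print(f"test accuracy: {pipe.score(X_te, y_te):.3f}")
```

In practice the selector's `k` and the SVM's `C` and `gamma` would be tuned by cross-validation, which is presumably what the abstract's "parameter adjustments" refer to.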
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=smart%20sensors" title="smart sensors">smart sensors</a>, <a href="https://publications.waset.org/abstracts/search?q=human%20activity%20recognition" title=" human activity recognition"> human activity recognition</a>, <a href="https://publications.waset.org/abstracts/search?q=artificial%20intelligence" title=" artificial intelligence"> artificial intelligence</a>, <a href="https://publications.waset.org/abstracts/search?q=SVM" title=" SVM"> SVM</a> </p> <a href="https://publications.waset.org/abstracts/142359/smartphone-based-human-activity-recognition-by-machine-learning-methods" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/142359.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">143</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">10715</span> The Nexus between Counter Terrorism and Human Rights with a Perspective on Cyber Terrorism</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Allan%20Munyao%20Mukuki">Allan Munyao Mukuki</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The nexus between terrorism and human rights has become a big challenge in the fight against terrorism globally. This is hinged on the fact that terrorism and human rights are interrelated to the extent that, when the former starts, the latter is violated. This direct linkage was recognised in the Vienna Declaration and Programme of Action as adopted by the World Conference on Human Rights in Vienna on 25 June 1993 which agreed that acts of terrorism in all its forms and manifestations are aimed at the destruction of human rights. 
Hence, terrorism constitutes an assault on our most basic human rights. To this end, the first part of this paper will focus on the nexus between terrorism and human rights and endeavors to draw a correlation between these two concepts. The second part thereafter will analyse the emerging concept of cyber-terrorism and how it takes place. Further, an analysis of cyber counter-terrorism balanced against human rights will also be undertaken. This will be done through the analysis of the concept of ‘securitisation’ of human rights as well as the need to create a balance between counter-terrorism efforts and the protection of human rights at all costs. The paper then concludes with recommendations on how to balance counter-terrorism and human rights in the modern age. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=balance" title="balance">balance</a>, <a href="https://publications.waset.org/abstracts/search?q=counter-terrorism" title=" counter-terrorism"> counter-terrorism</a>, <a href="https://publications.waset.org/abstracts/search?q=cyber-terrorism" title=" cyber-terrorism"> cyber-terrorism</a>, <a href="https://publications.waset.org/abstracts/search?q=human%20rights" title=" human rights"> human rights</a>, <a href="https://publications.waset.org/abstracts/search?q=security" title=" security"> security</a>, <a href="https://publications.waset.org/abstracts/search?q=violation" title=" violation"> violation</a> </p> <a href="https://publications.waset.org/abstracts/59276/the-nexus-between-counter-terrorism-and-human-rights-with-a-perspective-on-cyber-terrorism" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/59276.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">403</span> </span> </div> </div> <div class="card paper-listing mb-3 
mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">10714</span> Breast Cancer Survivability Prediction via Classifier Ensemble</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Mohamed%20Al-Badrashiny">Mohamed Al-Badrashiny</a>, <a href="https://publications.waset.org/abstracts/search?q=Abdelghani%20Bellaachia"> Abdelghani Bellaachia</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This paper presents a classifier ensemble approach for predicting the survivability of breast cancer patients using the latest database version of the Surveillance, Epidemiology, and End Results (SEER) Program of the National Cancer Institute. The system consists of two main components: a features selection component and a classifier ensemble component. The features selection component divides the features in the SEER database into four groups. After that, it tries to find the most important features among the four groups that maximize the weighted average F-score of a certain classification algorithm. The ensemble component uses three different classifiers, each of which models a different set of features from SEER through the features selection module. On top of them, another classifier is used to give the final decision based on the output decisions and confidence scores from each of the underlying classifiers. Different classification algorithms have been examined; the best setup found uses the decision tree, Bayesian network, and Na&iuml;ve Bayes algorithms for the underlying classifiers and Na&iuml;ve Bayes for the classifier ensemble step. The system outperforms all published systems to date when evaluated against the exact same data of SEER (period of 1973-2002). It gives an 87.39% weighted average F-score compared to 85.82% and 81.34% for the other published systems. 
By increasing the data size to cover the whole database (period of 1973-2014), the overall weighted average F-score jumps to 92.4% on the held out unseen test set. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=classifier%20ensemble" title="classifier ensemble">classifier ensemble</a>, <a href="https://publications.waset.org/abstracts/search?q=breast%20cancer%20survivability" title=" breast cancer survivability"> breast cancer survivability</a>, <a href="https://publications.waset.org/abstracts/search?q=data%20mining" title=" data mining"> data mining</a>, <a href="https://publications.waset.org/abstracts/search?q=SEER" title=" SEER"> SEER</a> </p> <a href="https://publications.waset.org/abstracts/42621/breast-cancer-survivability-prediction-via-classifier-ensemble" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/42621.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">328</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">10713</span> Human Dignity as a Source and Limitation of Personal Autonomy</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Jan%20Podkowik">Jan Podkowik</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The article discusses issues of mutual relationships of human dignity and personal autonomy. According to constitutions of many countries and international human rights law, human dignity is a fundamental and inviolable value. It is the source of all freedoms and rights, including personal autonomy. 
Human dignity, as an inherent, inalienable and non-gradable value comprising an attribute of all people, justifies freedom of action according to one's will and following one's vision of a good life. On the other hand, human dignity imposes immanent restrictions on personal autonomy, for example regarding decisions on commercialization of one’s body. This points to the paradox of dignity: it is at once the source of freedom and the basic condition of its limitations. The paper presents the theoretical concept of human dignity as an objective value among legal systems, determining the boundaries of legal protection of personal autonomy. It is therefore not correct to perceive human dignity and freedom as opposing values. The reference points are the normative provisions of the Polish Constitution and the European Convention on Human Rights and Fundamental Freedoms, as well as judgments of constitutional courts. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=autonomy" title="autonomy">autonomy</a>, <a href="https://publications.waset.org/abstracts/search?q=constitution" title=" constitution"> constitution</a>, <a href="https://publications.waset.org/abstracts/search?q=human%20dignity" title=" human dignity"> human dignity</a>, <a href="https://publications.waset.org/abstracts/search?q=human%20rights" title=" human rights"> human rights</a> </p> <a href="https://publications.waset.org/abstracts/76031/human-dignity-as-a-source-and-limitation-of-personal-autonomy" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/76031.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">299</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">10712</span> Using Classifiers to Predict 
Student Outcome at Higher Institute of Telecommunication</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Fuad%20M.%20Alkoot">Fuad M. Alkoot</a> </p> <p class="card-text"><strong>Abstract:</strong></p> We aim to highlight the benefits of classifier systems, especially in supporting educational management decisions. The paper uses classifiers in an educational application where an outcome is predicted from given input parameters that represent various conditions at the institute. We present a classifier system that is designed using a limited training set, with data for only one semester. The resulting system reproduces previously known outcomes accurately. It is also tested on new input parameters representing variations of input conditions to see its prediction of the possible outcome value. Given the supervised expectation of the outcome for the new input, we find the system is able to predict the correct outcome. Experiments were conducted on one semester of data from two departments only, Switching and Mathematics. Future work on other departments with larger training sets and wider input variations will show additional benefits of classifier systems in supporting the management decisions at an educational institute. 
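The setup described above, a classifier trained on a very small one-semester training set and then queried with new input conditions, can be sketched as follows. The abstract does not specify the classifier or the inputs, so the decision-tree choice, the feature names, and the values below are invented purely for illustration.

```python
# Toy sketch only: a classifier fit on a handful of labeled records,
# mimicking the limited one-semester training set described above.
# The features (enrollment, staff count, lab hours) and outcomes are
# hypothetical; the paper does not name its inputs or algorithm.
from sklearn.tree import DecisionTreeClassifier

# Hypothetical institute conditions: [enrollment, staff_count, lab_hours]
X_train = [[120, 8, 10], [80, 5, 6], [150, 9, 12], [60, 4, 4]]
y_train = ["pass", "fail", "pass", "fail"]  # previously known outcomes

clf = DecisionTreeClassifier(random_state=0).fit(X_train, y_train)

# Query the trained system with a new, unseen set of input conditions.
print(clf.predict([[100, 7, 9]])[0])
```

With so few training samples the tree memorizes the known outcomes exactly, which matches the abstract's observation that the system "reaches previously known outcomes accurately"; generalization to genuinely new conditions would need the larger training sets the authors leave to future work.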
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=machine%20learning" title="machine learning">machine learning</a>, <a href="https://publications.waset.org/abstracts/search?q=pattern%20recognition" title=" pattern recognition"> pattern recognition</a>, <a href="https://publications.waset.org/abstracts/search?q=classifier%20design" title=" classifier design"> classifier design</a>, <a href="https://publications.waset.org/abstracts/search?q=educational%20management" title=" educational management"> educational management</a>, <a href="https://publications.waset.org/abstracts/search?q=outcome%20estimation" title=" outcome estimation"> outcome estimation</a> </p> <a href="https://publications.waset.org/abstracts/50309/using-classifiers-to-predict-student-outcome-at-higher-institute-of-telecommunication" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/50309.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">278</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">10711</span> Electromyography Pattern Classification with Laplacian Eigenmaps in Human Running</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Elnaz%20Lashgari">Elnaz Lashgari</a>, <a href="https://publications.waset.org/abstracts/search?q=Emel%20Demircan"> Emel Demircan</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Electromyography (EMG) is one of the most important interfaces between humans and robots for rehabilitation. Decoding this signal helps to recognize muscle activation and converts it into smooth motion for the robots. 
Detecting each muscle&rsquo;s pattern during walking and running is vital for improving the quality of a patient&rsquo;s life. In this study, EMG data from 10 muscles in 10 subjects at 4 different speeds were analyzed. EMG signals are nonlinear with high dimensionality. To deal with this challenge, we extracted features in the time-frequency domain and used manifold learning with the Laplacian Eigenmaps algorithm to find the intrinsic features that represent the data in a low-dimensional space. We then used the Bayesian classifier to identify various patterns of EMG signals for different muscles across a range of running speeds. The best result, obtained for the vastus medialis muscle, was 97.87&plusmn;0.69 sensitivity and 88.37&plusmn;0.79 specificity, with 97.07&plusmn;0.29 accuracy, using the Bayesian classifier. The results of this study provide important insight into human movement and its application for robotics research. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=electromyography" title="electromyography">electromyography</a>, <a href="https://publications.waset.org/abstracts/search?q=manifold%20learning" title=" manifold learning"> manifold learning</a>, <a href="https://publications.waset.org/abstracts/search?q=ISOMAP" title=" ISOMAP"> ISOMAP</a>, <a href="https://publications.waset.org/abstracts/search?q=Laplacian%20Eigenmaps" title=" Laplacian Eigenmaps"> Laplacian Eigenmaps</a>, <a href="https://publications.waset.org/abstracts/search?q=locally%20linear%20embedding" title=" locally linear embedding"> locally linear embedding</a> </p> <a href="https://publications.waset.org/abstracts/61632/electromyography-pattern-classification-with-laplacian-eigenmaps-in-human-running" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/61632.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads 
<span class="badge badge-light">361</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">10710</span> Random Subspace Neural Classifier for Meteor Recognition in the Night Sky </h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Carlos%20Vera">Carlos Vera</a>, <a href="https://publications.waset.org/abstracts/search?q=Tetyana%20Baydyk"> Tetyana Baydyk</a>, <a href="https://publications.waset.org/abstracts/search?q=Ernst%20Kussul"> Ernst Kussul</a>, <a href="https://publications.waset.org/abstracts/search?q=Graciela%20Velasco"> Graciela Velasco</a>, <a href="https://publications.waset.org/abstracts/search?q=Miguel%20Aparicio"> Miguel Aparicio</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This article describes the Random Subspace Neural Classifier (RSC) for the recognition of meteors in the night sky. We used images of meteors entering the atmosphere at night between 8:00 p.m. and 5:00 a.m. The objective of this project is to classify meteor and star images (with stars as the image background). The monitoring of the sky and the classification of meteors are carried out for future scientific applications. The image database was collected from different websites. We worked with RGB-type images with dimensions of 220x220 pixels stored in bitmap (BMP) format. Subsequent window scanning and processing were carried out for each image. The scan window from which the characteristics were extracted had a size of 20x20 pixels, with a scanning step of 10 pixels. Brightness, contrast and contour orientation histograms were used as inputs for the RSC. The RSC worked with two classes: 1) with meteors and 2) without meteors. Different tests were carried out by varying the number of training cycles and the number of images for training and recognition. 
The percentage error for the neural classifier was calculated. The results show a good RSC classifier response with 89% correct recognition. The results of these experiments are presented and discussed. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=contour%20orientation%20histogram" title="contour orientation histogram">contour orientation histogram</a>, <a href="https://publications.waset.org/abstracts/search?q=meteors" title=" meteors"> meteors</a>, <a href="https://publications.waset.org/abstracts/search?q=night%20sky" title=" night sky"> night sky</a>, <a href="https://publications.waset.org/abstracts/search?q=RSC%20neural%20classifier" title=" RSC neural classifier"> RSC neural classifier</a>, <a href="https://publications.waset.org/abstracts/search?q=stars" title=" stars "> stars </a> </p> <a href="https://publications.waset.org/abstracts/136153/random-subspace-neural-classifier-for-meteor-recognition-in-the-night-sky" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/136153.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">138</span> </span> </div> </div> <ul class="pagination"> <li class="page-item disabled"><span class="page-link">&lsaquo;</span></li> <li class="page-item active"><span class="page-link">1</span></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=human%20action%20classifier&amp;page=2">2</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=human%20action%20classifier&amp;page=3">3</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=human%20action%20classifier&amp;page=4">4</a></li> <li class="page-item"><a class="page-link" 
href="https://publications.waset.org/abstracts/search?q=human%20action%20classifier&amp;page=5">5</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=human%20action%20classifier&amp;page=6">6</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=human%20action%20classifier&amp;page=7">7</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=human%20action%20classifier&amp;page=8">8</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=human%20action%20classifier&amp;page=9">9</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=human%20action%20classifier&amp;page=10">10</a></li> <li class="page-item disabled"><span class="page-link">...</span></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=human%20action%20classifier&amp;page=357">357</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=human%20action%20classifier&amp;page=358">358</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=human%20action%20classifier&amp;page=2" rel="next">&rsaquo;</a></li> </ul> </div> </main> <footer> <div id="infolinks" class="pt-3 pb-2"> <div class="container"> <div style="background-color:#f5f5f5;" class="p-3"> <div class="row"> <div class="col-md-2"> <ul class="list-unstyled"> About <li><a href="https://waset.org/page/support">About Us</a></li> <li><a href="https://waset.org/page/support#legal-information">Legal</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/WASET-16th-foundational-anniversary.pdf">WASET celebrates its 16th foundational anniversary</a></li> </ul> </div> <div class="col-md-2"> <ul 
class="list-unstyled"> Account <li><a href="https://waset.org/profile">My Account</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Explore <li><a href="https://waset.org/disciplines">Disciplines</a></li> <li><a href="https://waset.org/conferences">Conferences</a></li> <li><a href="https://waset.org/conference-programs">Conference Program</a></li> <li><a href="https://waset.org/committees">Committees</a></li> <li><a href="https://publications.waset.org">Publications</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Research <li><a href="https://publications.waset.org/abstracts">Abstracts</a></li> <li><a href="https://publications.waset.org">Periodicals</a></li> <li><a href="https://publications.waset.org/archive">Archive</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Open Science <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Science-Philosophy.pdf">Open Science Philosophy</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Science-Award.pdf">Open Science Award</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Society-Open-Science-and-Open-Innovation.pdf">Open Innovation</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Postdoctoral-Fellowship-Award.pdf">Postdoctoral Fellowship Award</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Scholarly-Research-Review.pdf">Scholarly Research Review</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Support <li><a href="https://waset.org/page/support">Support</a></li> <li><a href="https://waset.org/profile/messages/create">Contact Us</a></li> <li><a href="https://waset.org/profile/messages/create">Report Abuse</a></li> </ul> </div> </div> </div> </div> </div> <div class="container text-center"> <hr 
style="margin-top:0;margin-bottom:.3rem;"> <a href="https://creativecommons.org/licenses/by/4.0/" target="_blank" class="text-muted small">Creative Commons Attribution 4.0 International License</a> <div id="copy" class="mt-2">&copy; 2024 World Academy of Science, Engineering and Technology</div> </div> </footer> <a href="javascript:" id="return-to-top"><i class="fas fa-arrow-up"></i></a> <div class="modal" id="modal-template"> <div class="modal-dialog"> <div class="modal-content"> <div class="row m-0 mt-1"> <div class="col-md-12"> <button type="button" class="close" data-dismiss="modal" aria-label="Close"><span aria-hidden="true">&times;</span></button> </div> </div> <div class="modal-body"></div> </div> </div> </div> <script src="https://cdn.waset.org/static/plugins/jquery-3.3.1.min.js"></script> <script src="https://cdn.waset.org/static/plugins/bootstrap-4.2.1/js/bootstrap.bundle.min.js"></script> <script src="https://cdn.waset.org/static/js/site.js?v=150220211556"></script> <script> jQuery(document).ready(function() { /*jQuery.get("https://publications.waset.org/xhr/user-menu", function (response) { jQuery('#mainNavMenu').append(response); });*/ jQuery.get({ url: "https://publications.waset.org/xhr/user-menu", cache: false }).then(function(response){ jQuery('#mainNavMenu').append(response); }); }); </script> </body> </html>
