Search results for: hand detection
href="https://publications.waset.org/abstracts">Abstracts</a> <a class="dropdown-item" href="https://publications.waset.org">Periodicals</a> <a class="dropdown-item" href="https://publications.waset.org/archive">Archive</a> </div> </li> <li class="nav-item"> <a class="nav-link" href="https://waset.org/page/support" title="Support">Support</a> </li> </ul> </div> </div> </nav> </div> </header> <main> <div class="container mt-4"> <div class="row"> <div class="col-md-9 mx-auto"> <form method="get" action="https://publications.waset.org/abstracts/search"> <div id="custom-search-input"> <div class="input-group"> <i class="fas fa-search"></i> <input type="text" class="search-query" name="q" placeholder="Author, Title, Abstract, Keywords" value="hand detection"> <input type="submit" class="btn_search" value="Search"> </div> </div> </form> </div> </div> <div class="row mt-3"> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Commenced</strong> in January 2007</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Frequency:</strong> Monthly</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Edition:</strong> International</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Paper Count:</strong> 7038</div> </div> </div> </div> <h1 class="mt-3 mb-3 text-center" style="font-size:1.6rem;">Search results for: hand detection</h1> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">7038</span> Hand Detection and Recognition for Malay Sign Language</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Mohd%20Noah%20A.%20Rahman">Mohd Noah A. Rahman</a>, <a href="https://publications.waset.org/abstracts/search?q=Afzaal%20H.%20Seyal"> Afzaal H. Seyal</a>, <a href="https://publications.waset.org/abstracts/search?q=Norhafilah%20Bara"> Norhafilah Bara</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Developing a software application using an interface with computers and peripheral devices using gestures of human body such as hand movements keeps growing in interest. A review on this hand gesture detection and recognition based on computer vision technique remains a very challenging task. This is to provide more natural, innovative and sophisticated way of non-verbal communication, such as sign language, in human computer interaction. Nevertheless, this paper explores hand detection and hand gesture recognition applying a vision based approach. The hand detection and recognition used skin color spaces such as HSV and YCrCb are applied. However, there are limitations that are needed to be considered. Almost all of skin color space models are sensitive to quickly changing or mixed lighting circumstances. There are certain restrictions in order for the hand recognition to give better results such as the distance of user’s hand to the webcam and the posture and size of the hand. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=hand%20detection" title="hand detection">hand detection</a>, <a href="https://publications.waset.org/abstracts/search?q=hand%20gesture" title=" hand gesture"> hand gesture</a>, <a href="https://publications.waset.org/abstracts/search?q=hand%20recognition" title=" hand recognition"> hand recognition</a>, <a href="https://publications.waset.org/abstracts/search?q=sign%20language" title=" sign language"> sign language</a> </p> <a href="https://publications.waset.org/abstracts/46765/hand-detection-and-recognition-for-malay-sign-language" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/46765.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">306</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">7037</span> Hand Gesture Detection via EmguCV Canny Pruning</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=N.%20N.%20Mosola">N. N. Mosola</a>, <a href="https://publications.waset.org/abstracts/search?q=S.%20J.%20Molete"> S. J. Molete</a>, <a href="https://publications.waset.org/abstracts/search?q=L.%20S.%20Masoebe"> L. S. Masoebe</a>, <a href="https://publications.waset.org/abstracts/search?q=M.%20Letsae"> M. Letsae</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Hand gesture recognition is a technique used to locate, detect, and recognize a hand gesture. Detection and recognition are concepts of Artificial Intelligence (AI). AI concepts are applicable in Human Computer Interaction (HCI), Expert systems (ES), etc. Hand gesture recognition can be used in sign language interpretation. Sign language is a visual communication tool. This tool is used mostly by deaf societies and those with speech disorder. Communication barriers exist when societies with speech disorder interact with others. This research aims to build a hand recognition system for Lesotho’s Sesotho and English language interpretation. The system will help to bridge the communication problems encountered by the mentioned societies. The system has various processing modules. The modules consist of a hand detection engine, image processing engine, feature extraction, and sign recognition. Detection is a process of identifying an object. The proposed system uses Canny pruning Haar and Haarcascade detection algorithms. Canny pruning implements the Canny edge detection. This is an optimal image processing algorithm. It is used to detect edges of an object. The system employs a skin detection algorithm. The skin detection performs background subtraction, computes the convex hull, and the centroid to assist in the detection process. Recognition is a process of gesture classification. Template matching classifies each hand gesture in real-time. The system was tested using various experiments. The results obtained show that time, distance, and light are factors that affect the rate of detection and ultimately recognition. Detection rate is directly proportional to the distance of the hand from the camera. Different lighting conditions were considered. The more the light intensity, the faster the detection rate. 
7036. Burnout Recognition for Call Center Agents by Using Skin Color Detection with Hand Poses
Authors: El Sayed A. Sharara, A. Tsuji, K. Terada
Abstract: Call centers have been expanding, and their influence on various markets keeps increasing; call center work is known as one of the most demanding and stressful jobs. In this paper, we propose a fatigue detection system that detects burnout of call center agents in cases of neck pain and upper back pain. The proposed system is based on computer vision, combining skin color detection with the Viola-Jones object detector. To recognize hand poses caused by stress signs, the YCbCr color space is used to detect the skin color region, including the face and the hand poses around the areas related to neck ache and upper back pain. A cascade of classifiers (Viola-Jones) performs face recognition to separate the face from the skin color region, and hand poses are then detected and evaluated for neck pain and upper back pain using the skin color detection and face recognition results. System performance is evaluated on two groups of datasets created in the laboratory to simulate a call center environment. Our burnout detection system was implemented with a web camera and processed in MATLAB. In the experiments, the system achieved 96.3% for upper back pain detection and 94.2% for neck pain detection.
Keywords: call center agents, fatigue, skin color detection, face recognition
Procedia: https://publications.waset.org/abstracts/74913/burnout-recognition-for-call-center-agents-by-using-skin-color-detection-with-hand-poses (PDF downloads: 294)
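The pipeline of this abstract, a YCbCr skin mask plus Viola-Jones face detection so that the remaining skin blobs correspond to hand poses, can be sketched in Python/OpenCV as follows (the paper itself uses MATLAB, and the YCbCr range here is an assumption):

```python
import cv2
import numpy as np

def hand_pose_mask(frame_bgr):
    """YCbCr skin mask with the Viola-Jones face box zeroed out."""
    ycrcb = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2YCrCb)
    mask = cv2.inRange(ycrcb, np.array((0, 135, 85), np.uint8),
                       np.array((255, 180, 135), np.uint8))
    face_cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in face_cascade.detectMultiScale(gray, 1.1, 4):
        mask[y:y + h, x:x + w] = 0     # remove the face; hands remain
    return mask
```

Hands resting near the neck or upper back would then appear as skin blobs in characteristic positions relative to the removed face box.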
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=call%20center%20agents" title="call center agents">call center agents</a>, <a href="https://publications.waset.org/abstracts/search?q=fatigue" title=" fatigue"> fatigue</a>, <a href="https://publications.waset.org/abstracts/search?q=skin%20color%20detection" title=" skin color detection"> skin color detection</a>, <a href="https://publications.waset.org/abstracts/search?q=face%20recognition" title=" face recognition"> face recognition</a> </p> <a href="https://publications.waset.org/abstracts/74913/burnout-recognition-for-call-center-agents-by-using-skin-color-detection-with-hand-poses" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/74913.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">294</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">7035</span> Vision-Based Hand Segmentation Techniques for Human-Computer Interaction</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=M.%20Jebali">M. Jebali</a>, <a href="https://publications.waset.org/abstracts/search?q=M.%20Jemni"> M. Jemni</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This work is the part of vision based hand gesture recognition system for Natural Human Computer Interface. Hand tracking and segmentation are the primary steps for any hand gesture recognition system. The aim of this paper is to develop robust and efficient hand segmentation algorithm such as an input to another system which attempt to bring the HCI performance nearby the human-human interaction, by modeling an intelligent sign language recognition system based on prediction in the context of dialogue between the system (avatar) and the interlocutor. For the purpose of hand segmentation, an overcoming occlusion approach has been proposed for superior results for detection of hand from an image. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=HCI" title="HCI">HCI</a>, <a href="https://publications.waset.org/abstracts/search?q=sign%20language%20recognition" title=" sign language recognition"> sign language recognition</a>, <a href="https://publications.waset.org/abstracts/search?q=object%20tracking" title=" object tracking"> object tracking</a>, <a href="https://publications.waset.org/abstracts/search?q=hand%20segmentation" title=" hand segmentation"> hand segmentation</a> </p> <a href="https://publications.waset.org/abstracts/26490/vision-based-hand-segmentation-techniques-for-human-computer-interaction" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/26490.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">412</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">7034</span> Information Retrieval from Internet Using Hand Gestures</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Aniket%20S.%20Joshi">Aniket S. 
7034. Information Retrieval from Internet Using Hand Gestures
Authors: Aniket S. Joshi, Aditya R. Mane, Arjun Tukaram
Abstract: In the 21st century, the era of the e-world, people are continuously updated with daily information such as weather conditions, news, stock market updates, new projects, cricket updates, and other sports results. When busy, they want this information with minimal use of the keyboard and minimal time. Today, retrieving such information requires repeating the same mouse and keyboard actions, which costs time and causes inconvenience. In India, owing to rural backgrounds, many people are also not very familiar with computers and the internet, and small clinics, small offices, hotels, and airports would benefit from a system that retrieves daily information with minimal keyboard and mouse actions. We plan to design an application-based project that can easily retrieve information with minimal keyboard and mouse actions, making the task more convenient and easier. This is possible with an image processing application that takes real-time hand gestures, matches them against the system, and retrieves the corresponding information. Once a function is selected by a hand gesture, the system reports the action information to the user. In this project, real-time hand gesture movements select the required option, which is stored on the screen in the form of RSS feeds; the gesture selects the required option and the information pops up. Real-time hand gestures make the application handier and easier to use.
Keywords: hand detection, hand tracking, hand gesture recognition, HSV color model, blob detection
Procedia: https://publications.waset.org/abstracts/29069/information-retrieval-from-internet-using-hand-gestures (PDF downloads: 289)
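Matching the keywords above (HSV color model, blob detection), a minimal sketch of the pointer mechanic might look like this; the HSV band is an assumption, and mapping the centroid to an on-screen RSS item is left out:

```python
import cv2
import numpy as np

def hand_centroid(frame_bgr):
    """Centroid of the largest skin-colored blob, or None if no blob found."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, np.array((0, 40, 60), np.uint8),
                       np.array((25, 255, 255), np.uint8))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    blob = max(contours, key=cv2.contourArea)  # assume hand = largest blob
    m = cv2.moments(blob)
    if m["m00"] == 0:
        return None
    return int(m["m10"] / m["m00"]), int(m["m01"] / m["m00"])
```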
7033. Multiperson Drone Control with Seamless Pilot Switching Using Onboard Camera and OpenPose Real-Time Keypoint Detection
Authors: Evan Lowhorn, Rocio Alba-Flores
Abstract: Traditional classification convolutional neural networks (CNNs) attempt to classify an image in its entirety. This becomes problematic when performing classification with a drone's camera in real time, due to unpredictable backgrounds. Object detectors with bounding boxes can isolate individuals and other items, but the original backgrounds remain within those boxes, and such basic detectors only determine an object's type, such as "person" or "dog." A recent advancement in computer vision, particularly for human imaging, is keypoint detection, which goes beyond bounding boxes to fully isolate humans and plot points, or regions of interest (ROIs), on their bodies within an image: shoulders, elbows, knees, heads, and so on. These points can then be related to each other and used in deep learning methods such as pose estimation. For drone control based on human motions, poses, or signals seen by the onboard camera, it is important to have a simple method of pilot identification among multiple individuals while still giving the pilot fine control options. To achieve this, the OpenPose keypoint detection network was used with both body and hand keypoint detection enabled; OpenPose can combine multiple keypoint detection methods in real time within a single network. Body keypoint detection allows simple poses to act as the pilot identifier, and hand keypoint detection, with ROIs for each finger, then offers a greater variety of signal options for the identified pilot. In this work, an individual must raise their non-control arm to be identified as the operator and send commands with the hand of their other arm. The drone ignores all other individuals in the onboard camera feed until the current operator lowers their non-control arm. When another individual wishes to operate the drone, they simply raise their arm once the current operator relinquishes control, and they can then begin controlling the drone with their other hand. This is all performed mid-flight, with no landing or script editing required. When run on a desktop with a discrete NVIDIA GPU, the drone's 2.4 GHz Wi-Fi connection, combined with restricting OpenPose to body and hand models only, allows this control method to perform as intended while maintaining the responsiveness required for practical use.
Keywords: computer vision, drone control, keypoint detection, openpose
Procedia: https://publications.waset.org/abstracts/139752/multiperson-drone-control-with-seamless-pilot-switching-using-onboard-camera-and-openpose-real-time-keypoint-detection (PDF downloads: 184)
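A sketch of the pilot-selection rule described above, assuming OpenPose has already produced BODY_25 keypoints for each person in the frame (indices 2/5 = right/left shoulder, 4/7 = right/left wrist; each keypoint is x, y, confidence). The confidence threshold is an assumption:

```python
import numpy as np

def find_pilot(pose_keypoints, conf_thresh=0.3):
    """Return the index of the person holding a wrist above a shoulder."""
    for i, kp in enumerate(pose_keypoints):        # kp shape: (25, 3)
        for wrist, shoulder in ((4, 2), (7, 5)):   # right pair, left pair
            if kp[wrist, 2] > conf_thresh and kp[shoulder, 2] > conf_thresh:
                if kp[wrist, 1] < kp[shoulder, 1]:  # image y grows downward
                    return i                        # raised arm -> operator
    return None                                     # nobody claims control
```

The hand-keypoint ROIs of the selected person would then be decoded into flight commands, while everyone else's keypoints are ignored.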
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=computer%20vision" title="computer vision">computer vision</a>, <a href="https://publications.waset.org/abstracts/search?q=drone%20control" title=" drone control"> drone control</a>, <a href="https://publications.waset.org/abstracts/search?q=keypoint%20detection" title=" keypoint detection"> keypoint detection</a>, <a href="https://publications.waset.org/abstracts/search?q=openpose" title=" openpose"> openpose</a> </p> <a href="https://publications.waset.org/abstracts/139752/multiperson-drone-control-with-seamless-pilot-switching-using-onboard-camera-and-openpose-real-time-keypoint-detection" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/139752.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">184</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">7032</span> Hull Detection from Handwritten Digit Image</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Sriraman%20Kothuri">Sriraman Kothuri</a>, <a href="https://publications.waset.org/abstracts/search?q=Komal%20Teja%20Mattupalli"> Komal Teja Mattupalli</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In this paper we proposed a novel algorithm for recognizing hulls in a hand written digits. This is an extension to the work on “Digit Recognition Using Freeman Chain code”. In order to find out the hulls in a user given digit it is necessary to follow three steps. Those are pre-processing, Boundary Extraction and at last apply the Hull Detection system in a way to attain the better results. The detection of Hull Regions is mainly intended to increase the machine learning capability in detection of characters or digits. This can also extend this in order to get the hull regions and their intensities in Black Holes in Space Exploration. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=chain%20code" title="chain code">chain code</a>, <a href="https://publications.waset.org/abstracts/search?q=machine%20learning" title=" machine learning"> machine learning</a>, <a href="https://publications.waset.org/abstracts/search?q=hull%20regions" title=" hull regions"> hull regions</a>, <a href="https://publications.waset.org/abstracts/search?q=hull%20recognition%20system" title=" hull recognition system"> hull recognition system</a>, <a href="https://publications.waset.org/abstracts/search?q=SASK%20algorithm" title=" SASK algorithm"> SASK algorithm</a> </p> <a href="https://publications.waset.org/abstracts/15864/hull-detection-from-handwritten-digit-image" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/15864.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">400</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">7031</span> Towards Integrating Statistical Color Features for Human Skin Detection</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Mohd%20Zamri%20Osman">Mohd Zamri Osman</a>, <a href="https://publications.waset.org/abstracts/search?q=Mohd%20Aizaini%20Maarof"> Mohd Aizaini Maarof</a>, <a href="https://publications.waset.org/abstracts/search?q=Mohd%20Foad%20Rohani"> Mohd Foad Rohani</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Human skin detection recognized as the primary step in most of the applications such as face detection, illicit image filtering, hand recognition and video surveillance. The performance of any skin detection applications greatly relies on the two components: feature extraction and classification method. Skin color is the most vital information used for skin detection purpose. However, color feature alone sometimes could not handle images with having same color distribution with skin color. A color feature of pixel-based does not eliminate the skin-like color due to the intensity of skin and skin-like color fall under the same distribution. Hence, the statistical color analysis will be exploited such mean and standard deviation as an additional feature to increase the reliability of skin detector. In this paper, we studied the effectiveness of statistical color feature for human skin detection. Furthermore, the paper analyzed the integrated color and texture using eight classifiers with three color spaces of RGB, YCbCr, and HSV. The experimental results show that the integrating statistical feature using Random Forest classifier achieved a significant performance with an F1-score 0.969. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=color%20space" title="color space">color space</a>, <a href="https://publications.waset.org/abstracts/search?q=neural%20network" title=" neural network"> neural network</a>, <a href="https://publications.waset.org/abstracts/search?q=random%20forest" title=" random forest"> random forest</a>, <a href="https://publications.waset.org/abstracts/search?q=skin%20detection" title=" skin detection"> skin detection</a>, <a href="https://publications.waset.org/abstracts/search?q=statistical%20feature" title=" statistical feature"> statistical feature</a> </p> <a href="https://publications.waset.org/abstracts/43485/towards-integrating-statistical-color-features-for-human-skin-detection" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/43485.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">462</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">7030</span> Patient-Friendly Hand Gesture Recognition Using AI</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=K.%20Prabhu">K. Prabhu</a>, <a href="https://publications.waset.org/abstracts/search?q=K.%20Dinesh"> K. Dinesh</a>, <a href="https://publications.waset.org/abstracts/search?q=M.%20Ranjani"> M. Ranjani</a>, <a href="https://publications.waset.org/abstracts/search?q=M.%20Suhitha"> M. Suhitha</a> </p> <p class="card-text"><strong>Abstract:</strong></p> During the tough times of covid, those people who were hospitalized found it difficult to always convey what they wanted to or needed to the attendee. Sometimes the attendees might also not be there. In that case, the patients can use simple hand gestures to control electrical appliances (like its set it for a zero watts bulb)and three other gestures for voice note intimation. In this AI-based hand recognition project, NodeMCU is used for the control action of the relay, and it is connected to the firebase for storing the value in the cloud and is interfaced with the python code via raspberry pi. For three hand gestures, a voice clip is added for intimation to the attendee. This is done with the help of Google’s text to speech and the inbuilt audio file option in the raspberry pi 4. All the five gestures will be detected when shown with their hands via the webcam, which is placed for gesture detection. The personal computer is used for displaying the gestures and for running the code in the raspberry pi imager. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=nodeMCU" title="nodeMCU">nodeMCU</a>, <a href="https://publications.waset.org/abstracts/search?q=AI%20technology" title=" AI technology"> AI technology</a>, <a href="https://publications.waset.org/abstracts/search?q=gesture" title=" gesture"> gesture</a>, <a href="https://publications.waset.org/abstracts/search?q=patient" title=" patient"> patient</a> </p> <a href="https://publications.waset.org/abstracts/144943/patient-friendly-hand-gesture-recognition-using-ai" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/144943.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">167</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">7029</span> Efficient Signal Detection Using QRD-M Based on Channel Condition in MIMO-OFDM System</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Jae-Jeong%20Kim">Jae-Jeong Kim</a>, <a href="https://publications.waset.org/abstracts/search?q=Ki-Ro%20Kim"> Ki-Ro Kim</a>, <a href="https://publications.waset.org/abstracts/search?q=Hyoung-Kyu%20Song"> Hyoung-Kyu Song</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In this paper, we propose an efficient signal detector that switches M parameter of QRD-M detection scheme is proposed for MIMO-OFDM system. The proposed detection scheme calculates the threshold by 1-norm condition number and then switches M parameter of QRD-M detection scheme according to channel information. If channel condition is bad, the parameter M is set to high value to increase the accuracy of detection. If channel condition is good, the parameter M is set to low value to reduce complexity of detection. Therefore, the proposed detection scheme has better trade off between BER performance and complexity than the conventional detection scheme. The simulation result shows that the complexity of proposed detection scheme is lower than QRD-M detection scheme with similar BER performance. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=MIMO-OFDM" title="MIMO-OFDM">MIMO-OFDM</a>, <a href="https://publications.waset.org/abstracts/search?q=QRD-M" title=" QRD-M"> QRD-M</a>, <a href="https://publications.waset.org/abstracts/search?q=channel%20condition" title=" channel condition"> channel condition</a>, <a href="https://publications.waset.org/abstracts/search?q=BER" title=" BER"> BER</a> </p> <a href="https://publications.waset.org/abstracts/3518/efficient-signal-detection-using-qrd-m-based-on-channel-condition-in-mimo-ofdm-system" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/3518.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">370</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">7028</span> Reduced Complexity of ML Detection Combined with DFE</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Jae-Hyun%20Ro">Jae-Hyun Ro</a>, <a href="https://publications.waset.org/abstracts/search?q=Yong-Jun%20Kim"> Yong-Jun Kim</a>, <a href="https://publications.waset.org/abstracts/search?q=Chang-Bin%20Ha"> Chang-Bin Ha</a>, <a href="https://publications.waset.org/abstracts/search?q=Hyoung-Kyu%20Song"> Hyoung-Kyu Song </a> </p> <p class="card-text"><strong>Abstract:</strong></p> In multiple input multiple output-orthogonal frequency division multiplexing (MIMO-OFDM) systems, many detection schemes have been developed to improve the error performance and to reduce the complexity. Maximum likelihood (ML) detection has optimal error performance but it has very high complexity. Thus, this paper proposes reduced complexity of ML detection combined with decision feedback equalizer (DFE). The error performance of the proposed detection scheme is higher than the conventional DFE. But the complexity of the proposed scheme is lower than the conventional ML detection. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=detection" title="detection">detection</a>, <a href="https://publications.waset.org/abstracts/search?q=DFE" title=" DFE"> DFE</a>, <a href="https://publications.waset.org/abstracts/search?q=MIMO-OFDM" title=" MIMO-OFDM"> MIMO-OFDM</a>, <a href="https://publications.waset.org/abstracts/search?q=ML" title=" ML"> ML</a> </p> <a href="https://publications.waset.org/abstracts/42215/reduced-complexity-of-ml-detection-combined-with-dfe" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/42215.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">610</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">7027</span> Hand Gesture Recognition for Sign Language: A New Higher Order Fuzzy HMM Approach</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Saad%20M.%20Darwish">Saad M. Darwish</a>, <a href="https://publications.waset.org/abstracts/search?q=Magda%20M.%20Madbouly"> Magda M. 
7027. Hand Gesture Recognition for Sign Language: A New Higher Order Fuzzy HMM Approach
Authors: Saad M. Darwish, Magda M. Madbouly, Murad B. Khorsheed
Abstract: Sign languages (SL) are the most accomplished forms of gestural communication, so their automatic analysis is a real challenge, interestingly tied to their lexical and syntactic organization levels. Hidden Markov models (HMMs) have been used prominently and successfully in speech recognition and, more recently, in handwriting recognition; consequently, they seem ideal for visual recognition of complex, structured hand gestures such as those found in sign language. In this paper, several results on static hand gesture recognition using an algorithm based on a type-2 fuzzy HMM (T2FHMM) are presented. The features used as observables in both the training and recognition phases are based on singular value decomposition (SVD). SVD, an extension of eigendecomposition to non-square matrices, reduces multi-attribute hand gesture data to feature vectors and optimally exposes the geometric structure of a matrix. In our approach, the basic HMM arithmetic operators are replaced by adequate type-2 fuzzy operators, which relaxes the additive constraint of probability measures; T2FHMMs are thus able to handle both the random and the fuzzy uncertainties that exist universally in sequential data. Experimental results show that T2FHMMs can effectively handle noise and dialect uncertainties in hand signals, besides offering better classification performance than classical HMMs. The recognition rate of the proposed system is 100% for uniform hand images and 86.21% for cluttered hand images.
Keywords: hand gesture recognition, hand detection, type-2 fuzzy logic, hidden Markov Model
Procedia: https://publications.waset.org/abstracts/18565/hand-gesture-recognition-for-sign-language-a-new-higher-order-fuzzy-hmm-approach (PDF downloads: 462)
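The SVD feature step reduces to taking the singular values of the (grayscale) hand image as a compact, geometry-aware observation vector for the HMM. This is a minimal sketch; the truncation length k and the normalization are assumptions:

```python
import numpy as np

def svd_features(gray_img, k=20):
    """Top-k singular values, normalized, as the gesture feature vector."""
    s = np.linalg.svd(gray_img.astype(float), compute_uv=False)
    return s[:k] / (s[0] + 1e-12)   # dividing by the largest value aids scale invariance
```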
7026. Users' Preferences for Map Navigation Gestures
Authors: Y. Y. Pang, N. A. Ismail
Abstract: Maps are powerful and convenient tools for navigating to different places, but the use of indirect input devices often makes them cumbersome. This study proposes a new map navigation dialogue based on hand gestures. A set of dialogues was developed from the users' perspective to give users complete freedom in panning, zooming, rotating, and finding directions. A participatory design experiment was conducted in which one-hand and two-hand gesture dialogues were analyzed to develop a set of usable dialogues. The major finding was that users prefer one-hand gestures over two-hand gestures for map navigation.
Keywords: hand gesture, map navigation, participatory design, intuitive interaction
Procedia: https://publications.waset.org/abstracts/19455/users-preferences-for-map-navigation-gestures (PDF downloads: 280)

7025. Hand Hygiene Habits of Ghanaian Youths in Accra
Authors: Cecilia Amponsem-Boateng, Timothy B. Oppong, Haiyan Yang, Guangcai Duan
Abstract: The human palm has been identified as one of the richest habitats for human microbes, making hand hygiene essential to the primary prevention of infection. Since the hand is in constant contact with fomites, which have been shown to be frequently contaminated, building hand hygiene habits is essential for the prevention of infection. This research was conducted to assess the hand hygiene habits of Ghanaian youths in Accra, using a survey as a quantitative research method. Of the 254 participants who fully answered the questionnaire, 22% had the habit of washing their hands after outings, while only 51.6% washed their hands after using the bathroom. About 60% of the participants said they sometimes ate with their hands, and 28.9% ate with their hands very often, a situation that put them at risk of infection from their hands given the poor handwashing habits of some participants, prompting the need for continuous education on hand hygiene.
Keywords: hand hygiene, hand hygiene habit, hand washing, hand sanitizer use
Procedia: https://publications.waset.org/abstracts/159334/hand-hygiene-habits-of-ghanaian-youths-in-accra (PDF downloads: 108)
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=hand%20hygiene" title="hand hygiene">hand hygiene</a>, <a href="https://publications.waset.org/abstracts/search?q=hand%20hygiene%20habit" title=" hand hygiene habit"> hand hygiene habit</a>, <a href="https://publications.waset.org/abstracts/search?q=hand%20washing" title=" hand washing"> hand washing</a>, <a href="https://publications.waset.org/abstracts/search?q=hand%20sanitizer%20use" title=" hand sanitizer use"> hand sanitizer use</a> </p> <a href="https://publications.waset.org/abstracts/159334/hand-hygiene-habits-of-ghanaian-youths-in-accra" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/159334.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">108</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">7024</span> Time Parameter Based for the Detection of Catastrophic Faults in Analog Circuits </h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Arabi%20Abderrazak">Arabi Abderrazak</a>, <a href="https://publications.waset.org/abstracts/search?q=Bourouba%20Nacerdine"> Bourouba Nacerdine</a>, <a href="https://publications.waset.org/abstracts/search?q=Ayad%20Mouloud"> Ayad Mouloud</a>, <a href="https://publications.waset.org/abstracts/search?q=Belaout%20Abdeslam"> Belaout Abdeslam</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In this paper, a new test technique of analog circuits using time mode simulation is proposed for the single catastrophic faults detection in analog circuits. This test process is performed to overcome the problem of catastrophic faults being escaped in a DC mode test applied to the inverter amplifier in previous research works. The circuit under test is a second-order low pass filter constructed around this type of amplifier but performing a function that differs from that of the previous test. The test approach performed in this work is based on two key- elements where the first one concerns the unique square pulse signal selected as an input vector test signal to stimulate the fault effect at the circuit output response. The second element is the filter response conversion to a square pulses sequence obtained from an analog comparator. This signal conversion is achieved through a fixed reference threshold voltage of this comparison circuit. The measurement of the three first response signal pulses durations is regarded as fault effect detection parameter on one hand, and as a fault signature helping to hence fully establish an analog circuit fault diagnosis on another hand. The results obtained so far are very promising since the approach has lifted up the fault coverage ratio in both modes to over 90% and has revealed the harmful side of faults that has been masked in a DC mode test. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=analog%20circuits" title="analog circuits">analog circuits</a>, <a href="https://publications.waset.org/abstracts/search?q=analog%20faults%20diagnosis" title=" analog faults diagnosis"> analog faults diagnosis</a>, <a href="https://publications.waset.org/abstracts/search?q=catastrophic%20faults" title=" catastrophic faults"> catastrophic faults</a>, <a href="https://publications.waset.org/abstracts/search?q=fault%20detection" title=" fault detection"> fault detection</a> </p> <a href="https://publications.waset.org/abstracts/38309/time-parameter-based-for-the-detection-of-catastrophic-faults-in-analog-circuits" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/38309.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">442</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">7023</span> Cigarette Smoke Detection Based on YOLOV3</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Wei%20Li">Wei Li</a>, <a href="https://publications.waset.org/abstracts/search?q=Tuo%20Yang"> Tuo Yang</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In order to satisfy the real-time and accurate requirements of cigarette smoke detection in complex scenes, a cigarette smoke detection technology based on the combination of deep learning and color features was proposed. Firstly, based on the color features of cigarette smoke, the suspicious cigarette smoke area in the image is extracted. Secondly, combined with the efficiency of cigarette smoke detection and the problem of network overfitting, a network model for cigarette smoke detection was designed according to YOLOV3 algorithm to reduce the false detection rate. The experimental results show that the method is feasible and effective, and the accuracy of cigarette smoke detection is up to 99.13%, which satisfies the requirements of real-time cigarette smoke detection in complex scenes. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=deep%20learning" title="deep learning">deep learning</a>, <a href="https://publications.waset.org/abstracts/search?q=computer%20vision" title=" computer vision"> computer vision</a>, <a href="https://publications.waset.org/abstracts/search?q=cigarette%20smoke%20detection" title=" cigarette smoke detection"> cigarette smoke detection</a>, <a href="https://publications.waset.org/abstracts/search?q=YOLOV3" title=" YOLOV3"> YOLOV3</a>, <a href="https://publications.waset.org/abstracts/search?q=color%20feature%20extraction" title=" color feature extraction"> color feature extraction</a> </p> <a href="https://publications.waset.org/abstracts/159151/cigarette-smoke-detection-based-on-yolov3" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/159151.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">87</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">7022</span> An Architecture for New Generation of Distributed Intrusion Detection System Based on Preventive Detection</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=H.%20Benmoussa">H. Benmoussa</a>, <a href="https://publications.waset.org/abstracts/search?q=A.%20A.%20El%20Kalam"> A. A. El Kalam</a>, <a href="https://publications.waset.org/abstracts/search?q=A.%20Ait%20Ouahman"> A. Ait Ouahman</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The design and implementation of intrusion detection systems (IDS) remain an important area of research in the security of information systems. Despite the importance and reputation of the current intrusion detection systems, their efficiency and effectiveness remain limited as they should include active defense approach to allow anticipating and predicting intrusions before their occurrence. Consequently, they must be readapted. For this purpose we suggest a new generation of distributed intrusion detection system based on preventive detection approach and using intelligent and mobile agents. Our architecture benefits from mobile agent features and addresses some of the issues with centralized and hierarchical models. Also, it presents advantages in terms of increasing scalability and flexibility. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=Intrusion%20Detection%20System%20%28IDS%29" title="Intrusion Detection System (IDS)">Intrusion Detection System (IDS)</a>, <a href="https://publications.waset.org/abstracts/search?q=preventive%20detection" title=" preventive detection"> preventive detection</a>, <a href="https://publications.waset.org/abstracts/search?q=mobile%20agents" title=" mobile agents"> mobile agents</a>, <a href="https://publications.waset.org/abstracts/search?q=distributed%20architecture" title=" distributed architecture"> distributed architecture</a> </p> <a href="https://publications.waset.org/abstracts/18239/an-architecture-for-new-generation-of-distributed-intrusion-detection-system-based-on-preventive-detection" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/18239.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">583</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">7021</span> Video Based Ambient Smoke Detection By Detecting Directional Contrast Decrease</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Omair%20Ghori">Omair Ghori</a>, <a href="https://publications.waset.org/abstracts/search?q=Anton%20Stadler"> Anton Stadler</a>, <a href="https://publications.waset.org/abstracts/search?q=Stefan%20Wilk"> Stefan Wilk</a>, <a href="https://publications.waset.org/abstracts/search?q=Wolfgang%20Effelsberg"> Wolfgang Effelsberg</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Fire-related incidents account for extensive loss of life and material damage. Quick and reliable detection of occurring fires has high real world implications. Whereas a major research focus lies on the detection of outdoor fires, indoor camera-based fire detection is still an open issue. Cameras in combination with computer vision helps to detect flames and smoke more quickly than conventional fire detectors. In this work, we present a computer vision-based smoke detection algorithm based on contrast changes and a multi-step classification. This work accelerates computer vision-based fire detection considerably in comparison with classical indoor-fire detection. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=contrast%20analysis" title="contrast analysis">contrast analysis</a>, <a href="https://publications.waset.org/abstracts/search?q=early%20fire%20detection" title=" early fire detection"> early fire detection</a>, <a href="https://publications.waset.org/abstracts/search?q=video%20smoke%20detection" title=" video smoke detection"> video smoke detection</a>, <a href="https://publications.waset.org/abstracts/search?q=video%20surveillance" title=" video surveillance"> video surveillance</a> </p> <a href="https://publications.waset.org/abstracts/52006/video-based-ambient-smoke-detection-by-detecting-directional-contrast-decrease" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/52006.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">447</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">7020</span> Fused Structure and Texture (FST) Features for Improved Pedestrian Detection </h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Hussin%20K.%20Ragb">Hussin K. Ragb</a>, <a href="https://publications.waset.org/abstracts/search?q=Vijayan%20K.%20Asari"> Vijayan K. Asari </a> </p> <p class="card-text"><strong>Abstract:</strong></p> In this paper, we present a pedestrian detection descriptor called Fused Structure and Texture (FST) features based on the combination of the local phase information with the texture features. Since the phase of the signal conveys more structural information than the magnitude, the phase congruency concept is used to capture the structural features. On the other hand, the Center-Symmetric Local Binary Pattern (CSLBP) approach is used to capture the texture information of the image. The dimension less quantity of the phase congruency and the robustness of the CSLBP operator on the flat images, as well as the blur and illumination changes, lead the proposed descriptor to be more robust and less sensitive to the light variations. The proposed descriptor can be formed by extracting the phase congruency and the CSLBP values of each pixel of the image with respect to its neighborhood. The histogram of the oriented phase and the histogram of the CSLBP values for the local regions in the image are computed and concatenated to construct the FST descriptor. Several experiments were conducted on INRIA and the low resolution DaimlerChrysler datasets to evaluate the detection performance of the pedestrian detection system that is based on the FST descriptor. A linear Support Vector Machine (SVM) is used to train the pedestrian classifier. These experiments showed that the proposed FST descriptor has better detection performance over a set of state of the art feature extraction methodologies. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=pedestrian%20detection" title="pedestrian detection">pedestrian detection</a>, <a href="https://publications.waset.org/abstracts/search?q=phase%20congruency" title=" phase congruency"> phase congruency</a>, <a href="https://publications.waset.org/abstracts/search?q=local%20phase" title=" local phase"> local phase</a>, <a href="https://publications.waset.org/abstracts/search?q=LBP%20features" title=" LBP features"> LBP features</a>, <a href="https://publications.waset.org/abstracts/search?q=CSLBP%20features" title=" CSLBP features"> CSLBP features</a>, <a href="https://publications.waset.org/abstracts/search?q=FST%20descriptor" title=" FST descriptor"> FST descriptor</a> </p> <a href="https://publications.waset.org/abstracts/36643/fused-structure-and-texture-fst-features-for-improved-pedestrian-detection" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/36643.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">488</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">7019</span> Deep Learning and Accurate Performance Measure Processes for Cyber Attack Detection among Web Logs</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Noureddine%20Mohtaram">Noureddine Mohtaram</a>, <a href="https://publications.waset.org/abstracts/search?q=Jeremy%20Patrix"> Jeremy Patrix</a>, <a href="https://publications.waset.org/abstracts/search?q=Jerome%20Verny"> Jerome Verny</a> </p> <p class="card-text"><strong>Abstract:</strong></p> As an enormous number of online services have been developed into web applications, security problems based on web applications are becoming more serious now. Most intrusion detection systems rely on each request to find the cyber-attack rather than on user behavior, and these systems can only protect web applications against known vulnerabilities rather than certain zero-day attacks. In order to detect new attacks, we analyze the HTTP protocols of web servers to divide them into two categories: normal attacks and malicious attacks. On the other hand, the quality of the results obtained by deep learning (DL) in various areas of big data has given an important motivation to apply it to cybersecurity. Deep learning for attack detection in cybersecurity has the potential to be a robust tool from small transformations to new attacks due to its capability to extract more high-level features. This research aims to take a new approach, deep learning to cybersecurity, to classify these two categories to eliminate attacks and protect web servers of the defense sector which encounters different web traffic compared to other sectors (such as e-commerce, web app, etc.). The result shows that by using a machine learning method, a higher accuracy rate, and a lower false alarm detection rate can be achieved. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=anomaly%20detection" title="anomaly detection">anomaly detection</a>, <a href="https://publications.waset.org/abstracts/search?q=HTTP%20protocol" title=" HTTP protocol"> HTTP protocol</a>, <a href="https://publications.waset.org/abstracts/search?q=logs" title=" logs"> logs</a>, <a href="https://publications.waset.org/abstracts/search?q=cyber%20attack" title=" cyber attack"> cyber attack</a>, <a href="https://publications.waset.org/abstracts/search?q=deep%20learning" title=" deep learning"> deep learning</a> </p> <a href="https://publications.waset.org/abstracts/136582/deep-learning-and-accurate-performance-measure-processes-for-cyber-attack-detection-among-web-logs" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/136582.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">211</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">7018</span> Intrusion Detection Techniques in NaaS in the Cloud: A Review </h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Rashid%20Mahmood">Rashid Mahmood</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The network as a service (NaaS) usage has been well-known from the last few years in the many applications, like mission critical applications. In the NaaS, prevention method is not adequate as the security concerned, so the detection method should be added to the security issues in NaaS. The authentication and encryption are considered the first solution of the NaaS problem whereas now these are not sufficient as NaaS use is increasing. In this paper, we are going to present the concept of intrusion detection and then survey some of major intrusion detection techniques in NaaS and aim to compare in some important fields. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=IDS" title="IDS">IDS</a>, <a href="https://publications.waset.org/abstracts/search?q=cloud" title=" cloud"> cloud</a>, <a href="https://publications.waset.org/abstracts/search?q=naas" title=" naas"> naas</a>, <a href="https://publications.waset.org/abstracts/search?q=detection" title=" detection"> detection</a> </p> <a href="https://publications.waset.org/abstracts/36475/intrusion-detection-techniques-in-naas-in-the-cloud-a-review" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/36475.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">320</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">7017</span> Design and Development of Automatic Onion Harvester</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=P.%20Revathi">P. Revathi</a>, <a href="https://publications.waset.org/abstracts/search?q=T.%20Mrunalini"> T. Mrunalini</a>, <a href="https://publications.waset.org/abstracts/search?q=K.%20Padma%20Priya"> K. 
Padma Priya</a>, <a href="https://publications.waset.org/abstracts/search?q=P.%20Ramya"> P. Ramya</a>, <a href="https://publications.waset.org/abstracts/search?q=R.%20Saranya"> R. Saranya</a> </p> <p class="card-text"><strong>Abstract:</strong></p> During the difficult times of the COVID pandemic, hospitalized patients often found it hard to convey what they wanted or needed to an attendant, and sometimes no attendant was present. In such cases, patients can use simple hand gestures to control electrical appliances (for example, switching a zero-watt bulb) and three other gestures for voice-note intimation. In this AI-based hand recognition project, a NodeMCU performs the control action of the relay; it is connected to Firebase for storing the value in the cloud and is interfaced with the Python code via a Raspberry Pi. For three of the hand gestures, a voice clip is added as an intimation to the attendant, implemented with Google's text-to-speech and the built-in audio playback of the Raspberry Pi 4. All five gestures are detected when shown by hand to a webcam placed for gesture detection. A personal computer is used for displaying the gestures and for running the code on the Raspberry Pi. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=onion%20harvesting" title="onion harvesting">onion harvesting</a>, <a href="https://publications.waset.org/abstracts/search?q=automatic%20pluging" title=" automatic pluging"> automatic pluging</a>, <a href="https://publications.waset.org/abstracts/search?q=camera" title=" camera"> camera</a>, <a href="https://publications.waset.org/abstracts/search?q=raspberry%20pi" title=" raspberry pi"> raspberry pi</a> </p> <a href="https://publications.waset.org/abstracts/144945/design-and-development-of-automatic-onion-harvester" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/144945.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">198</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">7016</span> Dual Mode “Turn On-Off-On” Photoluminescence Detection of EDTA and Lead Using Moringa Oleifera Gum-Derived Carbon Dots</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Anisha%20Mandal">Anisha Mandal</a>, <a href="https://publications.waset.org/abstracts/search?q=Swambabu%20Varanasi"> Swambabu Varanasi</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Lead is one of the most prevalent toxic heavy-metal ions, and its pollution poses a significant threat to the environment and human health. On the other hand, ethylenediaminetetraacetic acid (EDTA) is a widely used metal-chelating agent that, due to its poor biodegradability, is a persistent environmental pollutant. For the first time, a green, simple, and cost-effective approach is used to hydrothermally synthesise photoluminescent carbon dots from Moringa Oleifera Gum in a single step. Then, using these Moringa Oleifera Gum-derived carbon dots (MOG-CDs), a photoluminescent "ON-OFF-ON" mechanism for dual-mode detection of trace Pb2+ and EDTA is proposed.
MOG-CDs detect Pb2+ selectively and sensitively through a photoluminescence quenching mechanism, with a limit of detection (LOD) of 0.000472 ppm (1.24 nM). The quenched photoluminescence can be restored by adding EDTA to the MOG-CD+Pb2+ system; this strategy is used to quantify EDTA with an LOD of 0.0026 ppm (8.9 nM). The quantification of Pb2+ and EDTA in real samples demonstrates the applicability and reliability of the proposed photoluminescent probe. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=carbon%20dots" title="carbon dots">carbon dots</a>, <a href="https://publications.waset.org/abstracts/search?q=photoluminescence" title=" photoluminescence"> photoluminescence</a>, <a href="https://publications.waset.org/abstracts/search?q=sensor" title=" sensor"> sensor</a>, <a href="https://publications.waset.org/abstracts/search?q=moringa%20oleifera%20gum" title=" moringa oleifera gum"> moringa oleifera gum</a> </p> <a href="https://publications.waset.org/abstracts/165332/dual-mode-turn-on-off-on-photoluminescence-detection-of-edta-and-lead-using-moringa-oleifera-gum-derived-carbon-dots" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/165332.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">114</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">7015</span> Securing Web Servers by the Intrusion Detection System (IDS)</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Yousef%20Farhaoui">Yousef Farhaoui</a> </p> <p class="card-text"><strong>Abstract:</strong></p> An IDS is a tool used to improve the level of security. In this paper, we present different architectures of IDS. We also discuss measures that define the effectiveness of an IDS, as well as recent work on the standardization and homogenization of IDS. Finally, we propose a new IDS model, called BiIDS (an IDS based on the two principles of detection), for securing web servers and applications.
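<p class="card-text">The abstract does not detail how BiIDS combines its two principles, but the two classic principles of intrusion detection are misuse (signature) matching and anomaly detection. The sketch below is therefore a hypothetical illustration of such a dual-principle detector for web requests; the signatures, thresholds, and function names are invented for the example.</p>
<pre><code>import re

# illustrative signatures for misuse (signature-based) detection
SIGNATURES = [re.compile(p, re.I) for p in (
    r"union\s+select",      # SQL injection
    r"\.\./\.\.",           # path traversal
    r"%3cscript",           # URL-encoded script tag (XSS)
)]

def misuse_alert(request):
    # principle 1: match the request against known attack patterns
    return any(sig.search(request) for sig in SIGNATURES)

def anomaly_alert(request, max_len=512, max_params=20):
    # principle 2: flag requests whose simple statistics deviate
    # from a profile of normal traffic (thresholds are illustrative)
    return len(request) > max_len or request.count("=") > max_params

def biids_alert(request):
    # raise an alert if either detection principle fires
    return misuse_alert(request) or anomaly_alert(request)

print(biids_alert("GET /page?id=1 UNION SELECT * FROM users"))  # True
</code></pre>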
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=intrusion%20detection" title="intrusion detection">intrusion detection</a>, <a href="https://publications.waset.org/abstracts/search?q=architectures" title=" architectures"> architectures</a>, <a href="https://publications.waset.org/abstracts/search?q=characteristic" title=" characteristic"> characteristic</a>, <a href="https://publications.waset.org/abstracts/search?q=tools" title=" tools"> tools</a>, <a href="https://publications.waset.org/abstracts/search?q=security" title=" security"> security</a>, <a href="https://publications.waset.org/abstracts/search?q=web%20server" title=" web server"> web server</a> </p> <a href="https://publications.waset.org/abstracts/13346/securing-web-servers-by-the-intrusion-detection-system-ids" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/13346.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">418</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">7014</span> Land Use Change Detection Using Satellite Images for Najran City, Kingdom of Saudi Arabia (KSA)</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Ismail%20Elkhrachy">Ismail Elkhrachy</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Determination of land use changing is an important component of regional planning for applications ranging from urban fringe change detection to monitoring change detection of land use. This data are very useful for natural resources management.On the other hand, the technologies and methods of change detection also have evolved dramatically during past 20 years. So it has been well recognized that the change detection had become the best methods for researching dynamic change of land use by multi-temporal remotely-sensed data. The objective of this paper is to assess, evaluate and monitor land use change surrounding the area of Najran city, Kingdom of Saudi Arabia (KSA) using Landsat images (June 23, 2009) and ETM+ image(June. 21, 2014). The post-classification change detection technique was applied. At last,two-time subset images of Najran city are compared on a pixel-by-pixel basis using the post-classification comparison method and the from-to change matrix is produced, the land use change information obtained.Three classes were obtained, urban, bare land and agricultural land from unsupervised classification method by using Erdas Imagine and ArcGIS software. Accuracy assessment of classification has been performed before calculating change detection for study area. The obtained accuracy is between 61% to 87% percent for all the classes. Change detection analysis shows that rapid growth in urban area has been increased by 73.2%, the agricultural area has been decreased by 10.5 % and barren area reduced by 7% between 2009 and 2014. The quantitative study indicated that the area of urban class has unchanged by 58.2 km〗^2, gained 70.3 〖km〗^2 and lost 16 〖km〗^2. For bare land class 586.4〖km〗^2 has unchanged, 53.2〖km〗^2 has gained and 101.5〖km〗^2 has lost. While agriculture area class, 20.2〖km〗^2 has unchanged, 31.2〖km〗^2 has gained and 37.2〖km〗^2 has lost. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=land%20use" title="land use">land use</a>, <a href="https://publications.waset.org/abstracts/search?q=remote%20sensing" title=" remote sensing"> remote sensing</a>, <a href="https://publications.waset.org/abstracts/search?q=change%20detection" title=" change detection"> change detection</a>, <a href="https://publications.waset.org/abstracts/search?q=satellite%20images" title=" satellite images"> satellite images</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20classification" title=" image classification"> image classification</a> </p> <a href="https://publications.waset.org/abstracts/22554/land-use-change-detection-using-satellite-images-for-najran-city-kingdom-of-saudi-arabia-ksa" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/22554.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">524</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">7013</span> Violence Detection and Tracking on Moving Surveillance Video Using Machine Learning Approach</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Abe%20Degale%20D.">Abe Degale D.</a>, <a href="https://publications.waset.org/abstracts/search?q=Cheng%20Jian"> Cheng Jian</a> </p> <p class="card-text"><strong>Abstract:</strong></p> When creating automated video surveillance systems, violent action recognition is crucial. In recent years, hand-crafted feature detectors have been the primary method for achieving violence detection, such as the recognition of fighting activity. Researchers have also looked into learning-based representational models. On benchmark datasets created especially for the detection of violent sequences in sports and movies, these methods produced good accuracy results. The Hockey dataset's videos with surveillance camera motion present challenges for these algorithms for learning discriminating features. Image recognition and human activity detection challenges have shown success with deep representation-based methods. For the purpose of detecting violent images and identifying aggressive human behaviours, this research suggested a deep representation-based model using the transfer learning idea. The results show that the suggested approach outperforms state-of-the-art accuracy levels by learning the most discriminating features, attaining 99.34% and 99.98% accuracy levels on the Hockey and Movies datasets, respectively. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=violence%20detection" title="violence detection">violence detection</a>, <a href="https://publications.waset.org/abstracts/search?q=faster%20RCNN" title=" faster RCNN"> faster RCNN</a>, <a href="https://publications.waset.org/abstracts/search?q=transfer%20learning%20and" title=" transfer learning and"> transfer learning and</a>, <a href="https://publications.waset.org/abstracts/search?q=surveillance%20video" title=" surveillance video"> surveillance video</a> </p> <a href="https://publications.waset.org/abstracts/171296/violence-detection-and-tracking-on-moving-surveillance-video-using-machine-learning-approach" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/171296.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">108</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">7012</span> Evaluation of Hand Grip Strength and EMG Signal on Visual Reaction</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Sung-Wook%20Shin">Sung-Wook Shin</a>, <a href="https://publications.waset.org/abstracts/search?q=Sung-Taek%20Chung"> Sung-Taek Chung</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Hand grip strength has been utilized as an indicator to evaluate the motor ability of hands, responsible for performing multiple body functions. It is, however, difficult to evaluate other factors (other than hand muscular strength) utilizing the hand grip strength only. In this study, we analyzed the motor ability of hands using EMG and the hand grip strength, simultaneously in order to evaluate concentration, muscular strength reaction time, instantaneous muscular strength change, and agility in response to visual reaction. In results, the average time (and their standard deviations) of muscular strength reaction EMG signal and hand grip strength was found to be 209.6 ± 56.2 ms and 354.3 ± 54.6 ms, respectively. In addition, the onset time which represents acceleration time to reach 90% of maximum hand grip strength, was 382.9 ± 129.9 ms. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=hand%20grip%20strength" title="hand grip strength">hand grip strength</a>, <a href="https://publications.waset.org/abstracts/search?q=EMG" title=" EMG"> EMG</a>, <a href="https://publications.waset.org/abstracts/search?q=visual%20reaction" title=" visual reaction"> visual reaction</a>, <a href="https://publications.waset.org/abstracts/search?q=endurance" title=" endurance"> endurance</a> </p> <a href="https://publications.waset.org/abstracts/11414/evaluation-of-hand-grip-strength-and-emg-signal-on-visual-reaction" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/11414.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">462</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">7011</span> Suggestion for Malware Detection Agent Considering Network Environment</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Ji-Hoon%20Hong">Ji-Hoon Hong</a>, <a href="https://publications.waset.org/abstracts/search?q=Dong-Hee%20Kim"> Dong-Hee Kim</a>, <a href="https://publications.waset.org/abstracts/search?q=Nam-Uk%20Kim"> Nam-Uk Kim</a>, <a href="https://publications.waset.org/abstracts/search?q=Tai-Myoung%20Chung"> Tai-Myoung Chung</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Smartphone users are increasing rapidly. Accordingly, many companies are running BYOD (Bring Your Own Device: Policies to bring private-smartphones to the company) policy to increase work efficiency. However, smartphones are always under the threat of malware, thus the company network that is connected smartphone is exposed to serious risks. Most smartphone malware detection techniques are to perform an independent detection (perform the detection of a single target application). In this paper, we analyzed a variety of intrusion detection techniques. Based on the results of analysis propose an agent using the network IDS. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=android%20malware%20detection" title="android malware detection">android malware detection</a>, <a href="https://publications.waset.org/abstracts/search?q=software-defined%20network" title=" software-defined network"> software-defined network</a>, <a href="https://publications.waset.org/abstracts/search?q=interaction%20environment" title=" interaction environment"> interaction environment</a>, <a href="https://publications.waset.org/abstracts/search?q=android%20malware%20detection" title=" android malware detection"> android malware detection</a>, <a href="https://publications.waset.org/abstracts/search?q=software-defined%20network" title=" software-defined network"> software-defined network</a>, <a href="https://publications.waset.org/abstracts/search?q=interaction%20environment" title=" interaction environment"> interaction environment</a> </p> <a href="https://publications.waset.org/abstracts/39330/suggestion-for-malware-detection-agent-considering-network-environment" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/39330.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">433</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">7010</span> Improved Skin Detection Using Colour Space and Texture</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Medjram%20Sofiane">Medjram Sofiane</a>, <a href="https://publications.waset.org/abstracts/search?q=Babahenini%20Mohamed%20Chaouki"> Babahenini Mohamed Chaouki</a>, <a href="https://publications.waset.org/abstracts/search?q=Mohamed%20Benali%20Yamina"> Mohamed Benali Yamina</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Skin detection is an important task for computer vision systems. A good method for skin detection means a good and successful result of the system. The colour is a good descriptor that allows us to detect skin colour in the images, but because of lightings effects and objects that have a similar colour skin, skin detection becomes difficult. In this paper, we proposed a method using the YCbCr colour space for skin detection and lighting effects elimination, then we use the information of texture to eliminate the false regions detected by the YCbCr colour skin model. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=skin%20detection" title="skin detection">skin detection</a>, <a href="https://publications.waset.org/abstracts/search?q=YCbCr" title=" YCbCr"> YCbCr</a>, <a href="https://publications.waset.org/abstracts/search?q=GLCM" title=" GLCM"> GLCM</a>, <a href="https://publications.waset.org/abstracts/search?q=texture" title=" texture"> texture</a>, <a href="https://publications.waset.org/abstracts/search?q=human%20skin" title=" human skin"> human skin</a> </p> <a href="https://publications.waset.org/abstracts/19039/improved-skin-detection-using-colour-space-and-texture" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/19039.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">459</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">7009</span> Hand Gestures Based Emotion Identification Using Flex Sensors</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=S.%20Ali">S. Ali</a>, <a href="https://publications.waset.org/abstracts/search?q=R.%20Yunus"> R. Yunus</a>, <a href="https://publications.waset.org/abstracts/search?q=A.%20Arif"> A. Arif</a>, <a href="https://publications.waset.org/abstracts/search?q=Y.%20Ayaz"> Y. Ayaz</a>, <a href="https://publications.waset.org/abstracts/search?q=M.%20Baber%20Sial"> M. Baber Sial</a>, <a href="https://publications.waset.org/abstracts/search?q=R.%20Asif"> R. Asif</a>, <a href="https://publications.waset.org/abstracts/search?q=N.%20Naseer"> N. Naseer</a>, <a href="https://publications.waset.org/abstracts/search?q=M.%20Jawad%20Khan"> M. Jawad Khan </a> </p> <p class="card-text"><strong>Abstract:</strong></p> In this study, we have proposed a gesture to emotion recognition method using flex sensors mounted on metacarpophalangeal joints. The flex sensors are fixed in a wearable glove. The data from the glove are sent to PC using Wi-Fi. Four gestures: finger pointing, thumbs up, fist open and fist close are performed by five subjects. Each gesture is categorized into sad, happy, and excited class based on the velocity and acceleration of the hand gesture. Seventeen inspectors observed the emotions and hand gestures of the five subjects. The emotional state based on the investigators assessment and acquired movement speed data is compared. Overall, we achieved 77% accurate results. Therefore, the proposed design can be used for emotional state detection applications. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=emotion%20identification" title="emotion identification">emotion identification</a>, <a href="https://publications.waset.org/abstracts/search?q=emotion%20models" title=" emotion models"> emotion models</a>, <a href="https://publications.waset.org/abstracts/search?q=gesture%20recognition" title=" gesture recognition"> gesture recognition</a>, <a href="https://publications.waset.org/abstracts/search?q=user%20perception" title=" user perception"> user perception</a> </p> <a href="https://publications.waset.org/abstracts/98297/hand-gestures-based-emotion-identification-using-flex-sensors" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/98297.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">285</span> </span> </div> </div> <ul class="pagination"> <li class="page-item disabled"><span class="page-link">‹</span></li> <li class="page-item active"><span class="page-link">1</span></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=hand%20detection&page=2">2</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=hand%20detection&page=3">3</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=hand%20detection&page=4">4</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=hand%20detection&page=5">5</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=hand%20detection&page=6">6</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=hand%20detection&page=7">7</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=hand%20detection&page=8">8</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=hand%20detection&page=9">9</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=hand%20detection&page=10">10</a></li> <li class="page-item disabled"><span class="page-link">...</span></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=hand%20detection&page=234">234</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=hand%20detection&page=235">235</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=hand%20detection&page=2" rel="next">›</a></li> </ul> </div> </main> <footer> <div id="infolinks" class="pt-3 pb-2"> <div class="container"> <div style="background-color:#f5f5f5;" class="p-3"> <div class="row"> <div class="col-md-2"> <ul class="list-unstyled"> About <li><a href="https://waset.org/page/support">About Us</a></li> <li><a href="https://waset.org/page/support#legal-information">Legal</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/WASET-16th-foundational-anniversary.pdf">WASET celebrates its 16th foundational anniversary</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Account <li><a href="https://waset.org/profile">My Account</a></li> 
</ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Explore <li><a href="https://waset.org/disciplines">Disciplines</a></li> <li><a href="https://waset.org/conferences">Conferences</a></li> <li><a href="https://waset.org/conference-programs">Conference Program</a></li> <li><a href="https://waset.org/committees">Committees</a></li> <li><a href="https://publications.waset.org">Publications</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Research <li><a href="https://publications.waset.org/abstracts">Abstracts</a></li> <li><a href="https://publications.waset.org">Periodicals</a></li> <li><a href="https://publications.waset.org/archive">Archive</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Open Science <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Science-Philosophy.pdf">Open Science Philosophy</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Science-Award.pdf">Open Science Award</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Society-Open-Science-and-Open-Innovation.pdf">Open Innovation</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Postdoctoral-Fellowship-Award.pdf">Postdoctoral Fellowship Award</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Scholarly-Research-Review.pdf">Scholarly Research Review</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Support <li><a href="https://waset.org/page/support">Support</a></li> <li><a href="https://waset.org/profile/messages/create">Contact Us</a></li> <li><a href="https://waset.org/profile/messages/create">Report Abuse</a></li> </ul> </div> </div> </div> </div> </div> <div class="container text-center"> <hr style="margin-top:0;margin-bottom:.3rem;"> <a href="https://creativecommons.org/licenses/by/4.0/" target="_blank" class="text-muted small">Creative Commons Attribution 4.0 International License</a> <div id="copy" class="mt-2">© 2024 World Academy of Science, Engineering and Technology</div> </div> </footer> <a href="javascript:" id="return-to-top"><i class="fas fa-arrow-up"></i></a> <div class="modal" id="modal-template"> <div class="modal-dialog"> <div class="modal-content"> <div class="row m-0 mt-1"> <div class="col-md-12"> <button type="button" class="close" data-dismiss="modal" aria-label="Close"><span aria-hidden="true">×</span></button> </div> </div> <div class="modal-body"></div> </div> </div> </div> <script src="https://cdn.waset.org/static/plugins/jquery-3.3.1.min.js"></script> <script src="https://cdn.waset.org/static/plugins/bootstrap-4.2.1/js/bootstrap.bundle.min.js"></script> <script src="https://cdn.waset.org/static/js/site.js?v=150220211556"></script> <script> jQuery(document).ready(function() { /*jQuery.get("https://publications.waset.org/xhr/user-menu", function (response) { jQuery('#mainNavMenu').append(response); });*/ jQuery.get({ url: "https://publications.waset.org/xhr/user-menu", cache: false }).then(function(response){ jQuery('#mainNavMenu').append(response); }); }); </script> </body> </html>