
Search results for: key-point detection and description

aria-labelledby="navbarDropdownPublications"> <a class="dropdown-item" href="https://publications.waset.org/abstracts">Abstracts</a> <a class="dropdown-item" href="https://publications.waset.org">Periodicals</a> <a class="dropdown-item" href="https://publications.waset.org/archive">Archive</a> </div> </li> <li class="nav-item"> <a class="nav-link" href="https://waset.org/page/support" title="Support">Support</a> </li> </ul> </div> </div> </nav> </div> </header> <main> <div class="container mt-4"> <div class="row"> <div class="col-md-9 mx-auto"> <form method="get" action="https://publications.waset.org/abstracts/search"> <div id="custom-search-input"> <div class="input-group"> <i class="fas fa-search"></i> <input type="text" class="search-query" name="q" placeholder="Author, Title, Abstract, Keywords" value="key-point detection and description"> <input type="submit" class="btn_search" value="Search"> </div> </div> </form> </div> </div> <div class="row mt-3"> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Commenced</strong> in January 2007</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Frequency:</strong> Monthly</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Edition:</strong> International</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Paper Count:</strong> 4285</div> </div> </div> </div> <h1 class="mt-3 mb-3 text-center" style="font-size:1.6rem;">Search results for: key-point detection and description</h1> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">4285</span> Keypoint Detection Method Based on Multi-Scale Feature Fusion of Attention Mechanism</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Xiaoxiao%20Li">Xiaoxiao Li</a>, <a href="https://publications.waset.org/abstracts/search?q=Shuangcheng%20Jia"> Shuangcheng Jia</a>, <a href="https://publications.waset.org/abstracts/search?q=Qian%20Li"> Qian Li</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Keypoint detection has always been a challenge in the field of image recognition. This paper proposes a novelty keypoint detection method which is called Multi-Scale Feature Fusion Convolutional Network with Attention (MFFCNA). We verified that the multi-scale features with the attention mechanism module have better feature expression capability. The feature fusion between different scales makes the information that the network model can express more abundant, and the network is easier to converge. On our self-made street sign corner dataset, we validate the MFFCNA model with an accuracy of 97.8% and a recall of 81%, which are 5 and 8 percentage points higher than the HRNet network, respectively. On the COCO dataset, the AP is 71.9%, and the AR is 75.3%, which are 3 points and 2 points higher than HRNet, respectively. Extensive experiments show that our method has a remarkable improvement in the keypoint recognition tasks, and the recognition effect is better than the existing methods. Moreover, our method can be applied not only to keypoint detection but also to image classification and semantic segmentation with good generality. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=keypoint%20detection" title="keypoint detection">keypoint detection</a>, <a href="https://publications.waset.org/abstracts/search?q=feature%20fusion" title=" feature fusion"> feature fusion</a>, <a href="https://publications.waset.org/abstracts/search?q=attention" title=" attention"> attention</a>, <a href="https://publications.waset.org/abstracts/search?q=semantic%20segmentation" title=" semantic segmentation"> semantic segmentation</a> </p> <a href="https://publications.waset.org/abstracts/147796/keypoint-detection-method-based-on-multi-scale-feature-fusion-of-attention-mechanism" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/147796.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">119</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">4284</span> Detection of Keypoint in Press-Fit Curve Based on Convolutional Neural Network</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Shoujia%20Fang">Shoujia Fang</a>, <a href="https://publications.waset.org/abstracts/search?q=Guoqing%20Ding"> Guoqing Ding</a>, <a href="https://publications.waset.org/abstracts/search?q=Xin%20Chen"> Xin Chen</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The quality of press-fit assembly is closely related to reliability and safety of product. The paper proposed a keypoint detection method based on convolutional neural network to improve the accuracy of keypoint detection in press-fit curve. It would provide an auxiliary basis for judging quality of press-fit assembly. The press-fit curve is a curve of press-fit force and displacement. Both force data and distance data are time-series data. Therefore, one-dimensional convolutional neural network is used to process the press-fit curve. After the obtained press-fit data is filtered, the multi-layer one-dimensional convolutional neural network is used to perform the automatic learning of press-fit curve features, and then sent to the multi-layer perceptron to finally output keypoint of the curve. We used the data of press-fit assembly equipment in the actual production process to train CNN model, and we used different data from the same equipment to evaluate the performance of detection. Compared with the existing research result, the performance of detection was significantly improved. This method can provide a reliable basis for the judgment of press-fit quality. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=keypoint%20detection" title="keypoint detection">keypoint detection</a>, <a href="https://publications.waset.org/abstracts/search?q=curve%20feature" title=" curve feature"> curve feature</a>, <a href="https://publications.waset.org/abstracts/search?q=convolutional%20neural%20network" title=" convolutional neural network"> convolutional neural network</a>, <a href="https://publications.waset.org/abstracts/search?q=press-fit%20assembly" title=" press-fit assembly"> press-fit assembly</a> </p> <a href="https://publications.waset.org/abstracts/98263/detection-of-keypoint-in-press-fit-curve-based-on-convolutional-neural-network" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/98263.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">230</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">4283</span> Multiperson Drone Control with Seamless Pilot Switching Using Onboard Camera and Openpose Real-Time Keypoint Detection</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Evan%20Lowhorn">Evan Lowhorn</a>, <a href="https://publications.waset.org/abstracts/search?q=Rocio%20Alba-Flores"> Rocio Alba-Flores</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Traditional classification Convolutional Neural Networks (CNN) attempt to classify an image in its entirety. This becomes problematic when trying to perform classification with a drone’s camera in real-time due to unpredictable backgrounds. Object detectors with bounding boxes can be used to isolate individuals and other items, but the original backgrounds remain within these boxes. These basic detectors have been regularly used to determine what type of object an item is, such as “person” or “dog.” Recent advancement in computer vision, particularly with human imaging, is keypoint detection. Human keypoint detection goes beyond bounding boxes to fully isolate humans and plot points, or Regions of Interest (ROI), on their bodies within an image. ROIs can include shoulders, elbows, knees, heads, etc. These points can then be related to each other and used in deep learning methods such as pose estimation. For drone control based on human motions, poses, or signals using the onboard camera, it is important to have a simple method for pilot identification among multiple individuals while also giving the pilot fine control options for the drone. To achieve this, the OpenPose keypoint detection network was used with body and hand keypoint detection enabled. OpenPose supports the ability to combine multiple keypoint detection methods in real-time with a single network. Body keypoint detection allows simple poses to act as the pilot identifier. The hand keypoint detection with ROIs for each finger can then offer a greater variety of signal options for the pilot once identified. For this work, the individual must raise their non-control arm to be identified as the operator and send commands with the hand on their other arm. The drone ignores all other individuals in the onboard camera feed until the current operator lowers their non-control arm. 
When another individual wish to operate the drone, they simply raise their arm once the current operator relinquishes control, and then they can begin controlling the drone with their other hand. This is all performed mid-flight with no landing or script editing required. When using a desktop with a discrete NVIDIA GPU, the drone’s 2.4 GHz Wi-Fi connection combined with OpenPose restrictions to only body and hand allows this control method to perform as intended while maintaining the responsiveness required for practical use. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=computer%20vision" title="computer vision">computer vision</a>, <a href="https://publications.waset.org/abstracts/search?q=drone%20control" title=" drone control"> drone control</a>, <a href="https://publications.waset.org/abstracts/search?q=keypoint%20detection" title=" keypoint detection"> keypoint detection</a>, <a href="https://publications.waset.org/abstracts/search?q=openpose" title=" openpose"> openpose</a> </p> <a href="https://publications.waset.org/abstracts/139752/multiperson-drone-control-with-seamless-pilot-switching-using-onboard-camera-and-openpose-real-time-keypoint-detection" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/139752.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">184</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">4282</span> Video Text Information Detection and Localization in Lecture Videos Using Moments </h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Belkacem%20Soundes">Belkacem Soundes</a>, <a href="https://publications.waset.org/abstracts/search?q=Guezouli%20Larbi"> Guezouli Larbi</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This paper presents a robust and accurate method for text detection and localization over lecture videos. Frame regions are classified into text or background based on visual feature analysis. However, lecture video shows significant degradation mainly related to acquisition conditions, camera motion and environmental changes resulting in low quality videos. Hence, affecting feature extraction and description efficiency. Moreover, traditional text detection methods cannot be directly applied to lecture videos. Therefore, robust feature extraction methods dedicated to this specific video genre are required for robust and accurate text detection and extraction. Method consists of a three-step process: Slide region detection and segmentation; Feature extraction and non-text filtering. For robust and effective features extraction moment functions are used. Two distinct types of moments are used: orthogonal and non-orthogonal. For orthogonal Zernike Moments, both Pseudo Zernike moments are used, whereas for non-orthogonal ones Hu moments are used. Expressivity and description efficiency are given and discussed. Proposed approach shows that in general, orthogonal moments show high accuracy in comparison to the non-orthogonal one. Pseudo Zernike moments are more effective than Zernike with better computation time. 
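Of the moments compared above, the non-orthogonal Hu moments are the simplest to demonstrate. A minimal OpenCV sketch follows; the log-scaling at the end is a common normalization choice, not something the abstract prescribes.

```python
# Compute the seven Hu moment invariants of a candidate text region.
import cv2
import numpy as np

region = np.zeros((64, 64), dtype=np.uint8)     # stand-in character patch
cv2.putText(region, "A", (8, 56), cv2.FONT_HERSHEY_SIMPLEX, 2, 255, 3)

m = cv2.moments(region)                  # raw spatial/central moments
hu = cv2.HuMoments(m).flatten()          # 7 translation/scale/rotation invariants
hu_log = -np.sign(hu) * np.log10(np.abs(hu) + 1e-30)   # compress dynamic range
print(hu_log)                            # feature vector for non-text filtering
```
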
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=text%20detection" title="text detection">text detection</a>, <a href="https://publications.waset.org/abstracts/search?q=text%20localization" title=" text localization"> text localization</a>, <a href="https://publications.waset.org/abstracts/search?q=lecture%20videos" title=" lecture videos"> lecture videos</a>, <a href="https://publications.waset.org/abstracts/search?q=pseudo%20zernike%20moments" title=" pseudo zernike moments"> pseudo zernike moments</a> </p> <a href="https://publications.waset.org/abstracts/109549/video-text-information-detection-and-localization-in-lecture-videos-using-moments" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/109549.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">152</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">4281</span> The Relationship between Human Pose and Intention to Fire a Handgun</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Joshua%20van%20Staden">Joshua van Staden</a>, <a href="https://publications.waset.org/abstracts/search?q=Dane%20Brown"> Dane Brown</a>, <a href="https://publications.waset.org/abstracts/search?q=Karen%20Bradshaw"> Karen Bradshaw</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Gun violence is a significant problem in modern-day society. Early detection of carried handguns through closed-circuit television (CCTV) can aid in preventing potential gun violence. However, CCTV operators have a limited attention span. Machine learning approaches to automating the detection of dangerous gun carriers provide a way to aid CCTV operators in identifying these individuals. This study provides insight into the relationship between human key points extracted using human pose estimation (HPE) and their intention to fire a weapon. We examine the feature importance of each keypoint and their correlations. We use principal component analysis (PCA) to reduce the feature space and optimize detection. Finally, we run a set of classifiers to determine what form of classifier performs well on this data. We find that hips, shoulders, and knees tend to be crucial aspects of the human pose when making these predictions. Furthermore, the horizontal position plays a larger role than the vertical position. Of the 66 key points, nine principal components could be used to make nonlinear classifications with 86% accuracy. Furthermore, linear classifications could be done with 85% accuracy, showing that there is a degree of linearity in the data. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=feature%20engineering" title="feature engineering">feature engineering</a>, <a href="https://publications.waset.org/abstracts/search?q=human%20pose" title=" human pose"> human pose</a>, <a href="https://publications.waset.org/abstracts/search?q=machine%20learning" title=" machine learning"> machine learning</a>, <a href="https://publications.waset.org/abstracts/search?q=security" title=" security"> security</a> </p> <a href="https://publications.waset.org/abstracts/155235/the-relationship-between-human-pose-and-intention-to-fire-a-handgun" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/155235.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">93</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">4280</span> Standardized Description and Modeling Methods of Semiconductor IP Interfaces</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Seongsoo%20Lee">Seongsoo Lee</a> </p> <p class="card-text"><strong>Abstract:</strong></p> IP reuse is an effective design methodology for modern SoC design to reduce effort and time. However, description and modeling methods of IP interfaces are different due to different IP designers. In this paper, standardized description and modeling methods of IP interfaces are proposed. It consists of 11 items such as IP information, model provision, data type, description level, interface information, port information, signal information, protocol information, modeling level, modeling information, and source file. The proposed description and modeling methods enables easy understanding, simulation, verification, and modification in IP reuse. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=interface" title="interface">interface</a>, <a href="https://publications.waset.org/abstracts/search?q=standardization" title=" standardization"> standardization</a>, <a href="https://publications.waset.org/abstracts/search?q=description" title=" description"> description</a>, <a href="https://publications.waset.org/abstracts/search?q=modeling" title=" modeling"> modeling</a>, <a href="https://publications.waset.org/abstracts/search?q=semiconductor%20IP" title=" semiconductor IP"> semiconductor IP</a> </p> <a href="https://publications.waset.org/abstracts/16150/standardized-description-and-modeling-methods-of-semiconductor-ip-interfaces" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/16150.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">502</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">4279</span> A Speeded up Robust Scale-Invariant Feature Transform Currency Recognition Algorithm</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Daliyah%20S.%20Aljutaili">Daliyah S. 
4279. A Speeded up Robust Scale-Invariant Feature Transform Currency Recognition Algorithm
Authors: Daliyah S. Aljutaili, Redna A. Almutlaq, Suha A. Alharbi, Dina M. Ibrahim
Abstract: Currencies around the world look very different from each other; for instance, the size, color, and pattern of the paper differ. With the development of modern banking services, automatic methods for paper currency recognition have become important in many applications, such as vending machines. One phase of a currency recognition architecture is feature detection and description. Many algorithms are used for this phase, but they still have disadvantages. This paper proposes a feature detection algorithm that merges the advantages of the current SIFT and SURF algorithms, which we call the Speeded up Robust Scale-Invariant Feature Transform (SR-SIFT) algorithm. The proposed SR-SIFT algorithm overcomes the problems of both SIFT and SURF, aiming to speed up SIFT feature detection while keeping it robust. Simulation results demonstrate that SR-SIFT decreases the average response time, especially for small and minimum numbers of best keypoints, and increases the spread of the best keypoints over the surface of the currency. Furthermore, the proposed algorithm places the true best points inside the currency edge more accurately than the other two algorithms.
Keywords: currency recognition, feature detection and description, SIFT algorithm, SURF algorithm, speeded up and robust features
Procedia: https://publications.waset.org/abstracts/94315 | PDF: https://publications.waset.org/abstracts/94315.pdf | Downloads: 235

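SR-SIFT itself is not published in the abstract, so the sketch below only demonstrates the baseline quantities the paper compares on: detecting SIFT keypoints, keeping the strongest N by response, and timing the detector.

```python
# Baseline sketch: SIFT detection, "best keypoints" by response, and timing.
# The input image is a random stand-in, not a real banknote.
import time
import cv2
import numpy as np

img = np.random.randint(0, 255, (480, 640), dtype=np.uint8)

sift = cv2.SIFT_create()
t0 = time.perf_counter()
keypoints, descriptors = sift.detectAndCompute(img, None)
elapsed = time.perf_counter() - t0

# Keep the 50 strongest keypoints, mirroring the "best key points" notion.
best = sorted(keypoints, key=lambda k: k.response, reverse=True)[:50]
print(f"{len(keypoints)} keypoints in {elapsed * 1000:.1f} ms; "
      f"kept {len(best)} strongest")
```
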
4278. Comparative Analysis of Edge Detection Techniques for Extracting Characters
Authors: Rana Gill, Chandandeep Kaur
Abstract: Segmentation of images can be implemented using different fundamental algorithms: edge detection (discontinuity-based segmentation), region growing (similarity-based segmentation), and iterative thresholding. A comprehensive literature review relevant to the study describes different techniques for vehicle number plate detection and edge detection techniques widely used on different types of images. This work is based on edge detection techniques, calculating a threshold on the basis of five edge operators: Prewitt, Roberts, Sobel, LoG, and Canny. Segmentation of characters in different types of images, such as vehicle number plates, house name plates, and characters on sign boards, is selected as the case study. The proposed methodology has seven stages and has been implemented using MATLAB R2010a. All five operators were compared on the basis of their performance: the Canny operator produces the best results among the operators used, and the performance of the edge operators in decreasing order is Canny > LoG > Sobel > Prewitt > Roberts.
Keywords: segmentation, edge detection, text, extracting characters
Procedia: https://publications.waset.org/abstracts/9054 | PDF: https://publications.waset.org/abstracts/9054.pdf | Downloads: 426

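The same five-operator comparison can be reproduced in a few lines of Python (the paper used MATLAB R2010a). The sketch below applies each operator to a stand-in image; the paper's thresholding step and seven-stage pipeline are omitted.

```python
# Apply the five edge operators compared in the paper to one test image.
import numpy as np
from scipy import ndimage
from skimage import data, feature, filters

img = data.camera().astype(float) / 255.0   # stand-in for a number-plate image

edges = {
    "prewitt": filters.prewitt(img),
    "roberts": filters.roberts(img),
    "sobel": filters.sobel(img),
    "log": ndimage.gaussian_laplace(img, sigma=2.0),  # Laplacian of Gaussian
    "canny": feature.canny(img, sigma=2.0),           # binary edge map
}
for name, e in edges.items():
    print(f"{name:8s} mean edge response: {np.abs(e).mean():.4f}")
```
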
4277. Designing Space through Narratives: The Role of the Tour Description in the Architectural Design Process
Authors: A. Papadopoulou
Abstract: When people are asked to provide an oral description of a space, they usually provide a Tour description, a dynamic type of spatial narrative centered on the narrator's body, rather than a Map description, a static type of spatial narrative focused on the organization of the space as seen from above. Also, subjects with training in the architecture discipline tend to adopt a Tour perspective when the narrative refers to a space they have actually experienced, but a Map perspective when the narrative refers to a space they have merely imagined. This pilot study investigates whether the Tour description, the most common mode in oral descriptions of experienced space, is a cognitive perspective taken in the process of designing a space. The study examines whether a spatial description provided by a subject with architecture training in the form of a Tour description would be accurately translated into a spatial layout by other subjects with architecture training. The subjects were given the Tour description in written form and were asked to make a plan drawing of the described space. The results demonstrate that when we conceive and design space, we do not adopt the same rules and cognitive patterns that we adopt when we reconstruct space from memory: the rules that underlie the Tour description were not detected in the translation from narratives to drawings. In a second phase, the study also investigates how subjects with architecture training describe space when forced to take a Tour perspective in their oral descriptions. The results of this phase demonstrate that, if intentionally taken, the Tour perspective leads to descriptions of space that are more detailed and focused on experiential aspects.
Keywords: architecture, design process, embodied cognition, map description, oral narratives, tour description
Procedia: https://publications.waset.org/abstracts/103500 | PDF: https://publications.waset.org/abstracts/103500.pdf | Downloads: 158

4276. Enhancement of Primary User Detection in Cognitive Radio by Scattering Transform
Authors: A. Moawad, K. C. Yao, A. Mansour, R. Gautier
Abstract: Detecting an occupied frequency band is a major issue in cognitive radio systems. The detection process becomes difficult when the signal occupying the band of interest has faded amplitude due to multipath effects, which make an occupying user hard to detect. This work mitigates the missed-detection problem in the context of cognitive radio in frequency-selective fading channels by proposing a blind channel estimation method based on the scattering transform. Conventional energy detection is applied first; if the missed-detection probability is greater than or equal to 50%, channel estimation is applied to the received signal, followed by channel equalization to reduce the channel effects. In the proposed channel estimator, we modify the Morlet wavelet by using its first derivative for better frequency resolution; a mathematical description of the modified function and its frequency resolution is formulated in this work. The improved frequency resolution is required to follow the spectral variation of the channel. The channel estimation error is evaluated in the mean-square sense for different channel settings, and energy detection is applied to the equalized received signal. Simulation results show an improvement in the missed-detection probability compared to detection based on principal component analysis, achieved at the expense of increased estimator complexity, which depends on the number of wavelet filters as related to the channel taps. Detection performance also shows an improvement in detection probability for low signal-to-noise scenarios over principal component analysis-based energy detection.
Keywords: channel estimation, cognitive radio, scattering transform, spectrum sensing
Procedia: https://publications.waset.org/abstracts/79688 | PDF: https://publications.waset.org/abstracts/79688.pdf | Downloads: 196

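A small numerical sketch of the modified-wavelet idea: a real-valued Morlet atom, its first derivative (obtained by the product rule), and a crude bandwidth comparison as a frequency-resolution proxy. The parameter values are illustrative, not the authors'.

```python
# Compare the spectra of a real Morlet atom and its first derivative.
import numpy as np

def morlet(t, w0=5.0):
    """Real Morlet: cosine carrier under a Gaussian envelope."""
    return np.cos(w0 * t) * np.exp(-t**2 / 2)

def morlet_derivative(t, w0=5.0):
    """d/dt of the Morlet function above (product rule)."""
    return (-w0 * np.sin(w0 * t) - t * np.cos(w0 * t)) * np.exp(-t**2 / 2)

t = np.linspace(-6, 6, 1024)
for name, psi in [("morlet", morlet(t)), ("d/dt morlet", morlet_derivative(t))]:
    spectrum = np.abs(np.fft.rfft(psi))
    peak = spectrum.argmax()
    # Crude -3 dB width around the peak as a frequency-resolution proxy.
    bw = int(np.sum(spectrum > spectrum[peak] / np.sqrt(2)))
    print(f"{name:12s} peak bin {peak}, approx -3 dB width {bw} bins")
```
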
4275. Hybrid Deep Learning and FAST-BRISK 3D Object Detection Technique for Bin-Picking Application
Authors: Thanakrit Taweesoontorn, Sarucha Yanyong, Poom Konghuayrob
Abstract: Robotic arms have gained popularity in various industries due to their accuracy and efficiency. This research proposes a method for bin-picking tasks using a cobot, combining the YOLOv5 CNN model for object detection and pose estimation with traditional feature detection (FAST), feature description (BRISK), and matching algorithms. By integrating these algorithms and utilizing a small-scale depth-sensor camera to capture depth and color images, the system achieves real-time object detection and accurate pose estimation, enabling the robotic arm to pick objects correctly in both position and orientation. Furthermore, the proposed method is implemented within the ROS framework to provide a seamless platform for robotic control and integration. This integration of robotics, cameras, and AI technology contributes to the development of industrial robotics, opening up new possibilities for automating challenging tasks and improving overall operational efficiency.
Keywords: robotic vision, image processing, applications of robotics, artificial intelligence
Procedia: https://publications.waset.org/abstracts/176550 | PDF: https://publications.waset.org/abstracts/176550.pdf | Downloads: 97

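The classical half of the hybrid pipeline (FAST detection, BRISK description, Hamming-distance matching) can be sketched directly with OpenCV. The synthetic images and thresholds below are stand-ins; the YOLOv5, depth, and pose-estimation steps are out of scope here.

```python
# FAST corners + BRISK binary descriptors + brute-force Hamming matching.
import cv2
import numpy as np

# Synthetic template and scene sharing structure (stand-ins for the
# depth camera's color images).
template = np.zeros((240, 320), dtype=np.uint8)
cv2.rectangle(template, (60, 60), (260, 180), 255, -1)
cv2.putText(template, "OBJ", (100, 140), cv2.FONT_HERSHEY_SIMPLEX, 1.5, 0, 3)
scene = np.zeros((480, 640), dtype=np.uint8)
scene[120:360, 160:480] = template

fast = cv2.FastFeatureDetector_create(threshold=25)   # feature detection
brisk = cv2.BRISK_create()                            # feature description

kp1, des1 = brisk.compute(template, fast.detect(template, None))
kp2, des2 = brisk.compute(scene, fast.detect(scene, None))

matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)
print(f"{len(matches)} cross-checked matches; best Hamming distance "
      f"{matches[0].distance if matches else float('nan')}")
```
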
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=autopilot%20mini-car%20measurement%20system" title="autopilot mini-car measurement system">autopilot mini-car measurement system</a>, <a href="https://publications.waset.org/abstracts/search?q=electric%20field%20detection" title=" electric field detection"> electric field detection</a>, <a href="https://publications.waset.org/abstracts/search?q=field%20map" title=" field map"> field map</a>, <a href="https://publications.waset.org/abstracts/search?q=static%20zone%20measurement" title=" static zone measurement"> static zone measurement</a> </p> <a href="https://publications.waset.org/abstracts/153711/an-autopilot-system-for-static-zone-detection" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/153711.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">101</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">4273</span> Efficient Signal Detection Using QRD-M Based on Channel Condition in MIMO-OFDM System</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Jae-Jeong%20Kim">Jae-Jeong Kim</a>, <a href="https://publications.waset.org/abstracts/search?q=Ki-Ro%20Kim"> Ki-Ro Kim</a>, <a href="https://publications.waset.org/abstracts/search?q=Hyoung-Kyu%20Song"> Hyoung-Kyu Song</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In this paper, we propose an efficient signal detector that switches M parameter of QRD-M detection scheme is proposed for MIMO-OFDM system. The proposed detection scheme calculates the threshold by 1-norm condition number and then switches M parameter of QRD-M detection scheme according to channel information. If channel condition is bad, the parameter M is set to high value to increase the accuracy of detection. If channel condition is good, the parameter M is set to low value to reduce complexity of detection. Therefore, the proposed detection scheme has better trade off between BER performance and complexity than the conventional detection scheme. The simulation result shows that the complexity of proposed detection scheme is lower than QRD-M detection scheme with similar BER performance. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=MIMO-OFDM" title="MIMO-OFDM">MIMO-OFDM</a>, <a href="https://publications.waset.org/abstracts/search?q=QRD-M" title=" QRD-M"> QRD-M</a>, <a href="https://publications.waset.org/abstracts/search?q=channel%20condition" title=" channel condition"> channel condition</a>, <a href="https://publications.waset.org/abstracts/search?q=BER" title=" BER"> BER</a> </p> <a href="https://publications.waset.org/abstracts/3518/efficient-signal-detection-using-qrd-m-based-on-channel-condition-in-mimo-ofdm-system" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/3518.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">370</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">4272</span> Reduced Complexity of ML Detection Combined with DFE</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Jae-Hyun%20Ro">Jae-Hyun Ro</a>, <a href="https://publications.waset.org/abstracts/search?q=Yong-Jun%20Kim"> Yong-Jun Kim</a>, <a href="https://publications.waset.org/abstracts/search?q=Chang-Bin%20Ha"> Chang-Bin Ha</a>, <a href="https://publications.waset.org/abstracts/search?q=Hyoung-Kyu%20Song"> Hyoung-Kyu Song </a> </p> <p class="card-text"><strong>Abstract:</strong></p> In multiple input multiple output-orthogonal frequency division multiplexing (MIMO-OFDM) systems, many detection schemes have been developed to improve the error performance and to reduce the complexity. Maximum likelihood (ML) detection has optimal error performance but it has very high complexity. Thus, this paper proposes reduced complexity of ML detection combined with decision feedback equalizer (DFE). The error performance of the proposed detection scheme is higher than the conventional DFE. But the complexity of the proposed scheme is lower than the conventional ML detection. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=detection" title="detection">detection</a>, <a href="https://publications.waset.org/abstracts/search?q=DFE" title=" DFE"> DFE</a>, <a href="https://publications.waset.org/abstracts/search?q=MIMO-OFDM" title=" MIMO-OFDM"> MIMO-OFDM</a>, <a href="https://publications.waset.org/abstracts/search?q=ML" title=" ML"> ML</a> </p> <a href="https://publications.waset.org/abstracts/42215/reduced-complexity-of-ml-detection-combined-with-dfe" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/42215.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">610</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">4271</span> Grid Pattern Recognition and Suppression in Computed Radiographic Images</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Igor%20Belykh">Igor Belykh</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Anti-scatter grids used in radiographic imaging for the contrast enhancement leave specific artifacts. Those artifacts may be visible or may cause Moiré effect when a digital image is resized on a diagnostic monitor. In this paper, we propose an automated grid artifacts detection and suppression algorithm which is still an actual problem. Grid artifacts detection is based on statistical approach in spatial domain. Grid artifacts suppression is based on Kaiser bandstop filter transfer function design and application avoiding ringing artifacts. Experimental results are discussed and concluded with description of advantages over existing approaches. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=grid" title="grid">grid</a>, <a href="https://publications.waset.org/abstracts/search?q=computed%20radiography" title=" computed radiography"> computed radiography</a>, <a href="https://publications.waset.org/abstracts/search?q=pattern%20recognition" title=" pattern recognition"> pattern recognition</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20processing" title=" image processing"> image processing</a>, <a href="https://publications.waset.org/abstracts/search?q=filtering" title=" filtering"> filtering</a> </p> <a href="https://publications.waset.org/abstracts/7833/grid-pattern-recognition-and-suppression-in-computed-radiographic-images" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/7833.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">283</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">4270</span> Speech Detection Model Based on Deep Neural Networks Classifier for Speech Emotions Recognition</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=A.%20Shoiynbek">A. Shoiynbek</a>, <a href="https://publications.waset.org/abstracts/search?q=K.%20Kozhakhmet"> K. 
4270. Speech Detection Model Based on Deep Neural Networks Classifier for Speech Emotions Recognition
Authors: A. Shoiynbek, K. Kozhakhmet, P. Menezes, D. Kuanyshbay, D. Bayazitov
Abstract: Speech emotion recognition has received increasing research interest in recent years. Most research work has used emotional speech collected under controlled conditions, recorded by actors imitating and artificially producing emotions in front of a microphone. There are four issues with that approach: (1) the emotions are not natural, which means machines learn to recognize fake emotions; (2) the emotional material is very limited in quantity and poor in its variety of speaking; (3) SER is language-dependent; (4) consequently, each time researchers want to start work with SER, they need to find a good emotional database in their language. In this paper, we propose an approach for creating an automatic tool for speech emotion extraction based on facial emotion recognition and describe the sequence of actions of the proposed approach. One of the first objectives in that sequence is speech detection. The paper gives a detailed description of a speech detection model based on a fully connected deep neural network for the Kazakh and Russian languages. Despite the high speech detection results for Kazakh and Russian, the described process is suitable for any language. To illustrate the working capacity of the developed model, we have performed an analysis of speech detection and extraction on real tasks.
Keywords: deep neural networks, speech detection, speech emotion recognition, Mel-frequency cepstrum coefficients, collecting speech emotion corpus, collecting speech emotion dataset, Kazakh speech dataset
Procedia: https://publications.waset.org/abstracts/152814 | PDF: https://publications.waset.org/abstracts/152814.pdf | Downloads: 101

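A frame-level sketch of the kind of detector described, assuming MFCC input features (the keywords mention Mel-frequency cepstrum coefficients) feeding a small fully connected network. The 13 MFCCs, layer sizes, and untrained weights are illustrative assumptions, not the paper's configuration.

```python
# MFCC frames -> fully connected network -> per-frame speech probability.
import numpy as np
import librosa
import torch
import torch.nn as nn

def mfcc_frames(wav, sr=16000, n_mfcc=13):
    """Per-frame MFCC vectors, shape (frames, n_mfcc)."""
    return librosa.feature.mfcc(y=wav, sr=sr, n_mfcc=n_mfcc).T

speech_net = nn.Sequential(              # fully connected speech detector
    nn.Linear(13, 64), nn.ReLU(),
    nn.Linear(64, 64), nn.ReLU(),
    nn.Linear(64, 1), nn.Sigmoid())      # P(frame contains speech)

wav = np.random.randn(16000).astype(np.float32)   # 1 s stand-in audio
feats = torch.from_numpy(mfcc_frames(wav)).float()
p_speech = speech_net(feats).squeeze(1)           # one probability per frame
speech_frames = p_speech > 0.5                    # frames to extract
print(f"{int(speech_frames.sum())} of {len(speech_frames)} frames flagged")
```
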
4269. Moving Object Detection Using Histogram of Uniformly Oriented Gradient
Authors: Wei-Jong Yang, Yu-Siang Su, Pau-Choo Chung, Jar-Ferr Yang
Abstract: Moving object detection (MOD) is an important issue in advanced driver assistance systems (ADAS), where pedestrians and scooters are two important classes of moving objects. Real-world systems face two important challenges for MOD: computational complexity and detection accuracy. Histogram of oriented gradient (HOG) features can easily detect object edges with invariance to changes in illumination and shadowing. However, to reduce execution time in real-time systems, the image must be downsampled, which increases the influence of outliers. For this reason, we propose histogram of uniformly-oriented gradient (HUG) features to obtain a more accurate description of the contour of the human body. In the testing phase, a support vector machine (SVM) with a linear kernel function is used. Experimental results show the correctness and effectiveness of the proposed method: with SVM classifiers, real testing shows the proposed HUG features achieve better classification performance than the HOG features.
Keywords: moving object detection, histogram of oriented gradient, histogram of uniformly-oriented gradient, linear support vector machine
Procedia: https://publications.waset.org/abstracts/62854 | PDF: https://publications.waset.org/abstracts/62854.pdf | Downloads: 594

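The HUG variant itself is not published in the abstract, so the sketch below shows the baseline it improves on: HOG descriptors with a linear-kernel SVM, on random stand-in data with an assumed 64x128 detection window.

```python
# Baseline pipeline: HOG feature extraction + linear SVM classification.
import numpy as np
from skimage.feature import hog
from sklearn.svm import LinearSVC

def hog_features(window):                 # window: 128x64 grayscale patch
    return hog(window, orientations=9, pixels_per_cell=(8, 8),
               cells_per_block=(2, 2), block_norm='L2-Hys')

rng = np.random.default_rng(0)
X = np.array([hog_features(rng.random((128, 64))) for _ in range(100)])
y = rng.integers(0, 2, 100)               # 1 = pedestrian/scooter, 0 = background

clf = LinearSVC().fit(X, y)               # linear-kernel SVM
print("train accuracy:", clf.score(X, y))
```
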
href="https://publications.waset.org/abstracts/search?q=Assylbek%20Mukhametzhanov"> Assylbek Mukhametzhanov</a>, <a href="https://publications.waset.org/abstracts/search?q=Temirlan%20Shoiynbek"> Temirlan Shoiynbek</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Speech emotion recognition (SER) has received increasing research interest in recent years. It is a common practice to utilize emotional speech collected under controlled conditions recorded by actors imitating and artificially producing emotions in front of a microphone. There are four issues related to that approach: emotions are not natural, meaning that machines are learning to recognize fake emotions; emotions are very limited in quantity and poor in variety of speaking; there is some language dependency in SER; consequently, each time researchers want to start work with SER, they need to find a good emotional database in their language. This paper proposes an approach to create an automatic tool for speech emotion extraction based on facial emotion recognition and describes the sequence of actions involved in the proposed approach. One of the first objectives in the sequence of actions is the speech detection issue. The paper provides a detailed description of the speech detection model based on a fully connected deep neural network for Kazakh and Russian. Despite the high results in speech detection for Kazakh and Russian, the described process is suitable for any language. To investigate the working capacity of the developed model, an analysis of speech detection and extraction from real tasks has been performed. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=deep%20neural%20networks" title="deep neural networks">deep neural networks</a>, <a href="https://publications.waset.org/abstracts/search?q=speech%20detection" title=" speech detection"> speech detection</a>, <a href="https://publications.waset.org/abstracts/search?q=speech%20emotion%20recognition" title=" speech emotion recognition"> speech emotion recognition</a>, <a href="https://publications.waset.org/abstracts/search?q=Mel-frequency%20cepstrum%20coefficients" title=" Mel-frequency cepstrum coefficients"> Mel-frequency cepstrum coefficients</a>, <a href="https://publications.waset.org/abstracts/search?q=collecting%20speech%20emotion%20corpus" title=" collecting speech emotion corpus"> collecting speech emotion corpus</a>, <a href="https://publications.waset.org/abstracts/search?q=collecting%20speech%20emotion%20dataset" title=" collecting speech emotion dataset"> collecting speech emotion dataset</a>, <a href="https://publications.waset.org/abstracts/search?q=Kazakh%20speech%20dataset" title=" Kazakh speech dataset"> Kazakh speech dataset</a> </p> <a href="https://publications.waset.org/abstracts/189328/speech-detection-model-based-on-deep-neural-networks-classifier-for-speech-emotions-recognition" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/189328.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">26</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">4267</span> Using Vulnerability to Reduce False Positive Rate in Intrusion Detection Systems</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a 
href="https://publications.waset.org/abstracts/search?q=Nadjah%20Chergui">Nadjah Chergui</a>, <a href="https://publications.waset.org/abstracts/search?q=Narhimene%20Boustia"> Narhimene Boustia</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Intrusion Detection Systems are an essential tool for network security infrastructure. However, IDSs have a serious problem which is the generating of massive number of alerts, most of them are false positive ones which can hide true alerts and make the analyst confused to analyze the right alerts for report the true attacks. The purpose behind this paper is to present a formalism model to perform correlation engine by the reduction of false positive alerts basing on vulnerability contextual information. For that, we propose a formalism model based on non-monotonic JClassicδє description logic augmented with a default (δ) and an exception (є) operator that allows a dynamic inference according to contextual information. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=context" title="context">context</a>, <a href="https://publications.waset.org/abstracts/search?q=default" title=" default"> default</a>, <a href="https://publications.waset.org/abstracts/search?q=exception" title=" exception"> exception</a>, <a href="https://publications.waset.org/abstracts/search?q=vulnerability" title=" vulnerability"> vulnerability</a> </p> <a href="https://publications.waset.org/abstracts/46511/using-vulnerability-to-reduce-false-positive-rate-in-intrusion-detection-systems" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/46511.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">259</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">4266</span> Cigarette Smoke Detection Based on YOLOV3</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Wei%20Li">Wei Li</a>, <a href="https://publications.waset.org/abstracts/search?q=Tuo%20Yang"> Tuo Yang</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In order to satisfy the real-time and accurate requirements of cigarette smoke detection in complex scenes, a cigarette smoke detection technology based on the combination of deep learning and color features was proposed. Firstly, based on the color features of cigarette smoke, the suspicious cigarette smoke area in the image is extracted. Secondly, combined with the efficiency of cigarette smoke detection and the problem of network overfitting, a network model for cigarette smoke detection was designed according to YOLOV3 algorithm to reduce the false detection rate. The experimental results show that the method is feasible and effective, and the accuracy of cigarette smoke detection is up to 99.13%, which satisfies the requirements of real-time cigarette smoke detection in complex scenes. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=deep%20learning" title="deep learning">deep learning</a>, <a href="https://publications.waset.org/abstracts/search?q=computer%20vision" title=" computer vision"> computer vision</a>, <a href="https://publications.waset.org/abstracts/search?q=cigarette%20smoke%20detection" title=" cigarette smoke detection"> cigarette smoke detection</a>, <a href="https://publications.waset.org/abstracts/search?q=YOLOV3" title=" YOLOV3"> YOLOV3</a>, <a href="https://publications.waset.org/abstracts/search?q=color%20feature%20extraction" title=" color feature extraction"> color feature extraction</a> </p> <a href="https://publications.waset.org/abstracts/159151/cigarette-smoke-detection-based-on-yolov3" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/159151.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">87</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">4265</span> An Architecture for New Generation of Distributed Intrusion Detection System Based on Preventive Detection</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=H.%20Benmoussa">H. Benmoussa</a>, <a href="https://publications.waset.org/abstracts/search?q=A.%20A.%20El%20Kalam"> A. A. El Kalam</a>, <a href="https://publications.waset.org/abstracts/search?q=A.%20Ait%20Ouahman"> A. Ait Ouahman</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The design and implementation of intrusion detection systems (IDS) remain an important area of research in the security of information systems. Despite the importance and reputation of the current intrusion detection systems, their efficiency and effectiveness remain limited as they should include active defense approach to allow anticipating and predicting intrusions before their occurrence. Consequently, they must be readapted. For this purpose we suggest a new generation of distributed intrusion detection system based on preventive detection approach and using intelligent and mobile agents. Our architecture benefits from mobile agent features and addresses some of the issues with centralized and hierarchical models. Also, it presents advantages in terms of increasing scalability and flexibility. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=Intrusion%20Detection%20System%20%28IDS%29" title="Intrusion Detection System (IDS)">Intrusion Detection System (IDS)</a>, <a href="https://publications.waset.org/abstracts/search?q=preventive%20detection" title=" preventive detection"> preventive detection</a>, <a href="https://publications.waset.org/abstracts/search?q=mobile%20agents" title=" mobile agents"> mobile agents</a>, <a href="https://publications.waset.org/abstracts/search?q=distributed%20architecture" title=" distributed architecture"> distributed architecture</a> </p> <a href="https://publications.waset.org/abstracts/18239/an-architecture-for-new-generation-of-distributed-intrusion-detection-system-based-on-preventive-detection" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/18239.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">583</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">4264</span> Video Based Ambient Smoke Detection By Detecting Directional Contrast Decrease</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Omair%20Ghori">Omair Ghori</a>, <a href="https://publications.waset.org/abstracts/search?q=Anton%20Stadler"> Anton Stadler</a>, <a href="https://publications.waset.org/abstracts/search?q=Stefan%20Wilk"> Stefan Wilk</a>, <a href="https://publications.waset.org/abstracts/search?q=Wolfgang%20Effelsberg"> Wolfgang Effelsberg</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Fire-related incidents account for extensive loss of life and material damage. Quick and reliable detection of occurring fires has high real world implications. Whereas a major research focus lies on the detection of outdoor fires, indoor camera-based fire detection is still an open issue. Cameras in combination with computer vision helps to detect flames and smoke more quickly than conventional fire detectors. In this work, we present a computer vision-based smoke detection algorithm based on contrast changes and a multi-step classification. This work accelerates computer vision-based fire detection considerably in comparison with classical indoor-fire detection. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=contrast%20analysis" title="contrast analysis">contrast analysis</a>, <a href="https://publications.waset.org/abstracts/search?q=early%20fire%20detection" title=" early fire detection"> early fire detection</a>, <a href="https://publications.waset.org/abstracts/search?q=video%20smoke%20detection" title=" video smoke detection"> video smoke detection</a>, <a href="https://publications.waset.org/abstracts/search?q=video%20surveillance" title=" video surveillance"> video surveillance</a> </p> <a href="https://publications.waset.org/abstracts/52006/video-based-ambient-smoke-detection-by-detecting-directional-contrast-decrease" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/52006.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">447</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">4263</span> Performance Degradation for the GLR Test-Statistics for Spatial Signal Detection</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Olesya%20Bolkhovskaya">Olesya Bolkhovskaya</a>, <a href="https://publications.waset.org/abstracts/search?q=Alexander%20Maltsev"> Alexander Maltsev</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Antenna arrays are widely used in modern radio systems in sonar and communications. The solving of the detection problems of a useful signal on the background of noise is based on the GLRT method. There is a large number of problem which depends on the known a priori information. In this work, in contrast to the majority of already solved problems, it is used only difference spatial properties of the signal and noise for detection. We are analyzing the influence of the degree of non-coherence of signal and noise unhomogeneity on the performance characteristics of different GLRT statistics. The description of the signal and noise is carried out by means of the spatial covariance matrices C in the cases of different number of known information. The partially coherent signal is simulated as a plane wave with a random angle of incidence of the wave concerning a normal. Background noise is simulated as random process with uniform distribution function in each element. The results of investigation of degradation of performance characteristics for different cases are represented in this work. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=GLRT" title="GLRT">GLRT</a>, <a href="https://publications.waset.org/abstracts/search?q=Neumann-Pearson%E2%80%99s%20criterion" title=" Neumann-Pearson’s criterion"> Neumann-Pearson’s criterion</a>, <a href="https://publications.waset.org/abstracts/search?q=Test-statistics" title=" Test-statistics"> Test-statistics</a>, <a href="https://publications.waset.org/abstracts/search?q=degradation" title=" degradation"> degradation</a>, <a href="https://publications.waset.org/abstracts/search?q=spatial%20processing" title=" spatial processing"> spatial processing</a>, <a href="https://publications.waset.org/abstracts/search?q=multielement%20antenna%20array" title=" multielement antenna array"> multielement antenna array</a> </p> <a href="https://publications.waset.org/abstracts/1985/performance-degradation-for-the-glr-test-statistics-for-spatial-signal-detection" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/1985.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">385</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">4262</span> A Contribution to the Polynomial Eigen Problem</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Malika%20Yaici">Malika Yaici</a>, <a href="https://publications.waset.org/abstracts/search?q=Kamel%20Hariche"> Kamel Hariche</a>, <a href="https://publications.waset.org/abstracts/search?q=Tim%20Clarke"> Tim Clarke</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The relationship between eigenstructure (eigenvalues and eigenvectors) and latent structure (latent roots and latent vectors) is established. In control theory eigenstructure is associated with the state space description of a dynamic multi-variable system and a latent structure is associated with its matrix fraction description. Beginning with block controller and block observer state space forms and moving on to any general state space form, we develop the identities that relate eigenvectors and latent vectors in either direction. Numerical examples illustrate this result. A brief discussion of the potential of these identities in linear control system design follows. Additionally, we present a consequent result: a quick and easy method to solve the polynomial eigenvalue problem for regular matrix polynomials. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=eigenvalues%2Feigenvectors" title="eigenvalues/eigenvectors">eigenvalues/eigenvectors</a>, <a href="https://publications.waset.org/abstracts/search?q=latent%20values%2Fvectors" title=" latent values/vectors"> latent values/vectors</a>, <a href="https://publications.waset.org/abstracts/search?q=matrix%20fraction%20description" title=" matrix fraction description"> matrix fraction description</a>, <a href="https://publications.waset.org/abstracts/search?q=state%20space%20description" title=" state space description "> state space description </a> </p> <a href="https://publications.waset.org/abstracts/14247/a-contribution-to-the-polynomial-eigen-problem" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/14247.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">470</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">4261</span> Intrusion Detection Techniques in NaaS in the Cloud: A Review </h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Rashid%20Mahmood">Rashid Mahmood</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The network as a service (NaaS) usage has been well-known from the last few years in the many applications, like mission critical applications. In the NaaS, prevention method is not adequate as the security concerned, so the detection method should be added to the security issues in NaaS. The authentication and encryption are considered the first solution of the NaaS problem whereas now these are not sufficient as NaaS use is increasing. In this paper, we are going to present the concept of intrusion detection and then survey some of major intrusion detection techniques in NaaS and aim to compare in some important fields. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=IDS" title="IDS">IDS</a>, <a href="https://publications.waset.org/abstracts/search?q=cloud" title=" cloud"> cloud</a>, <a href="https://publications.waset.org/abstracts/search?q=naas" title=" naas"> naas</a>, <a href="https://publications.waset.org/abstracts/search?q=detection" title=" detection"> detection</a> </p> <a href="https://publications.waset.org/abstracts/36475/intrusion-detection-techniques-in-naas-in-the-cloud-a-review" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/36475.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">320</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">4260</span> Securing Web Servers by the Intrusion Detection System (IDS)</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Yousef%20Farhaoui">Yousef Farhaoui </a> </p> <p class="card-text"><strong>Abstract:</strong></p> An IDS is a tool which is used to improve the level of security. We present in this paper different architectures of IDS. 
We also discuss measures that define the effectiveness of an IDS, as well as very recent work on the standardization and homogenization of IDSs. Finally, we propose a new IDS model called BiIDS (an IDS based on two detection principles) for securing web servers and applications. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=intrusion%20detection" title="intrusion detection">intrusion detection</a>, <a href="https://publications.waset.org/abstracts/search?q=architectures" title=" architectures"> architectures</a>, <a href="https://publications.waset.org/abstracts/search?q=characteristic" title=" characteristic"> characteristic</a>, <a href="https://publications.waset.org/abstracts/search?q=tools" title=" tools"> tools</a>, <a href="https://publications.waset.org/abstracts/search?q=security" title=" security"> security</a>, <a href="https://publications.waset.org/abstracts/search?q=web%20server" title=" web server"> web server</a> </p> <a href="https://publications.waset.org/abstracts/13346/securing-web-servers-by-the-intrusion-detection-system-ids" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/13346.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">418</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">4259</span> Suggestion for Malware Detection Agent Considering Network Environment</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Ji-Hoon%20Hong">Ji-Hoon Hong</a>, <a href="https://publications.waset.org/abstracts/search?q=Dong-Hee%20Kim"> Dong-Hee Kim</a>, <a href="https://publications.waset.org/abstracts/search?q=Nam-Uk%20Kim"> Nam-Uk Kim</a>, <a href="https://publications.waset.org/abstracts/search?q=Tai-Myoung%20Chung"> Tai-Myoung Chung</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The number of smartphone users is increasing rapidly. Accordingly, many companies are adopting BYOD (Bring Your Own Device) policies that allow private smartphones into the workplace to increase work efficiency. However, smartphones are always under threat of malware, so a company network with connected smartphones is exposed to serious risks. Most smartphone malware detection techniques perform independent detection (detection of a single target application). In this paper, we analyze a variety of intrusion detection techniques and, based on the results of this analysis, propose an agent that uses a network IDS. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=android%20malware%20detection" title="android malware detection">android malware detection</a>, <a href="https://publications.waset.org/abstracts/search?q=software-defined%20network" title=" software-defined network"> software-defined network</a>, <a href="https://publications.waset.org/abstracts/search?q=interaction%20environment" title=" interaction environment"> interaction environment</a>, <a href="https://publications.waset.org/abstracts/search?q=android%20malware%20detection" title=" android malware detection"> android malware detection</a>, <a href="https://publications.waset.org/abstracts/search?q=software-defined%20network" title=" software-defined network"> software-defined network</a>, <a href="https://publications.waset.org/abstracts/search?q=interaction%20environment" title=" interaction environment"> interaction environment</a> </p> <a href="https://publications.waset.org/abstracts/39330/suggestion-for-malware-detection-agent-considering-network-environment" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/39330.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">433</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">4258</span> Improved Skin Detection Using Colour Space and Texture</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Medjram%20Sofiane">Medjram Sofiane</a>, <a href="https://publications.waset.org/abstracts/search?q=Babahenini%20Mohamed%20Chaouki"> Babahenini Mohamed Chaouki</a>, <a href="https://publications.waset.org/abstracts/search?q=Mohamed%20Benali%20Yamina"> Mohamed Benali Yamina</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Skin detection is an important task for computer vision systems. A good method for skin detection means a good and successful result of the system. The colour is a good descriptor that allows us to detect skin colour in the images, but because of lightings effects and objects that have a similar colour skin, skin detection becomes difficult. In this paper, we proposed a method using the YCbCr colour space for skin detection and lighting effects elimination, then we use the information of texture to eliminate the false regions detected by the YCbCr colour skin model. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=skin%20detection" title="skin detection">skin detection</a>, <a href="https://publications.waset.org/abstracts/search?q=YCbCr" title=" YCbCr"> YCbCr</a>, <a href="https://publications.waset.org/abstracts/search?q=GLCM" title=" GLCM"> GLCM</a>, <a href="https://publications.waset.org/abstracts/search?q=texture" title=" texture"> texture</a>, <a href="https://publications.waset.org/abstracts/search?q=human%20skin" title=" human skin"> human skin</a> </p> <a href="https://publications.waset.org/abstracts/19039/improved-skin-detection-using-colour-space-and-texture" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/19039.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">459</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">4257</span> Effective Editable Emoticon Description Schema for Mobile Applications</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Jiwon%20Lee">Jiwon Lee</a>, <a href="https://publications.waset.org/abstracts/search?q=Si-hwan%20Jang"> Si-hwan Jang</a>, <a href="https://publications.waset.org/abstracts/search?q=Sanghyun%20Joo"> Sanghyun Joo</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The popularity of emoticons are on the rise since the mobile messengers are generalized. At the same time, few problems of emoticons are also occurred due to innate characteristics of emoticons. Too many emoticons make difficult people to select one which is well-suited for user's intention. On the contrary to this, sometimes user cannot find the emoticon which expresses user's exact intention. Poor information delivery of emoticon is another problem due to a major part of current emoticons are focused on emotion delivery. In this situation, we propose a new concept of emoticons, editable emoticons, to solve above drawbacks of emoticons. User can edit the components inside the proposed editable emoticon and send it to express his exact intention. By doing so, the number of editable emoticons can be maintained reasonable, and it can express user's exact intention. Further, editable emoticons can be used as information deliverer according to user's intention and editing skills. In this paper, we propose the concept of editable emoticons and schema based editable emoticon description method. The proposed description method is 200 times superior to the compared screen capturing method in the view of transmission bandwidth. Further, the description method is designed to have compatibility since it follows MPEG-UD international standard. The proposed editable emoticons can be exploited not only mobile applications, but also various fields such as education and medical field. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=description%20schema" title="description schema">description schema</a>, <a href="https://publications.waset.org/abstracts/search?q=editable%20emoticon" title=" editable emoticon"> editable emoticon</a>, <a href="https://publications.waset.org/abstracts/search?q=emoticon%20transmission" title=" emoticon transmission"> emoticon transmission</a>, <a href="https://publications.waset.org/abstracts/search?q=mobile%20applications" title=" mobile applications"> mobile applications</a> </p> <a href="https://publications.waset.org/abstracts/15456/effective-editable-emoticon-description-schema-for-mobile-applications" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/15456.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">297</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">4256</span> Real-Time Detection of Space Manipulator Self-Collision</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Zhang%20Xiaodong">Zhang Xiaodong</a>, <a href="https://publications.waset.org/abstracts/search?q=Tang%20Zixin"> Tang Zixin</a>, <a href="https://publications.waset.org/abstracts/search?q=Liu%20Xin"> Liu Xin</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In order to avoid self-collision of space manipulators during operation process, a real-time detection method is proposed in this paper. The manipulator is fitted into a cylinder enveloping surface, and then the detection algorithm of collision between cylinders is analyzed. The collision model of space manipulator self-links can be detected by using this algorithm in real-time detection during the operation process. To ensure security of the operation, a safety threshold is designed. The simulation and experiment results verify the effectiveness of the proposed algorithm for a 7-DOF space manipulator. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=space%20manipulator" title="space manipulator">space manipulator</a>, <a href="https://publications.waset.org/abstracts/search?q=collision%20detection" title=" collision detection"> collision detection</a>, <a href="https://publications.waset.org/abstracts/search?q=self-collision" title=" self-collision"> self-collision</a>, <a href="https://publications.waset.org/abstracts/search?q=the%20real-time%20collision%20detection" title=" the real-time collision detection"> the real-time collision detection</a> </p> <a href="https://publications.waset.org/abstracts/23258/real-time-detection-of-space-manipulator-self-collision" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/23258.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">469</span> </span> </div> </div> <ul class="pagination"> <li class="page-item disabled"><span class="page-link">&lsaquo;</span></li> <li class="page-item active"><span class="page-link">1</span></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=key-point%20detection%20and%20description&amp;page=2">2</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=key-point%20detection%20and%20description&amp;page=3">3</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=key-point%20detection%20and%20description&amp;page=4">4</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=key-point%20detection%20and%20description&amp;page=5">5</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=key-point%20detection%20and%20description&amp;page=6">6</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=key-point%20detection%20and%20description&amp;page=7">7</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=key-point%20detection%20and%20description&amp;page=8">8</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=key-point%20detection%20and%20description&amp;page=9">9</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=key-point%20detection%20and%20description&amp;page=10">10</a></li> <li class="page-item disabled"><span class="page-link">...</span></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=key-point%20detection%20and%20description&amp;page=142">142</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=key-point%20detection%20and%20description&amp;page=143">143</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=key-point%20detection%20and%20description&amp;page=2" rel="next">&rsaquo;</a></li> </ul> </div> </main> <footer> <div id="infolinks" class="pt-3 pb-2"> <div class="container"> <div style="background-color:#f5f5f5;" class="p-3"> <div class="row"> <div class="col-md-2"> <ul class="list-unstyled"> About <li><a href="https://waset.org/page/support">About Us</a></li> 
<li><a href="https://waset.org/page/support#legal-information">Legal</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/WASET-16th-foundational-anniversary.pdf">WASET celebrates its 16th foundational anniversary</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Account <li><a href="https://waset.org/profile">My Account</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Explore <li><a href="https://waset.org/disciplines">Disciplines</a></li> <li><a href="https://waset.org/conferences">Conferences</a></li> <li><a href="https://waset.org/conference-programs">Conference Program</a></li> <li><a href="https://waset.org/committees">Committees</a></li> <li><a href="https://publications.waset.org">Publications</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Research <li><a href="https://publications.waset.org/abstracts">Abstracts</a></li> <li><a href="https://publications.waset.org">Periodicals</a></li> <li><a href="https://publications.waset.org/archive">Archive</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Open Science <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Science-Philosophy.pdf">Open Science Philosophy</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Science-Award.pdf">Open Science Award</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Society-Open-Science-and-Open-Innovation.pdf">Open Innovation</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Postdoctoral-Fellowship-Award.pdf">Postdoctoral Fellowship Award</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Scholarly-Research-Review.pdf">Scholarly Research Review</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Support <li><a href="https://waset.org/page/support">Support</a></li> <li><a href="https://waset.org/profile/messages/create">Contact Us</a></li> <li><a href="https://waset.org/profile/messages/create">Report Abuse</a></li> </ul> </div> </div> </div> </div> </div> <div class="container text-center"> <hr style="margin-top:0;margin-bottom:.3rem;"> <a href="https://creativecommons.org/licenses/by/4.0/" target="_blank" class="text-muted small">Creative Commons Attribution 4.0 International License</a> <div id="copy" class="mt-2">&copy; 2024 World Academy of Science, Engineering and Technology</div> </div> </footer> <a href="javascript:" id="return-to-top"><i class="fas fa-arrow-up"></i></a> <div class="modal" id="modal-template"> <div class="modal-dialog"> <div class="modal-content"> <div class="row m-0 mt-1"> <div class="col-md-12"> <button type="button" class="close" data-dismiss="modal" aria-label="Close"><span aria-hidden="true">&times;</span></button> </div> </div> <div class="modal-body"></div> </div> </div> </div> <script src="https://cdn.waset.org/static/plugins/jquery-3.3.1.min.js"></script> <script src="https://cdn.waset.org/static/plugins/bootstrap-4.2.1/js/bootstrap.bundle.min.js"></script> <script src="https://cdn.waset.org/static/js/site.js?v=150220211556"></script> <script> jQuery(document).ready(function() { /*jQuery.get("https://publications.waset.org/xhr/user-menu", function (response) { jQuery('#mainNavMenu').append(response); });*/ jQuery.get({ url: "https://publications.waset.org/xhr/user-menu", cache: false 
</body> </html>
