CINXE.COM

Search results for: landmark detection

<!DOCTYPE html> <html lang="en" dir="ltr"> <head> <!-- Google tag (gtag.js) --> <script async src="https://www.googletagmanager.com/gtag/js?id=G-P63WKM1TM1"></script> <script> window.dataLayer = window.dataLayer || []; function gtag(){dataLayer.push(arguments);} gtag('js', new Date()); gtag('config', 'G-P63WKM1TM1'); </script> <!-- Yandex.Metrika counter --> <script type="text/javascript" > (function(m,e,t,r,i,k,a){m[i]=m[i]||function(){(m[i].a=m[i].a||[]).push(arguments)}; m[i].l=1*new Date(); for (var j = 0; j < document.scripts.length; j++) {if (document.scripts[j].src === r) { return; }} k=e.createElement(t),a=e.getElementsByTagName(t)[0],k.async=1,k.src=r,a.parentNode.insertBefore(k,a)}) (window, document, "script", "https://mc.yandex.ru/metrika/tag.js", "ym"); ym(55165297, "init", { clickmap:false, trackLinks:true, accurateTrackBounce:true, webvisor:false }); </script> <noscript><div><img src="https://mc.yandex.ru/watch/55165297" style="position:absolute; left:-9999px;" alt="" /></div></noscript> <!-- /Yandex.Metrika counter --> <!-- Matomo --> <!-- End Matomo Code --> <title>Search results for: landmark detection</title> <meta name="description" content="Search results for: landmark detection"> <meta name="keywords" content="landmark detection"> <meta name="viewport" content="width=device-width, initial-scale=1, minimum-scale=1, maximum-scale=1, user-scalable=no"> <meta charset="utf-8"> <link href="https://cdn.waset.org/favicon.ico" type="image/x-icon" rel="shortcut icon"> <link href="https://cdn.waset.org/static/plugins/bootstrap-4.2.1/css/bootstrap.min.css" rel="stylesheet"> <link href="https://cdn.waset.org/static/plugins/fontawesome/css/all.min.css" rel="stylesheet"> <link href="https://cdn.waset.org/static/css/site.css?v=150220211555" rel="stylesheet"> </head> <body> <header> <div class="container"> <nav class="navbar navbar-expand-lg navbar-light"> <a class="navbar-brand" href="https://waset.org"> <img src="https://cdn.waset.org/static/images/wasetc.png" alt="Open Science Research Excellence" title="Open Science Research Excellence" /> </a> <button class="d-block d-lg-none navbar-toggler ml-auto" type="button" data-toggle="collapse" data-target="#navbarMenu" aria-controls="navbarMenu" aria-expanded="false" aria-label="Toggle navigation"> <span class="navbar-toggler-icon"></span> </button> <div class="w-100"> <div class="d-none d-lg-flex flex-row-reverse"> <form method="get" action="https://waset.org/search" class="form-inline my-2 my-lg-0"> <input class="form-control mr-sm-2" type="search" placeholder="Search Conferences" value="landmark detection" name="q" aria-label="Search"> <button class="btn btn-light my-2 my-sm-0" type="submit"><i class="fas fa-search"></i></button> </form> </div> <div class="collapse navbar-collapse mt-1" id="navbarMenu"> <ul class="navbar-nav ml-auto align-items-center" id="mainNavMenu"> <li class="nav-item"> <a class="nav-link" href="https://waset.org/conferences" title="Conferences in 2024/2025/2026">Conferences</a> </li> <li class="nav-item"> <a class="nav-link" href="https://waset.org/disciplines" title="Disciplines">Disciplines</a> </li> <li class="nav-item"> <a class="nav-link" href="https://waset.org/committees" rel="nofollow">Committees</a> </li> <li class="nav-item dropdown"> <a class="nav-link dropdown-toggle" href="#" id="navbarDropdownPublications" role="button" data-toggle="dropdown" aria-haspopup="true" aria-expanded="false"> Publications </a> <div class="dropdown-menu" aria-labelledby="navbarDropdownPublications"> <a class="dropdown-item" href="https://publications.waset.org/abstracts">Abstracts</a> <a class="dropdown-item" href="https://publications.waset.org">Periodicals</a> <a class="dropdown-item" href="https://publications.waset.org/archive">Archive</a> </div> </li> <li class="nav-item"> <a class="nav-link" href="https://waset.org/page/support" title="Support">Support</a> </li> </ul> </div> </div> </nav> </div> </header> <main> <div class="container mt-4"> <div class="row"> <div class="col-md-9 mx-auto"> <form method="get" action="https://publications.waset.org/abstracts/search"> <div id="custom-search-input"> <div class="input-group"> <i class="fas fa-search"></i> <input type="text" class="search-query" name="q" placeholder="Author, Title, Abstract, Keywords" value="landmark detection"> <input type="submit" class="btn_search" value="Search"> </div> </div> </form> </div> </div> <div class="row mt-3"> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Commenced</strong> in January 2007</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Frequency:</strong> Monthly</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Edition:</strong> International</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Paper Count:</strong> 3520</div> </div> </div> </div> <h1 class="mt-3 mb-3 text-center" style="font-size:1.6rem;">Search results for: landmark detection</h1> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3520</span> Mutiple Medical Landmark Detection on X-Ray Scan Using Reinforcement Learning</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Vijaya%20Yuvaram%20Singh%20V%20M">Vijaya Yuvaram Singh V M</a>, <a href="https://publications.waset.org/abstracts/search?q=Kameshwar%20Rao%20J%20V"> Kameshwar Rao J V</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The challenge with development of neural network based methods for medical is the availability of data. Anatomical landmark detection in the medical domain is a process to find points on the x-ray scan report of the patient. Most of the time this task is done manually by trained professionals as it requires precision and domain knowledge. Traditionally object detection based methods are used for landmark detection. Here, we utilize reinforcement learning and query based method to train a single agent capable of detecting multiple landmarks. A deep Q network agent is trained to detect single and multiple landmarks present on hip and shoulder from x-ray scan of a patient. Here a single agent is trained to find multiple landmark making it superior to having individual agents per landmark. For the initial study, five images of different patients are used as the environment and tested the agents performance on two unseen images. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=reinforcement%20learning" title="reinforcement learning">reinforcement learning</a>, <a href="https://publications.waset.org/abstracts/search?q=medical%20landmark%20detection" title=" medical landmark detection"> medical landmark detection</a>, <a href="https://publications.waset.org/abstracts/search?q=multi%20target%20detection" title=" multi target detection"> multi target detection</a>, <a href="https://publications.waset.org/abstracts/search?q=deep%20neural%20network" title=" deep neural network"> deep neural network</a> </p> <a href="https://publications.waset.org/abstracts/127710/mutiple-medical-landmark-detection-on-x-ray-scan-using-reinforcement-learning" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/127710.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">142</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3519</span> Automatic Landmark Selection Based on Feature Clustering for Visual Autonomous Unmanned Aerial Vehicle Navigation</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Paulo%20Fernando%20Silva%20Filho">Paulo Fernando Silva Filho</a>, <a href="https://publications.waset.org/abstracts/search?q=Elcio%20Hideiti%20Shiguemori"> Elcio Hideiti Shiguemori</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The selection of specific landmarks for an Unmanned Aerial Vehicles&rsquo; Visual Navigation systems based on Automatic Landmark Recognition has significant influence on the precision of the system&rsquo;s estimated position. At the same time, manual selection of the landmarks does not guarantee a high recognition rate, which would also result on a poor precision. This work aims to develop an automatic landmark selection that will take the image of the flight area and identify the best landmarks to be recognized by the Visual Navigation Landmark Recognition System. The criterion to select a landmark is based on features detected by ORB or AKAZE and edges information on each possible landmark. Results have shown that disposition of possible landmarks is quite different from the human perception. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=clustering" title="clustering">clustering</a>, <a href="https://publications.waset.org/abstracts/search?q=edges" title=" edges"> edges</a>, <a href="https://publications.waset.org/abstracts/search?q=feature%20points" title=" feature points"> feature points</a>, <a href="https://publications.waset.org/abstracts/search?q=landmark%20selection" title=" landmark selection"> landmark selection</a>, <a href="https://publications.waset.org/abstracts/search?q=X-means" title=" X-means"> X-means</a> </p> <a href="https://publications.waset.org/abstracts/91173/automatic-landmark-selection-based-on-feature-clustering-for-visual-autonomous-unmanned-aerial-vehicle-navigation" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/91173.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">279</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3518</span> Remembering Route in an Unfamiliar Homogenous Environment</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Ahmed%20Sameer">Ahmed Sameer</a>, <a href="https://publications.waset.org/abstracts/search?q=Braj%20Bhushan"> Braj Bhushan</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The objective of our study was to compare two techniques (no landmark vs imaginary landmark) of remembering route while traversing in an unfamiliar homogenous environment. We used two videos each having nine identical turns with no landmarks. In the first video participant was required to remember the sequence of turns. In the second video participant was required to imagine a landmark at each turn and associate the turn with it. In both the task the participant was asked to recall the sequence of turns as it appeared in the video. Results showed that performance in the first condition i.e. without use of landmarks was better than imaginary landmark condition. The difference, however, became significant when the participant were tested again about 30 minutes later though performance was still better in no-landmark condition. The finding is surprising given the past research in memory and is explained in terms of cognitive factors such as mental workload. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=wayfinding" title="wayfinding">wayfinding</a>, <a href="https://publications.waset.org/abstracts/search?q=landmarks" title=" landmarks"> landmarks</a>, <a href="https://publications.waset.org/abstracts/search?q=unfamiliar%20environment" title=" unfamiliar environment"> unfamiliar environment</a>, <a href="https://publications.waset.org/abstracts/search?q=cognitive%20psychology" title=" cognitive psychology"> cognitive psychology</a> </p> <a href="https://publications.waset.org/abstracts/25660/remembering-route-in-an-unfamiliar-homogenous-environment" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/25660.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">476</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3517</span> Italian Speech Vowels Landmark Detection through the Legacy Tool &#039;xkl&#039; with Integration of Combined CNNs and RNNs</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Kaleem%20Kashif">Kaleem Kashif</a>, <a href="https://publications.waset.org/abstracts/search?q=Tayyaba%20Anam"> Tayyaba Anam</a>, <a href="https://publications.waset.org/abstracts/search?q=Yizhi%20Wu"> Yizhi Wu</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This paper introduces a methodology for advancing Italian speech vowels landmark detection within the distinctive feature-based speech recognition domain. Leveraging the legacy tool 'xkl' by integrating combined convolutional neural networks (CNNs) and recurrent neural networks (RNNs), the study presents a comprehensive enhancement to the 'xkl' legacy software. This integration incorporates re-assigned spectrogram methodologies, enabling meticulous acoustic analysis. Simultaneously, our proposed model, integrating combined CNNs and RNNs, demonstrates unprecedented precision and robustness in landmark detection. The augmentation of re-assigned spectrogram fusion within the 'xkl' software signifies a meticulous advancement, particularly enhancing precision related to vowel formant estimation. This augmentation catalyzes unparalleled accuracy in landmark detection, resulting in a substantial performance leap compared to conventional methods. The proposed model emerges as a state-of-the-art solution in the distinctive feature-based speech recognition systems domain. In the realm of deep learning, a synergistic integration of combined CNNs and RNNs is introduced, endowed with specialized temporal embeddings, harnessing self-attention mechanisms, and positional embeddings. The proposed model allows it to excel in capturing intricate dependencies within Italian speech vowels, rendering it highly adaptable and sophisticated in the distinctive feature domain. Furthermore, our advanced temporal modeling approach employs Bayesian temporal encoding, refining the measurement of inter-landmark intervals. Comparative analysis against state-of-the-art models reveals a substantial improvement in accuracy, highlighting the robustness and efficacy of the proposed methodology. Upon rigorous testing on a database (LaMIT) speech recorded in a silent room by four Italian native speakers, the landmark detector demonstrates exceptional performance, achieving a 95% true detection rate and a 10% false detection rate. A majority of missed landmarks were observed in proximity to reduced vowels. These promising results underscore the robust identifiability of landmarks within the speech waveform, establishing the feasibility of employing a landmark detector as a front end in a speech recognition system. The synergistic integration of re-assigned spectrogram fusion, CNNs, RNNs, and Bayesian temporal encoding not only signifies a significant advancement in Italian speech vowels landmark detection but also positions the proposed model as a leader in the field. The model offers distinct advantages, including unparalleled accuracy, adaptability, and sophistication, marking a milestone in the intersection of deep learning and distinctive feature-based speech recognition. This work contributes to the broader scientific community by presenting a methodologically rigorous framework for enhancing landmark detection accuracy in Italian speech vowels. The integration of cutting-edge techniques establishes a foundation for future advancements in speech signal processing, emphasizing the potential of the proposed model in practical applications across various domains requiring robust speech recognition systems. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=landmark%20detection" title="landmark detection">landmark detection</a>, <a href="https://publications.waset.org/abstracts/search?q=acoustic%20analysis" title=" acoustic analysis"> acoustic analysis</a>, <a href="https://publications.waset.org/abstracts/search?q=convolutional%20neural%20network" title=" convolutional neural network"> convolutional neural network</a>, <a href="https://publications.waset.org/abstracts/search?q=recurrent%20neural%20network" title=" recurrent neural network"> recurrent neural network</a> </p> <a href="https://publications.waset.org/abstracts/184529/italian-speech-vowels-landmark-detection-through-the-legacy-tool-xkl-with-integration-of-combined-cnns-and-rnns" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/184529.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">63</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3516</span> Real-Time Fitness Monitoring with MediaPipe</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Chandra%20Prayaga">Chandra Prayaga</a>, <a href="https://publications.waset.org/abstracts/search?q=Lakshmi%20Prayaga"> Lakshmi Prayaga</a>, <a href="https://publications.waset.org/abstracts/search?q=Aaron%20Wade"> Aaron Wade</a>, <a href="https://publications.waset.org/abstracts/search?q=Kyle%20Rank"> Kyle Rank</a>, <a href="https://publications.waset.org/abstracts/search?q=Gopi%20Shankar%20Mallu"> Gopi Shankar Mallu</a>, <a href="https://publications.waset.org/abstracts/search?q=Sri%20Satya"> Sri Satya</a>, <a href="https://publications.waset.org/abstracts/search?q=Harsha%20Pola"> Harsha Pola</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In today's tech-driven world, where connectivity shapes our daily lives, maintaining physical and emotional health is crucial. Athletic trainers play a vital role in optimizing athletes' performance and preventing injuries. However, a shortage of trainers impacts the quality of care. This study introduces a vision-based exercise monitoring system leveraging Google's MediaPipe library for precise tracking of bicep curl exercises and simultaneous posture monitoring. We propose a three-stage methodology: landmark detection, side detection, and angle computation. Our system calculates angles at the elbow, wrist, neck, and torso to assess exercise form. Experimental results demonstrate the system's effectiveness in distinguishing between good and partial repetitions and evaluating body posture during exercises, providing real-time feedback for precise fitness monitoring. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=physical%20health" title="physical health">physical health</a>, <a href="https://publications.waset.org/abstracts/search?q=athletic%20trainers" title=" athletic trainers"> athletic trainers</a>, <a href="https://publications.waset.org/abstracts/search?q=fitness%20monitoring" title=" fitness monitoring"> fitness monitoring</a>, <a href="https://publications.waset.org/abstracts/search?q=technology%20driven%20solutions" title=" technology driven solutions"> technology driven solutions</a>, <a href="https://publications.waset.org/abstracts/search?q=Google%E2%80%99s%20MediaPipe" title=" Google’s MediaPipe"> Google’s MediaPipe</a>, <a href="https://publications.waset.org/abstracts/search?q=landmark%20detection" title=" landmark detection"> landmark detection</a>, <a href="https://publications.waset.org/abstracts/search?q=angle%20computation" title=" angle computation"> angle computation</a>, <a href="https://publications.waset.org/abstracts/search?q=real-time%20feedback" title=" real-time feedback"> real-time feedback</a> </p> <a href="https://publications.waset.org/abstracts/183020/real-time-fitness-monitoring-with-mediapipe" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/183020.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">66</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3515</span> Strabismus Detection Using Eye Alignment Stability</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Anoop%20T.%20R.">Anoop T. R.</a>, <a href="https://publications.waset.org/abstracts/search?q=Otman%20Basir"> Otman Basir</a>, <a href="https://publications.waset.org/abstracts/search?q=Robert%20F.%20Hess"> Robert F. Hess</a>, <a href="https://publications.waset.org/abstracts/search?q=Ben%20Thompson"> Ben Thompson</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Strabismus refers to a misalignment of the eyes. Early detection and treatment of strabismus in childhood can prevent the development of permanent vision loss due to abnormal development of visual brain areas. Currently, many children with strabismus remain undiagnosed until school entry because current automated screening methods have limited success in the preschool age range. A method for strabismus detection using eye alignment stability (EAS) is proposed. This method starts with face detection, followed by facial landmark detection, eye region segmentation, eye gaze extraction, and eye alignment stability estimation. Binarization and morphological operations are performed for segmenting the pupil region from the eye. After finding the EAS, its absolute value is used to differentiate the strabismic eye from the non-strabismic eye. If the value of the eye alignment stability is greater than a particular threshold, then the eyes are misaligned, and if its value is less than the threshold, the eyes are aligned. The method was tested on 175 strabismic and non-strabismic images obtained from Kaggle and Google Photos. The strabismic eye is taken as a positive class, and the non-strabismic eye is taken as a negative class. The test produced a true positive rate of 100% and a false positive rate of 7.69%. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=strabismus" title="strabismus">strabismus</a>, <a href="https://publications.waset.org/abstracts/search?q=face%20detection" title=" face detection"> face detection</a>, <a href="https://publications.waset.org/abstracts/search?q=facial%20landmarks" title=" facial landmarks"> facial landmarks</a>, <a href="https://publications.waset.org/abstracts/search?q=eye%20segmentation" title=" eye segmentation"> eye segmentation</a>, <a href="https://publications.waset.org/abstracts/search?q=eye%20gaze" title=" eye gaze"> eye gaze</a>, <a href="https://publications.waset.org/abstracts/search?q=binarization" title=" binarization"> binarization</a> </p> <a href="https://publications.waset.org/abstracts/177646/strabismus-detection-using-eye-alignment-stability" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/177646.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">76</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3514</span> The Twelfth Rib as a Landmark for Surgery</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Jake%20Tempo">Jake Tempo</a>, <a href="https://publications.waset.org/abstracts/search?q=Georgina%20Williams"> Georgina Williams</a>, <a href="https://publications.waset.org/abstracts/search?q=Iain%20Robertson"> Iain Robertson</a>, <a href="https://publications.waset.org/abstracts/search?q=Claire%20Pascoe"> Claire Pascoe</a>, <a href="https://publications.waset.org/abstracts/search?q=Darren%20Rama"> Darren Rama</a>, <a href="https://publications.waset.org/abstracts/search?q=Richard%20Cetti"> Richard Cetti</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Introduction: The twelfth rib is commonly used as a landmark for surgery; however, its variability in length has not been formally studied. The highly variable rib length provides a challenge for urologists seeking a consistent landmark for percutaneous nephrolithotomy and retroperitoneoscopic surgery. Methods and materials: We analysed CT scans of 100 adults who had imaging between 23rd March and twelfth April 2020 at an Australian Hospital. We measured the distance from the mid-sagittal line to the twelfth rib tip in the axial plane as a surrogate for true rib length. We also measured the distance from the twelfth rib tip to the kidney, spleen, and liver. Results: Length from the mid-sagittal line to the right twelfth rib tip varied from 46 (percentile 95%CI 40 to 57) to 136mm (percentile 95%CI 133 to 138). On the left, the distances varied from 55 (percentile 95%CI 50 to 64) to 134mm (percentile 95%CI 131 to 135). Twenty-three percent of people had an organ lying between the tip of the twelfth rib and the kidney on the right, and 11% of people had the same finding on the left. Conclusion: The twelfth rib is highly variable in its length. Similar variability was recorded in the distance from the tip to intra-abdominal organs. Due to the frequency of organs lying between the tip of the rib and the kidney, it should not be used as a landmark for accessing the kidney without prior knowledge of an individual patient’s anatomy, as seen on imaging. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=PCNL" title="PCNL">PCNL</a>, <a href="https://publications.waset.org/abstracts/search?q=rib" title=" rib"> rib</a>, <a href="https://publications.waset.org/abstracts/search?q=anatomy" title=" anatomy"> anatomy</a>, <a href="https://publications.waset.org/abstracts/search?q=nephrolithotomy" title=" nephrolithotomy"> nephrolithotomy</a> </p> <a href="https://publications.waset.org/abstracts/145162/the-twelfth-rib-as-a-landmark-for-surgery" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/145162.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">115</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3513</span> Facial Recognition and Landmark Detection in Fitness Assessment and Performance Improvement</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Brittany%20Richardson">Brittany Richardson</a>, <a href="https://publications.waset.org/abstracts/search?q=Ying%20Wang"> Ying Wang</a> </p> <p class="card-text"><strong>Abstract:</strong></p> For physical therapy, exercise prescription, athlete training, and regular fitness training, it is crucial to perform health assessments or fitness assessments periodically. An accurate assessment is propitious for tracking recovery progress, preventing potential injury and making long-range training plans. Assessments include necessary measurements, height, weight, blood pressure, heart rate, body fat, etc. and advanced evaluation, muscle group strength, stability-mobility, and movement evaluation, etc. In the current standard assessment procedures, the accuracy of assessments, especially advanced evaluations, largely depends on the experience of physicians, coaches, and personal trainers. And it is challenging to track clients’ progress in the current assessment. Unlike the tradition assessment, in this paper, we present a deep learning based face recognition algorithm for accurate, comprehensive and trackable assessment. Based on the result from our assessment, physicians, coaches, and personal trainers are able to adjust the training targets and methods. The system categorizes the difficulty levels of the current activity for the client or user, furthermore make more comprehensive assessments based on tracking muscle group over time using a designed landmark detection method. The system also includes the function of grading and correcting the form of the clients during exercise. Experienced coaches and personal trainer can tell the clients' limit based on their facial expression and muscle group movements, even during the first several sessions. Similar to this, using a convolution neural network, the system is trained with people’s facial expression to differentiate challenge levels for clients. It uses landmark detection for subtle changes in muscle groups movements. It measures the proximal mobility of the hips and thoracic spine, the proximal stability of the scapulothoracic region and distal mobility of the glenohumeral joint, as well as distal mobility, and its effect on the kinetic chain. This system integrates data from other fitness assistant devices, including but not limited to Apple Watch, Fitbit, etc. for a improved training and testing performance. The system itself doesn’t require history data for an individual client, but the history data of a client can be used to create a more effective exercise plan. In order to validate the performance of the proposed work, an experimental design is presented. The results show that the proposed work contributes towards improving the quality of exercise plan, execution, progress tracking, and performance. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=exercise%20prescription" title="exercise prescription">exercise prescription</a>, <a href="https://publications.waset.org/abstracts/search?q=facial%20recognition" title=" facial recognition"> facial recognition</a>, <a href="https://publications.waset.org/abstracts/search?q=landmark%20detection" title=" landmark detection"> landmark detection</a>, <a href="https://publications.waset.org/abstracts/search?q=fitness%20assessments" title=" fitness assessments"> fitness assessments</a> </p> <a href="https://publications.waset.org/abstracts/98573/facial-recognition-and-landmark-detection-in-fitness-assessment-and-performance-improvement" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/98573.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">134</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3512</span> Wayfinding Strategies in an Unfamiliar Homogenous Environment</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Ahemd%20Sameer">Ahemd Sameer</a>, <a href="https://publications.waset.org/abstracts/search?q=Braj%20Bhushan"> Braj Bhushan</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The objective of our study was to compare wayfinding strategies to remember route while navigation in an unfamiliar homogenous environment. Two videos developed using free ware Trimble Sketchup© each having nine identical turns (3 right, 3 left, 3 straight) with no distinguishing feature at any turn. Thirt-two male post-graduate students of IIT Kanpur participated in the study. The experiment was conducted in three phases. In the first phase participant generated a list of personally known items to be used as landmarks. In the second phase participant saw the first video and was required to remember the sequence of turns. In the second video participant was required to imagine a landmark from the list generated in the first phase at each turn and associate the turn with it. In both the task the participant was asked to recall the sequence of turns as it appeared in the video. In the third phase, which was 20 minutes after the second phase, participants again recalled the sequence of turns. Results showed that performance in the first condition i.e. without use of landmarks was better than imaginary landmark condition. The difference, however, became significant when the participant were tested again about 30 minutes later though performance was still better in no-landmark condition. The finding is surprising given the past research in memory and is explained in terms of cognitive factors such as mental workload. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=Wayfinding" title="Wayfinding">Wayfinding</a>, <a href="https://publications.waset.org/abstracts/search?q=Landmark" title=" Landmark"> Landmark</a>, <a href="https://publications.waset.org/abstracts/search?q=Homogenous%20Environment" title=" Homogenous Environment"> Homogenous Environment</a>, <a href="https://publications.waset.org/abstracts/search?q=Memory" title=" Memory"> Memory</a> </p> <a href="https://publications.waset.org/abstracts/25667/wayfinding-strategies-in-an-unfamiliar-homogenous-environment" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/25667.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">457</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3511</span> Detection and Classification Strabismus Using Convolutional Neural Network and Spatial Image Processing</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Anoop%20T.%20R.">Anoop T. R.</a>, <a href="https://publications.waset.org/abstracts/search?q=Otman%20Basir"> Otman Basir</a>, <a href="https://publications.waset.org/abstracts/search?q=Robert%20F.%20Hess"> Robert F. Hess</a>, <a href="https://publications.waset.org/abstracts/search?q=Eileen%20E.%20Birch"> Eileen E. Birch</a>, <a href="https://publications.waset.org/abstracts/search?q=Brooke%20A.%20Koritala"> Brooke A. Koritala</a>, <a href="https://publications.waset.org/abstracts/search?q=Reed%20M.%20Jost"> Reed M. Jost</a>, <a href="https://publications.waset.org/abstracts/search?q=Becky%20Luu"> Becky Luu</a>, <a href="https://publications.waset.org/abstracts/search?q=David%20Stager"> David Stager</a>, <a href="https://publications.waset.org/abstracts/search?q=Ben%20Thompson"> Ben Thompson</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Strabismus refers to a misalignment of the eyes. Early detection and treatment of strabismus in childhood can prevent the development of permanent vision loss due to abnormal development of visual brain areas. We developed a two-stage method for strabismus detection and classification based on photographs of the face. The first stage detects the presence or absence of strabismus, and the second stage classifies the type of strabismus. The first stage comprises face detection using Haar cascade, facial landmark estimation, face alignment, aligned face landmark detection, segmentation of the eye region, and detection of strabismus using VGG 16 convolution neural networks. Face alignment transforms the face to a canonical pose to ensure consistency in subsequent analysis. Using facial landmarks, the eye region is segmented from the aligned face and fed into a VGG 16 CNN model, which has been trained to classify strabismus. The CNN determines whether strabismus is present and classifies the type of strabismus (exotropia, esotropia, and vertical deviation). If stage 1 detects strabismus, the eye region image is fed into stage 2, which starts with the estimation of pupil center coordinates using mask R-CNN deep neural networks. Then, the distance between the pupil coordinates and eye landmarks is calculated along with the angle that the pupil coordinates make with the horizontal and vertical axis. The distance and angle information is used to characterize the degree and direction of the strabismic eye misalignment. This model was tested on 100 clinically labeled images of children with (n = 50) and without (n = 50) strabismus. The True Positive Rate (TPR) and False Positive Rate (FPR) of the first stage were 94% and 6% respectively. The classification stage has produced a TPR of 94.73%, 94.44%, and 100% for esotropia, exotropia, and vertical deviations, respectively. This method also had an FPR of 5.26%, 5.55%, and 0% for esotropia, exotropia, and vertical deviation, respectively. The addition of one more feature related to the location of corneal light reflections may reduce the FPR, which was primarily due to children with pseudo-strabismus (the appearance of strabismus due to a wide nasal bridge or skin folds on the nasal side of the eyes). <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=strabismus" title="strabismus">strabismus</a>, <a href="https://publications.waset.org/abstracts/search?q=deep%20neural%20networks" title=" deep neural networks"> deep neural networks</a>, <a href="https://publications.waset.org/abstracts/search?q=face%20detection" title=" face detection"> face detection</a>, <a href="https://publications.waset.org/abstracts/search?q=facial%20landmarks" title=" facial landmarks"> facial landmarks</a>, <a href="https://publications.waset.org/abstracts/search?q=face%20alignment" title=" face alignment"> face alignment</a>, <a href="https://publications.waset.org/abstracts/search?q=segmentation" title=" segmentation"> segmentation</a>, <a href="https://publications.waset.org/abstracts/search?q=VGG%2016" title=" VGG 16"> VGG 16</a>, <a href="https://publications.waset.org/abstracts/search?q=mask%20R-CNN" title=" mask R-CNN"> mask R-CNN</a>, <a href="https://publications.waset.org/abstracts/search?q=pupil%20coordinates" title=" pupil coordinates"> pupil coordinates</a>, <a href="https://publications.waset.org/abstracts/search?q=angle%20deviation" title=" angle deviation"> angle deviation</a>, <a href="https://publications.waset.org/abstracts/search?q=horizontal%20and%20vertical%20deviation" title=" horizontal and vertical deviation"> horizontal and vertical deviation</a> </p> <a href="https://publications.waset.org/abstracts/170835/detection-and-classification-strabismus-using-convolutional-neural-network-and-spatial-image-processing" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/170835.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">93</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3510</span> Early Evaluation of Long-Span Suspension Bridges Using Smartphone Accelerometers</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Ekin%20Ozer">Ekin Ozer</a>, <a href="https://publications.waset.org/abstracts/search?q=Maria%20Q.%20Feng"> Maria Q. Feng</a>, <a href="https://publications.waset.org/abstracts/search?q=Rupa%20Purasinghe"> Rupa Purasinghe</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Structural deterioration of bridge systems possesses an ongoing threat to the transportation networks. Besides, landmark bridges’ integrity and safety are more than sole functionality, since they provide a strong presence for the society and nations. Therefore, an innovative and sustainable method to inspect landmark bridges is essential to ensure their resiliency in the long run. In this paper, a recently introduced concept, smartphone-based modal frequency estimation is addressed, and this paper targets to authenticate the fidelity of smartphone-based vibration measurements gathered from three landmark suspension bridges. Firstly, smartphones located at the bridge mid-span are adopted as portable and standalone vibration measurement devices. Then, their embedded accelerometers are utilized to gather vibration response under operational loads, and eventually frequency domain characteristics are deduced. The preliminary analysis results are compared with the reference publications and high-quality monitoring data to validate the usability of smartphones on long-span landmark suspension bridges. If the technical challenges such as high period of vibration, low amplitude excitation, embedded smartphone sensor features, sampling, and citizen engagement are tackled, smartphones can provide a novel and cost-free crowdsourcing tool for maintenance of these landmark structures. This study presents the early phase findings from three signature structures located in the United States. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=smart%20and%20mobile%20sensing" title="smart and mobile sensing">smart and mobile sensing</a>, <a href="https://publications.waset.org/abstracts/search?q=structural%20health%20monitoring" title=" structural health monitoring"> structural health monitoring</a>, <a href="https://publications.waset.org/abstracts/search?q=suspension%20bridges" title=" suspension bridges"> suspension bridges</a>, <a href="https://publications.waset.org/abstracts/search?q=vibration%20analysis" title=" vibration analysis"> vibration analysis</a> </p> <a href="https://publications.waset.org/abstracts/81843/early-evaluation-of-long-span-suspension-bridges-using-smartphone-accelerometers" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/81843.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">292</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3509</span> A Geometric Based Hybrid Approach for Facial Feature Localization</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Priya%20Saha">Priya Saha</a>, <a href="https://publications.waset.org/abstracts/search?q=Sourav%20Dey%20Roy%20Jr."> Sourav Dey Roy Jr.</a>, <a href="https://publications.waset.org/abstracts/search?q=Debotosh%20Bhattacharjee"> Debotosh Bhattacharjee</a>, <a href="https://publications.waset.org/abstracts/search?q=Mita%20Nasipuri"> Mita Nasipuri</a>, <a href="https://publications.waset.org/abstracts/search?q=Barin%20Kumar%20De"> Barin Kumar De</a>, <a href="https://publications.waset.org/abstracts/search?q=Mrinal%20Kanti%20Bhowmik"> Mrinal Kanti Bhowmik</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Biometric face recognition technology (FRT) has gained a lot of attention due to its extensive variety of applications in both security and non-security perspectives. It has come into view to provide a secure solution in identification and verification of person identity. Although other biometric based methods like fingerprint scans, iris scans are available, FRT is verified as an efficient technology for its user-friendliness and contact freeness. Accurate facial feature localization plays an important role for many facial analysis applications including biometrics and emotion recognition. But, there are certain factors, which make facial feature localization a challenging task. On human face, expressions can be seen from the subtle movements of facial muscles and influenced by internal emotional states. These non-rigid facial movements cause noticeable alterations in locations of facial landmarks, their usual shapes, which sometimes create occlusions in facial feature areas making face recognition as a difficult problem. The paper proposes a new hybrid based technique for automatic landmark detection in both neutral and expressive frontal and near frontal face images. The method uses the concept of thresholding, sequential searching and other image processing techniques for locating the landmark points on the face. Also, a Graphical User Interface (GUI) based software is designed that could automatically detect 16 landmark points around eyes, nose and mouth that are mostly affected by the changes in facial muscles. The proposed system has been tested on widely used JAFFE and Cohn Kanade database. Also, the system is tested on DeitY-TU face database which is created in the Biometrics Laboratory of Tripura University under the research project funded by Department of Electronics & Information Technology, Govt. of India. The performance of the proposed method has been done in terms of error measure and accuracy. The method has detection rate of 98.82% on JAFFE database, 91.27% on Cohn Kanade database and 93.05% on DeitY-TU database. Also, we have done comparative study of our proposed method with other techniques developed by other researchers. This paper will put into focus emotion-oriented systems through AU detection in future based on the located features. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=biometrics" title="biometrics">biometrics</a>, <a href="https://publications.waset.org/abstracts/search?q=face%20recognition" title=" face recognition"> face recognition</a>, <a href="https://publications.waset.org/abstracts/search?q=facial%20landmarks" title=" facial landmarks"> facial landmarks</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20processing" title=" image processing"> image processing</a> </p> <a href="https://publications.waset.org/abstracts/22182/a-geometric-based-hybrid-approach-for-facial-feature-localization" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/22182.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">412</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3508</span> Monocular 3D Person Tracking AIA Demographic Classification and Projective Image Processing </h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=McClain%20Thiel">McClain Thiel</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Object detection and localization has historically required two or more sensors due to the loss of information from 3D to 2D space, however, most surveillance systems currently in use in the real world only have one sensor per location. Generally, this consists of a single low-resolution camera positioned above the area under observation (mall, jewelry store, traffic camera). This is not sufficient for robust 3D tracking for applications such as security or more recent relevance, contract tracing. This paper proposes a lightweight system for 3D person tracking that requires no additional hardware, based on compressed object detection convolutional-nets, facial landmark detection, and projective geometry. This approach involves classifying the target into a demographic category and then making assumptions about the relative locations of facial landmarks from the demographic information, and from there using simple projective geometry and known constants to find the target's location in 3D space. Preliminary testing, although severely lacking, suggests reasonable success in 3D tracking under ideal conditions. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=monocular%20distancing" title="monocular distancing">monocular distancing</a>, <a href="https://publications.waset.org/abstracts/search?q=computer%20vision" title=" computer vision"> computer vision</a>, <a href="https://publications.waset.org/abstracts/search?q=facial%20analysis" title=" facial analysis"> facial analysis</a>, <a href="https://publications.waset.org/abstracts/search?q=3D%20localization" title=" 3D localization "> 3D localization </a> </p> <a href="https://publications.waset.org/abstracts/129037/monocular-3d-person-tracking-aia-demographic-classification-and-projective-image-processing" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/129037.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">139</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3507</span> Efficient Signal Detection Using QRD-M Based on Channel Condition in MIMO-OFDM System</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Jae-Jeong%20Kim">Jae-Jeong Kim</a>, <a href="https://publications.waset.org/abstracts/search?q=Ki-Ro%20Kim"> Ki-Ro Kim</a>, <a href="https://publications.waset.org/abstracts/search?q=Hyoung-Kyu%20Song"> Hyoung-Kyu Song</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In this paper, we propose an efficient signal detector that switches M parameter of QRD-M detection scheme is proposed for MIMO-OFDM system. The proposed detection scheme calculates the threshold by 1-norm condition number and then switches M parameter of QRD-M detection scheme according to channel information. If channel condition is bad, the parameter M is set to high value to increase the accuracy of detection. If channel condition is good, the parameter M is set to low value to reduce complexity of detection. Therefore, the proposed detection scheme has better trade off between BER performance and complexity than the conventional detection scheme. The simulation result shows that the complexity of proposed detection scheme is lower than QRD-M detection scheme with similar BER performance. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=MIMO-OFDM" title="MIMO-OFDM">MIMO-OFDM</a>, <a href="https://publications.waset.org/abstracts/search?q=QRD-M" title=" QRD-M"> QRD-M</a>, <a href="https://publications.waset.org/abstracts/search?q=channel%20condition" title=" channel condition"> channel condition</a>, <a href="https://publications.waset.org/abstracts/search?q=BER" title=" BER"> BER</a> </p> <a href="https://publications.waset.org/abstracts/3518/efficient-signal-detection-using-qrd-m-based-on-channel-condition-in-mimo-ofdm-system" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/3518.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">370</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3506</span> Reduced Complexity of ML Detection Combined with DFE</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Jae-Hyun%20Ro">Jae-Hyun Ro</a>, <a href="https://publications.waset.org/abstracts/search?q=Yong-Jun%20Kim"> Yong-Jun Kim</a>, <a href="https://publications.waset.org/abstracts/search?q=Chang-Bin%20Ha"> Chang-Bin Ha</a>, <a href="https://publications.waset.org/abstracts/search?q=Hyoung-Kyu%20Song"> Hyoung-Kyu Song </a> </p> <p class="card-text"><strong>Abstract:</strong></p> In multiple input multiple output-orthogonal frequency division multiplexing (MIMO-OFDM) systems, many detection schemes have been developed to improve the error performance and to reduce the complexity. Maximum likelihood (ML) detection has optimal error performance but it has very high complexity. Thus, this paper proposes reduced complexity of ML detection combined with decision feedback equalizer (DFE). The error performance of the proposed detection scheme is higher than the conventional DFE. But the complexity of the proposed scheme is lower than the conventional ML detection. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=detection" title="detection">detection</a>, <a href="https://publications.waset.org/abstracts/search?q=DFE" title=" DFE"> DFE</a>, <a href="https://publications.waset.org/abstracts/search?q=MIMO-OFDM" title=" MIMO-OFDM"> MIMO-OFDM</a>, <a href="https://publications.waset.org/abstracts/search?q=ML" title=" ML"> ML</a> </p> <a href="https://publications.waset.org/abstracts/42215/reduced-complexity-of-ml-detection-combined-with-dfe" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/42215.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">610</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3505</span> Cigarette Smoke Detection Based on YOLOV3</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Wei%20Li">Wei Li</a>, <a href="https://publications.waset.org/abstracts/search?q=Tuo%20Yang"> Tuo Yang</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In order to satisfy the real-time and accurate requirements of cigarette smoke detection in complex scenes, a cigarette smoke detection technology based on the combination of deep learning and color features was proposed. Firstly, based on the color features of cigarette smoke, the suspicious cigarette smoke area in the image is extracted. Secondly, combined with the efficiency of cigarette smoke detection and the problem of network overfitting, a network model for cigarette smoke detection was designed according to YOLOV3 algorithm to reduce the false detection rate. The experimental results show that the method is feasible and effective, and the accuracy of cigarette smoke detection is up to 99.13%, which satisfies the requirements of real-time cigarette smoke detection in complex scenes. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=deep%20learning" title="deep learning">deep learning</a>, <a href="https://publications.waset.org/abstracts/search?q=computer%20vision" title=" computer vision"> computer vision</a>, <a href="https://publications.waset.org/abstracts/search?q=cigarette%20smoke%20detection" title=" cigarette smoke detection"> cigarette smoke detection</a>, <a href="https://publications.waset.org/abstracts/search?q=YOLOV3" title=" YOLOV3"> YOLOV3</a>, <a href="https://publications.waset.org/abstracts/search?q=color%20feature%20extraction" title=" color feature extraction"> color feature extraction</a> </p> <a href="https://publications.waset.org/abstracts/159151/cigarette-smoke-detection-based-on-yolov3" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/159151.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">87</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3504</span> An Architecture for New Generation of Distributed Intrusion Detection System Based on Preventive Detection</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=H.%20Benmoussa">H. Benmoussa</a>, <a href="https://publications.waset.org/abstracts/search?q=A.%20A.%20El%20Kalam"> A. A. El Kalam</a>, <a href="https://publications.waset.org/abstracts/search?q=A.%20Ait%20Ouahman"> A. Ait Ouahman</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The design and implementation of intrusion detection systems (IDS) remain an important area of research in the security of information systems. Despite the importance and reputation of the current intrusion detection systems, their efficiency and effectiveness remain limited as they should include active defense approach to allow anticipating and predicting intrusions before their occurrence. Consequently, they must be readapted. For this purpose we suggest a new generation of distributed intrusion detection system based on preventive detection approach and using intelligent and mobile agents. Our architecture benefits from mobile agent features and addresses some of the issues with centralized and hierarchical models. Also, it presents advantages in terms of increasing scalability and flexibility. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=Intrusion%20Detection%20System%20%28IDS%29" title="Intrusion Detection System (IDS)">Intrusion Detection System (IDS)</a>, <a href="https://publications.waset.org/abstracts/search?q=preventive%20detection" title=" preventive detection"> preventive detection</a>, <a href="https://publications.waset.org/abstracts/search?q=mobile%20agents" title=" mobile agents"> mobile agents</a>, <a href="https://publications.waset.org/abstracts/search?q=distributed%20architecture" title=" distributed architecture"> distributed architecture</a> </p> <a href="https://publications.waset.org/abstracts/18239/an-architecture-for-new-generation-of-distributed-intrusion-detection-system-based-on-preventive-detection" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/18239.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">583</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3503</span> Video Based Ambient Smoke Detection By Detecting Directional Contrast Decrease</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Omair%20Ghori">Omair Ghori</a>, <a href="https://publications.waset.org/abstracts/search?q=Anton%20Stadler"> Anton Stadler</a>, <a href="https://publications.waset.org/abstracts/search?q=Stefan%20Wilk"> Stefan Wilk</a>, <a href="https://publications.waset.org/abstracts/search?q=Wolfgang%20Effelsberg"> Wolfgang Effelsberg</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Fire-related incidents account for extensive loss of life and material damage. Quick and reliable detection of occurring fires has high real world implications. Whereas a major research focus lies on the detection of outdoor fires, indoor camera-based fire detection is still an open issue. Cameras in combination with computer vision helps to detect flames and smoke more quickly than conventional fire detectors. In this work, we present a computer vision-based smoke detection algorithm based on contrast changes and a multi-step classification. This work accelerates computer vision-based fire detection considerably in comparison with classical indoor-fire detection. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=contrast%20analysis" title="contrast analysis">contrast analysis</a>, <a href="https://publications.waset.org/abstracts/search?q=early%20fire%20detection" title=" early fire detection"> early fire detection</a>, <a href="https://publications.waset.org/abstracts/search?q=video%20smoke%20detection" title=" video smoke detection"> video smoke detection</a>, <a href="https://publications.waset.org/abstracts/search?q=video%20surveillance" title=" video surveillance"> video surveillance</a> </p> <a href="https://publications.waset.org/abstracts/52006/video-based-ambient-smoke-detection-by-detecting-directional-contrast-decrease" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/52006.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">447</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3502</span> Intrusion Detection Techniques in NaaS in the Cloud: A Review </h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Rashid%20Mahmood">Rashid Mahmood</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The network as a service (NaaS) usage has been well-known from the last few years in the many applications, like mission critical applications. In the NaaS, prevention method is not adequate as the security concerned, so the detection method should be added to the security issues in NaaS. The authentication and encryption are considered the first solution of the NaaS problem whereas now these are not sufficient as NaaS use is increasing. In this paper, we are going to present the concept of intrusion detection and then survey some of major intrusion detection techniques in NaaS and aim to compare in some important fields. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=IDS" title="IDS">IDS</a>, <a href="https://publications.waset.org/abstracts/search?q=cloud" title=" cloud"> cloud</a>, <a href="https://publications.waset.org/abstracts/search?q=naas" title=" naas"> naas</a>, <a href="https://publications.waset.org/abstracts/search?q=detection" title=" detection"> detection</a> </p> <a href="https://publications.waset.org/abstracts/36475/intrusion-detection-techniques-in-naas-in-the-cloud-a-review" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/36475.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">320</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3501</span> Multichannel Object Detection with Event Camera</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Rafael%20Iliasov">Rafael Iliasov</a>, <a href="https://publications.waset.org/abstracts/search?q=Alessandro%20Golkar"> Alessandro Golkar</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Object detection based on event vision has been a dynamically growing field in computer vision for the last 16 years. In this work, we create multiple channels from a single event camera and propose an event fusion method (EFM) to enhance object detection in event-based vision systems. Each channel uses a different accumulation buffer to collect events from the event camera. We implement YOLOv7 for object detection, followed by a fusion algorithm. Our multichannel approach outperforms single-channel-based object detection by 0.7% in mean Average Precision (mAP) for detection overlapping ground truth with IOU = 0.5. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=event%20camera" title="event camera">event camera</a>, <a href="https://publications.waset.org/abstracts/search?q=object%20detection%20with%20multimodal%20inputs" title=" object detection with multimodal inputs"> object detection with multimodal inputs</a>, <a href="https://publications.waset.org/abstracts/search?q=multichannel%20fusion" title=" multichannel fusion"> multichannel fusion</a>, <a href="https://publications.waset.org/abstracts/search?q=computer%20vision" title=" computer vision"> computer vision</a> </p> <a href="https://publications.waset.org/abstracts/190247/multichannel-object-detection-with-event-camera" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/190247.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">27</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3500</span> Securing Web Servers by the Intrusion Detection System (IDS)</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Yousef%20Farhaoui">Yousef Farhaoui </a> </p> <p class="card-text"><strong>Abstract:</strong></p> An IDS is a tool which is used to improve the level of security. We present in this paper different architectures of IDS. We will also discuss measures that define the effectiveness of IDS and the very recent works of standardization and homogenization of IDS. At the end, we propose a new model of IDS called BiIDS (IDS Based on the two principles of detection) for securing web servers and applications by the Intrusion Detection System (IDS). <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=intrusion%20detection" title="intrusion detection">intrusion detection</a>, <a href="https://publications.waset.org/abstracts/search?q=architectures" title=" architectures"> architectures</a>, <a href="https://publications.waset.org/abstracts/search?q=characteristic" title=" characteristic"> characteristic</a>, <a href="https://publications.waset.org/abstracts/search?q=tools" title=" tools"> tools</a>, <a href="https://publications.waset.org/abstracts/search?q=security" title=" security"> security</a>, <a href="https://publications.waset.org/abstracts/search?q=web%20server" title=" web server"> web server</a> </p> <a href="https://publications.waset.org/abstracts/13346/securing-web-servers-by-the-intrusion-detection-system-ids" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/13346.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">418</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3499</span> Suggestion for Malware Detection Agent Considering Network Environment</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Ji-Hoon%20Hong">Ji-Hoon Hong</a>, <a href="https://publications.waset.org/abstracts/search?q=Dong-Hee%20Kim"> Dong-Hee Kim</a>, <a href="https://publications.waset.org/abstracts/search?q=Nam-Uk%20Kim"> Nam-Uk Kim</a>, <a href="https://publications.waset.org/abstracts/search?q=Tai-Myoung%20Chung"> Tai-Myoung Chung</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Smartphone users are increasing rapidly. Accordingly, many companies are running BYOD (Bring Your Own Device: Policies to bring private-smartphones to the company) policy to increase work efficiency. However, smartphones are always under the threat of malware, thus the company network that is connected smartphone is exposed to serious risks. Most smartphone malware detection techniques are to perform an independent detection (perform the detection of a single target application). In this paper, we analyzed a variety of intrusion detection techniques. Based on the results of analysis propose an agent using the network IDS. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=android%20malware%20detection" title="android malware detection">android malware detection</a>, <a href="https://publications.waset.org/abstracts/search?q=software-defined%20network" title=" software-defined network"> software-defined network</a>, <a href="https://publications.waset.org/abstracts/search?q=interaction%20environment" title=" interaction environment"> interaction environment</a>, <a href="https://publications.waset.org/abstracts/search?q=android%20malware%20detection" title=" android malware detection"> android malware detection</a>, <a href="https://publications.waset.org/abstracts/search?q=software-defined%20network" title=" software-defined network"> software-defined network</a>, <a href="https://publications.waset.org/abstracts/search?q=interaction%20environment" title=" interaction environment"> interaction environment</a> </p> <a href="https://publications.waset.org/abstracts/39330/suggestion-for-malware-detection-agent-considering-network-environment" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/39330.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">433</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3498</span> Improved Skin Detection Using Colour Space and Texture</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Medjram%20Sofiane">Medjram Sofiane</a>, <a href="https://publications.waset.org/abstracts/search?q=Babahenini%20Mohamed%20Chaouki"> Babahenini Mohamed Chaouki</a>, <a href="https://publications.waset.org/abstracts/search?q=Mohamed%20Benali%20Yamina"> Mohamed Benali Yamina</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Skin detection is an important task for computer vision systems. A good method for skin detection means a good and successful result of the system. The colour is a good descriptor that allows us to detect skin colour in the images, but because of lightings effects and objects that have a similar colour skin, skin detection becomes difficult. In this paper, we proposed a method using the YCbCr colour space for skin detection and lighting effects elimination, then we use the information of texture to eliminate the false regions detected by the YCbCr colour skin model. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=skin%20detection" title="skin detection">skin detection</a>, <a href="https://publications.waset.org/abstracts/search?q=YCbCr" title=" YCbCr"> YCbCr</a>, <a href="https://publications.waset.org/abstracts/search?q=GLCM" title=" GLCM"> GLCM</a>, <a href="https://publications.waset.org/abstracts/search?q=texture" title=" texture"> texture</a>, <a href="https://publications.waset.org/abstracts/search?q=human%20skin" title=" human skin"> human skin</a> </p> <a href="https://publications.waset.org/abstracts/19039/improved-skin-detection-using-colour-space-and-texture" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/19039.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">459</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3497</span> Real-Time Detection of Space Manipulator Self-Collision</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Zhang%20Xiaodong">Zhang Xiaodong</a>, <a href="https://publications.waset.org/abstracts/search?q=Tang%20Zixin"> Tang Zixin</a>, <a href="https://publications.waset.org/abstracts/search?q=Liu%20Xin"> Liu Xin</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In order to avoid self-collision of space manipulators during operation process, a real-time detection method is proposed in this paper. The manipulator is fitted into a cylinder enveloping surface, and then the detection algorithm of collision between cylinders is analyzed. The collision model of space manipulator self-links can be detected by using this algorithm in real-time detection during the operation process. To ensure security of the operation, a safety threshold is designed. The simulation and experiment results verify the effectiveness of the proposed algorithm for a 7-DOF space manipulator. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=space%20manipulator" title="space manipulator">space manipulator</a>, <a href="https://publications.waset.org/abstracts/search?q=collision%20detection" title=" collision detection"> collision detection</a>, <a href="https://publications.waset.org/abstracts/search?q=self-collision" title=" self-collision"> self-collision</a>, <a href="https://publications.waset.org/abstracts/search?q=the%20real-time%20collision%20detection" title=" the real-time collision detection"> the real-time collision detection</a> </p> <a href="https://publications.waset.org/abstracts/23258/real-time-detection-of-space-manipulator-self-collision" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/23258.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">469</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3496</span> Iris Detection on RGB Image for Controlling Side Mirror</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Norzalina%20Othman">Norzalina Othman</a>, <a href="https://publications.waset.org/abstracts/search?q=Nurul%20Na%E2%80%99imy%20Wan"> Nurul Na’imy Wan</a>, <a href="https://publications.waset.org/abstracts/search?q=Azliza%20Mohd%20Rusli"> Azliza Mohd Rusli</a>, <a href="https://publications.waset.org/abstracts/search?q=Wan%20Noor%20Syahirah%20Meor%20Idris"> Wan Noor Syahirah Meor Idris</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Iris detection is a process where the position of the eyes is extracted from the face images. It is a current method used for many applications such as for security purpose and drowsiness detection. This paper proposes the use of eyes detection in controlling side mirror of motor vehicles. The eyes detection method aims to make driver easy to adjust the side mirrors automatically. The system will determine the midpoint coordinate of eyes detection on RGB (color) image and the input signal from y-coordinate will send it to controller in order to rotate the angle of side mirror on vehicle. The eye position was cropped and the coordinate of midpoint was successfully detected from the circle of iris detection using Viola Jones detection and circular Hough transform methods on RGB image. The coordinate of midpoint from the experiment are tested using controller to determine the angle of rotation on the side mirrors. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=iris%20detection" title="iris detection">iris detection</a>, <a href="https://publications.waset.org/abstracts/search?q=midpoint%20coordinates" title=" midpoint coordinates"> midpoint coordinates</a>, <a href="https://publications.waset.org/abstracts/search?q=RGB%20images" title=" RGB images"> RGB images</a>, <a href="https://publications.waset.org/abstracts/search?q=side%20mirror" title=" side mirror"> side mirror</a> </p> <a href="https://publications.waset.org/abstracts/8133/iris-detection-on-rgb-image-for-controlling-side-mirror" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/8133.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">423</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3495</span> Automatic Vehicle Detection Using Circular Synthetic Aperture Radar Image</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Leping%20Chen">Leping Chen</a>, <a href="https://publications.waset.org/abstracts/search?q=Daoxiang%20An"> Daoxiang An</a>, <a href="https://publications.waset.org/abstracts/search?q=Xiaotao%20Huang"> Xiaotao Huang</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Automatic vehicle detection using synthetic aperture radar (SAR) image has been widely researched, as well as using optical remote sensing images. However, most researches treat the detection as an independent problem, failing to make full use of SAR data information. In circular SAR (CSAR), the two long borders of vehicle will shrink if the imaging surface is set higher than the reference one. Based on above variance, an automatic vehicle detection using CSAR image is proposed to enhance detection ability under complex environment, such as vehicles’ closely packing, which confuses the detector. The detection method uses the multiple images generated by different height plane to obtain an energy-concentrated image for detecting and then uses the maximally stable extremal regions method (MSER) to detect vehicles. A result of vehicles’ detection is given to verify the effectiveness and correctness of proposed method. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=circular%20SAR" title="circular SAR">circular SAR</a>, <a href="https://publications.waset.org/abstracts/search?q=vehicle%20detection" title=" vehicle detection"> vehicle detection</a>, <a href="https://publications.waset.org/abstracts/search?q=automatic" title=" automatic"> automatic</a>, <a href="https://publications.waset.org/abstracts/search?q=imaging" title=" imaging"> imaging</a> </p> <a href="https://publications.waset.org/abstracts/84548/automatic-vehicle-detection-using-circular-synthetic-aperture-radar-image" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/84548.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">367</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3494</span> Adaptive CFAR Analysis for Non-Gaussian Distribution</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Bouchemha%20Amel">Bouchemha Amel</a>, <a href="https://publications.waset.org/abstracts/search?q=Chachoui%20Takieddine"> Chachoui Takieddine</a>, <a href="https://publications.waset.org/abstracts/search?q=H.%20Maalem"> H. Maalem</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Automatic detection of targets in a modern communication system RADAR is based primarily on the concept of adaptive CFAR detector. To have an effective detection, we must minimize the influence of disturbances due to the clutter. The detection algorithm adapts the CFAR detection threshold which is proportional to the average power of the clutter, maintaining a constant probability of false alarm. In this article, we analyze the performance of two variants of adaptive algorithms CA-CFAR and OS-CFAR and we compare the thresholds of these detectors in the marine environment (no-Gaussian) with a Weibull distribution. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=CFAR" title="CFAR">CFAR</a>, <a href="https://publications.waset.org/abstracts/search?q=threshold" title=" threshold"> threshold</a>, <a href="https://publications.waset.org/abstracts/search?q=clutter" title=" clutter"> clutter</a>, <a href="https://publications.waset.org/abstracts/search?q=distribution" title=" distribution"> distribution</a>, <a href="https://publications.waset.org/abstracts/search?q=Weibull" title=" Weibull"> Weibull</a>, <a href="https://publications.waset.org/abstracts/search?q=detection" title=" detection"> detection</a> </p> <a href="https://publications.waset.org/abstracts/21359/adaptive-cfar-analysis-for-non-gaussian-distribution" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/21359.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">588</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3493</span> Intrusion Detection Techniques in Mobile Adhoc Networks: A Review</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Rashid%20Mahmood">Rashid Mahmood</a>, <a href="https://publications.waset.org/abstracts/search?q=Muhammad%20Junaid%20Sarwar"> Muhammad Junaid Sarwar</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Mobile ad hoc networks (MANETs) use has been well-known from the last few years in the many applications, like mission critical applications. In the (MANETS) prevention method is not adequate as the security concerned, so the detection method should be added to the security issues in (MANETs). The authentication and encryption is considered the first solution of the MANETs problem where as now these are not sufficient as MANET use is increasing. In this paper we are going to present the concept of intrusion detection and then survey some of major intrusion detection techniques in MANET and aim to comparing in some important fields. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=MANET" title="MANET">MANET</a>, <a href="https://publications.waset.org/abstracts/search?q=IDS" title=" IDS"> IDS</a>, <a href="https://publications.waset.org/abstracts/search?q=intrusions" title=" intrusions"> intrusions</a>, <a href="https://publications.waset.org/abstracts/search?q=signature" title=" signature"> signature</a>, <a href="https://publications.waset.org/abstracts/search?q=detection" title=" detection"> detection</a>, <a href="https://publications.waset.org/abstracts/search?q=prevention" title=" prevention"> prevention</a> </p> <a href="https://publications.waset.org/abstracts/32173/intrusion-detection-techniques-in-mobile-adhoc-networks-a-review" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/32173.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">378</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3492</span> Plant Disease Detection Using Image Processing and Machine Learning</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Sanskar">Sanskar</a>, <a href="https://publications.waset.org/abstracts/search?q=Abhinav%20Pal"> Abhinav Pal</a>, <a href="https://publications.waset.org/abstracts/search?q=Aryush%20Gupta"> Aryush Gupta</a>, <a href="https://publications.waset.org/abstracts/search?q=Sushil%20Kumar%20Mishra"> Sushil Kumar Mishra</a> </p> <p class="card-text"><strong>Abstract:</strong></p> One of the critical and tedious assignments in agricultural practices is the detection of diseases on vegetation. Agricultural production is very important in today’s economy because plant diseases are common, and early detection of plant diseases is important in agriculture. Automatic detection of such early diseases is useful because it reduces control efforts in large productive farms. Using digital image processing and machine learning algorithms, this paper presents a method for plant disease detection. Detection of the disease occurs on different leaves of the plant. The proposed system for plant disease detection is simple and computationally efficient, requiring less time than learning-based approaches. The accuracy of various plant and foliar diseases is calculated and presented in this paper. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=plant%20diseases" title="plant diseases">plant diseases</a>, <a href="https://publications.waset.org/abstracts/search?q=machine%20learning" title=" machine learning"> machine learning</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20processing" title=" image processing"> image processing</a>, <a href="https://publications.waset.org/abstracts/search?q=deep%20learning" title=" deep learning"> deep learning</a> </p> <a href="https://publications.waset.org/abstracts/194420/plant-disease-detection-using-image-processing-and-machine-learning" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/194420.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">7</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3491</span> A Comparative Study of Virus Detection Techniques</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Sulaiman%20Al%20amro">Sulaiman Al amro</a>, <a href="https://publications.waset.org/abstracts/search?q=Ali%20Alkhalifah"> Ali Alkhalifah</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The growing number of computer viruses and the detection of zero day malware have been the concern for security researchers for a large period of time. Existing antivirus products (AVs) rely on detecting virus signatures which do not provide a full solution to the problems associated with these viruses. The use of logic formulae to model the behaviour of viruses is one of the most encouraging recent developments in virus research, which provides alternatives to classic virus detection methods. In this paper, we proposed a comparative study about different virus detection techniques. This paper provides the advantages and drawbacks of different detection techniques. Different techniques will be used in this paper to provide a discussion about what technique is more effective to detect computer viruses. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=computer%20viruses" title="computer viruses">computer viruses</a>, <a href="https://publications.waset.org/abstracts/search?q=virus%20detection" title=" virus detection"> virus detection</a>, <a href="https://publications.waset.org/abstracts/search?q=signature-based" title=" signature-based"> signature-based</a>, <a href="https://publications.waset.org/abstracts/search?q=behaviour-based" title=" behaviour-based"> behaviour-based</a>, <a href="https://publications.waset.org/abstracts/search?q=heuristic-based" title=" heuristic-based "> heuristic-based </a> </p> <a href="https://publications.waset.org/abstracts/28688/a-comparative-study-of-virus-detection-techniques" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/28688.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">484</span> </span> </div> </div> <ul class="pagination"> <li class="page-item disabled"><span class="page-link">&lsaquo;</span></li> <li class="page-item active"><span class="page-link">1</span></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=landmark%20detection&amp;page=2">2</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=landmark%20detection&amp;page=3">3</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=landmark%20detection&amp;page=4">4</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=landmark%20detection&amp;page=5">5</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=landmark%20detection&amp;page=6">6</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=landmark%20detection&amp;page=7">7</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=landmark%20detection&amp;page=8">8</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=landmark%20detection&amp;page=9">9</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=landmark%20detection&amp;page=10">10</a></li> <li class="page-item disabled"><span class="page-link">...</span></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=landmark%20detection&amp;page=117">117</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=landmark%20detection&amp;page=118">118</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=landmark%20detection&amp;page=2" rel="next">&rsaquo;</a></li> </ul> </div> </main> <footer> <div id="infolinks" class="pt-3 pb-2"> <div class="container"> <div style="background-color:#f5f5f5;" class="p-3"> <div class="row"> <div class="col-md-2"> <ul class="list-unstyled"> About <li><a href="https://waset.org/page/support">About Us</a></li> <li><a href="https://waset.org/page/support#legal-information">Legal</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/WASET-16th-foundational-anniversary.pdf">WASET celebrates its 16th foundational anniversary</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Account <li><a href="https://waset.org/profile">My Account</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Explore <li><a href="https://waset.org/disciplines">Disciplines</a></li> <li><a href="https://waset.org/conferences">Conferences</a></li> <li><a href="https://waset.org/conference-programs">Conference Program</a></li> <li><a href="https://waset.org/committees">Committees</a></li> <li><a href="https://publications.waset.org">Publications</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Research <li><a href="https://publications.waset.org/abstracts">Abstracts</a></li> <li><a href="https://publications.waset.org">Periodicals</a></li> <li><a href="https://publications.waset.org/archive">Archive</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Open Science <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Science-Philosophy.pdf">Open Science Philosophy</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Science-Award.pdf">Open Science Award</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Society-Open-Science-and-Open-Innovation.pdf">Open Innovation</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Postdoctoral-Fellowship-Award.pdf">Postdoctoral Fellowship Award</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Scholarly-Research-Review.pdf">Scholarly Research Review</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Support <li><a href="https://waset.org/page/support">Support</a></li> <li><a href="https://waset.org/profile/messages/create">Contact Us</a></li> <li><a href="https://waset.org/profile/messages/create">Report Abuse</a></li> </ul> </div> </div> </div> </div> </div> <div class="container text-center"> <hr style="margin-top:0;margin-bottom:.3rem;"> <a href="https://creativecommons.org/licenses/by/4.0/" target="_blank" class="text-muted small">Creative Commons Attribution 4.0 International License</a> <div id="copy" class="mt-2">&copy; 2024 World Academy of Science, Engineering and Technology</div> </div> </footer> <a href="javascript:" id="return-to-top"><i class="fas fa-arrow-up"></i></a> <div class="modal" id="modal-template"> <div class="modal-dialog"> <div class="modal-content"> <div class="row m-0 mt-1"> <div class="col-md-12"> <button type="button" class="close" data-dismiss="modal" aria-label="Close"><span aria-hidden="true">&times;</span></button> </div> </div> <div class="modal-body"></div> </div> </div> </div> <script src="https://cdn.waset.org/static/plugins/jquery-3.3.1.min.js"></script> <script src="https://cdn.waset.org/static/plugins/bootstrap-4.2.1/js/bootstrap.bundle.min.js"></script> <script src="https://cdn.waset.org/static/js/site.js?v=150220211556"></script> <script> jQuery(document).ready(function() { /*jQuery.get("https://publications.waset.org/xhr/user-menu", function (response) { jQuery('#mainNavMenu').append(response); });*/ jQuery.get({ url: "https://publications.waset.org/xhr/user-menu", cache: false }).then(function(response){ jQuery('#mainNavMenu').append(response); }); }); </script> </body> </html>

Pages: 1 2 3 4 5 6 7 8 9 10