Search results for: recognize
href="https://publications.waset.org/abstracts">Abstracts</a> <a class="dropdown-item" href="https://publications.waset.org">Periodicals</a> <a class="dropdown-item" href="https://publications.waset.org/archive">Archive</a> </div> </li> <li class="nav-item"> <a class="nav-link" href="https://waset.org/page/support" title="Support">Support</a> </li> </ul> </div> </div> </nav> </div> </header> <main> <div class="container mt-4"> <div class="row"> <div class="col-md-9 mx-auto"> <form method="get" action="https://publications.waset.org/abstracts/search"> <div id="custom-search-input"> <div class="input-group"> <i class="fas fa-search"></i> <input type="text" class="search-query" name="q" placeholder="Author, Title, Abstract, Keywords" value="recognize"> <input type="submit" class="btn_search" value="Search"> </div> </div> </form> </div> </div> <div class="row mt-3"> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Commenced</strong> in January 2007</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Frequency:</strong> Monthly</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Edition:</strong> International</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Paper Count:</strong> 657</div> </div> </div> </div> <h1 class="mt-3 mb-3 text-center" style="font-size:1.6rem;">Search results for: recognize</h1> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">657</span> Fitness Action Recognition Based on MediaPipe</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Zixuan%20Xu">Zixuan Xu</a>, <a href="https://publications.waset.org/abstracts/search?q=Yichun%20Lou"> Yichun Lou</a>, <a href="https://publications.waset.org/abstracts/search?q=Yang%20Song"> Yang Song</a>, <a href="https://publications.waset.org/abstracts/search?q=Zihuai%20Lin"> Zihuai Lin</a> </p> <p class="card-text"><strong>Abstract:</strong></p> MediaPipe is an open-source machine learning computer vision framework that can be ported into a multi-platform environment, which makes it easier to use it to recognize the human activity. Based on this framework, many human recognition systems have been created, but the fundamental issue is the recognition of human behavior and posture. In this paper, two methods are proposed to recognize human gestures based on MediaPipe, the first one uses the Adaptive Boosting algorithm to recognize a series of fitness gestures, and the second one uses the Fast Dynamic Time Warping algorithm to recognize 413 continuous fitness actions. These two methods are also applicable to any human posture movement recognition. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=computer%20vision" title="computer vision">computer vision</a>, <a href="https://publications.waset.org/abstracts/search?q=MediaPipe" title=" MediaPipe"> MediaPipe</a>, <a href="https://publications.waset.org/abstracts/search?q=adaptive%20boosting" title=" adaptive boosting"> adaptive boosting</a>, <a href="https://publications.waset.org/abstracts/search?q=fast%20dynamic%20time%20warping" title=" fast dynamic time warping"> fast dynamic time warping</a> </p> <a href="https://publications.waset.org/abstracts/160758/fitness-action-recognition-based-on-mediapipe" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/160758.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">118</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">656</span> Using Scale Invariant Feature Transform Features to Recognize Characters in Natural Scene Images </h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Belaynesh%20Chekol">Belaynesh Chekol</a>, <a href="https://publications.waset.org/abstracts/search?q=Numan%20%C3%87elebi"> Numan Çelebi</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The main purpose of this work is to recognize individual characters extracted from natural scene images using scale invariant feature transform (SIFT) features as an input to K-nearest neighbor (KNN); a classification learner algorithm. For this task, 1,068 and 78 images of English alphabet characters taken from Chars74k data set is used to train and test the classifier respectively. For each character image, We have generated describing features by using SIFT algorithm. This set of features is fed to the learner so that it can recognize and label new images of English characters. Two types of KNN (fine KNN and weighted KNN) were trained and the resulted classification accuracy is 56.9% and 56.5% respectively. The training time taken was the same for both fine and weighted KNN. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=character%20recognition" title="character recognition">character recognition</a>, <a href="https://publications.waset.org/abstracts/search?q=KNN" title=" KNN"> KNN</a>, <a href="https://publications.waset.org/abstracts/search?q=natural%20scene%20image" title=" natural scene image"> natural scene image</a>, <a href="https://publications.waset.org/abstracts/search?q=SIFT" title=" SIFT"> SIFT</a> </p> <a href="https://publications.waset.org/abstracts/58580/using-scale-invariant-feature-transform-features-to-recognize-characters-in-natural-scene-images" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/58580.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">281</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">655</span> Autonomous Quantum Competitive Learning</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Mohammed%20A.%20Zidan">Mohammed A. Zidan</a>, <a href="https://publications.waset.org/abstracts/search?q=Alaa%20Sagheer"> Alaa Sagheer</a>, <a href="https://publications.waset.org/abstracts/search?q=Nasser%20Metwally"> Nasser Metwally</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Real-time learning is an important goal that most of artificial intelligence researches try to achieve it. There are a lot of problems and applications which require low cost learning such as learn a robot to be able to classify and recognize patterns in real time and real-time recall. In this contribution, we suggest a model of quantum competitive learning based on a series of quantum gates and additional operator. The proposed model enables to recognize any incomplete patterns, where we can increase the probability of recognizing the pattern at the expense of the undesired ones. Moreover, these undesired ones could be utilized as new patterns for the system. The proposed model is much better compared with classical approaches and more powerful than the current quantum competitive learning approaches. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=competitive%20learning" title="competitive learning">competitive learning</a>, <a href="https://publications.waset.org/abstracts/search?q=quantum%20gates" title=" quantum gates"> quantum gates</a>, <a href="https://publications.waset.org/abstracts/search?q=quantum%20gates" title=" quantum gates"> quantum gates</a>, <a href="https://publications.waset.org/abstracts/search?q=winner-take-all" title=" winner-take-all"> winner-take-all</a> </p> <a href="https://publications.waset.org/abstracts/25398/autonomous-quantum-competitive-learning" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/25398.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">472</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">654</span> Real-Time Gesture Recognition System Using Microsoft Kinect</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Ankita%20Wadhawan">Ankita Wadhawan</a>, <a href="https://publications.waset.org/abstracts/search?q=Parteek%20Kumar"> Parteek Kumar</a>, <a href="https://publications.waset.org/abstracts/search?q=Umesh%20Kumar"> Umesh Kumar</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Gesture is any body movement that expresses some attitude or any sentiment. Gestures as a sign language are used by deaf people for conveying messages which helps in eliminating the communication barrier between deaf people and normal persons. Nowadays, everybody is using mobile phone and computer as a very important gadget in their life. But there are some physically challenged people who are blind/deaf and the use of mobile phone or computer like device is very difficult for them. So, there is an immense need of a system which works on body gesture or sign language as input. In this research, Microsoft Kinect Sensor, SDK V2 and Hidden Markov Toolkit (HTK) are used to recognize the object, motion of object and human body joints through Touch less NUI (Natural User Interface) in real-time. The depth data collected from Microsoft Kinect has been used to recognize gestures of Indian Sign Language (ISL). The recorded clips are analyzed using depth, IR and skeletal data at different angles and positions. The proposed system has an average accuracy of 85%. The developed Touch less NUI provides an interface to recognize gestures and controls the cursor and click operation in computer just by waving hand gesture. This research will help deaf people to make use of mobile phones, computers and socialize among other persons in the society. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=gesture%20recognition" title="gesture recognition">gesture recognition</a>, <a href="https://publications.waset.org/abstracts/search?q=Indian%20sign%20language" title=" Indian sign language"> Indian sign language</a>, <a href="https://publications.waset.org/abstracts/search?q=Microsoft%20Kinect" title=" Microsoft Kinect"> Microsoft Kinect</a>, <a href="https://publications.waset.org/abstracts/search?q=natural%20user%20interface" title=" natural user interface"> natural user interface</a>, <a href="https://publications.waset.org/abstracts/search?q=sign%20language" title=" sign language"> sign language</a> </p> <a href="https://publications.waset.org/abstracts/88362/real-time-gesture-recognition-system-using-microsoft-kinect" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/88362.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">306</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">653</span> Foot Recognition Using Deep Learning for Knee Rehabilitation</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Rakkrit%20Duangsoithong">Rakkrit Duangsoithong</a>, <a href="https://publications.waset.org/abstracts/search?q=Jermphiphut%20Jaruenpunyasak"> Jermphiphut Jaruenpunyasak</a>, <a href="https://publications.waset.org/abstracts/search?q=Alba%20Garcia"> Alba Garcia</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The use of foot recognition can be applied in many medical fields such as the gait pattern analysis and the knee exercises of patients in rehabilitation. Generally, a camera-based foot recognition system is intended to capture a patient image in a controlled room and background to recognize the foot in the limited views. However, this system can be inconvenient to monitor the knee exercises at home. In order to overcome these problems, this paper proposes to use the deep learning method using Convolutional Neural Networks (CNNs) for foot recognition. The results are compared with the traditional classification method using LBP and HOG features with kNN and SVM classifiers. According to the results, deep learning method provides better accuracy but with higher complexity to recognize the foot images from online databases than the traditional classification method. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=foot%20recognition" title="foot recognition">foot recognition</a>, <a href="https://publications.waset.org/abstracts/search?q=deep%20learning" title=" deep learning"> deep learning</a>, <a href="https://publications.waset.org/abstracts/search?q=knee%20rehabilitation" title=" knee rehabilitation"> knee rehabilitation</a>, <a href="https://publications.waset.org/abstracts/search?q=convolutional%20neural%20network" title=" convolutional neural network"> convolutional neural network</a> </p> <a href="https://publications.waset.org/abstracts/105495/foot-recognition-using-deep-learning-for-knee-rehabilitation" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/105495.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">161</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">652</span> Gene Names Identity Recognition Using Siamese Network for Biomedical Publications</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Micheal%20Olaolu%20Arowolo">Micheal Olaolu Arowolo</a>, <a href="https://publications.waset.org/abstracts/search?q=Muhammad%20Azam"> Muhammad Azam</a>, <a href="https://publications.waset.org/abstracts/search?q=Fei%20He"> Fei He</a>, <a href="https://publications.waset.org/abstracts/search?q=Mihail%20Popescu"> Mihail Popescu</a>, <a href="https://publications.waset.org/abstracts/search?q=Dong%20Xu"> Dong Xu</a> </p> <p class="card-text"><strong>Abstract:</strong></p> As the quantity of biological articles rises, so does the number of biological route figures. Each route figure shows gene names and relationships. Annotating pathway diagrams manually is time-consuming. Advanced image understanding models could speed up curation, but they must be more precise. There is rich information in biological pathway figures. The first step to performing image understanding of these figures is to recognize gene names automatically. Classical optical character recognition methods have been employed for gene name recognition, but they are not optimized for literature mining data. This study devised a method to recognize an image bounding box of gene name as a photo using deep Siamese neural network models to outperform the existing methods using ResNet, DenseNet and Inception architectures, the results obtained about 84% accuracy. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=biological%20pathway" title="biological pathway">biological pathway</a>, <a href="https://publications.waset.org/abstracts/search?q=gene%20identification" title=" gene identification"> gene identification</a>, <a href="https://publications.waset.org/abstracts/search?q=object%20detection" title=" object detection"> object detection</a>, <a href="https://publications.waset.org/abstracts/search?q=Siamese%20network" title=" Siamese network"> Siamese network</a> </p> <a href="https://publications.waset.org/abstracts/160725/gene-names-identity-recognition-using-siamese-network-for-biomedical-publications" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/160725.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">292</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">651</span> English Loanwords in the Egyptian Variety of Arabic: Morphological and Phonological Changes</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Mohamed%20Yacoub">Mohamed Yacoub </a> </p> <p class="card-text"><strong>Abstract:</strong></p> This paper investigates the English loanwords in the Egyptian variety of Arabic and reaches three findings. Data, in the first finding, were collected from Egyptian movies and soap operas; over two hundred words have been borrowed from English, code-switching was not included. These words then have been put into eleven different categories according to their use and part of speech. Finding two addresses the morphological and phonological change that occurred to these words. Regarding the phonological change, eight categories were found in both consonant and vowel variation, five for consonants and three for vowels. Examples were given for each. Regarding the morphological change, five categories were found including the masculine, feminine, dual, broken, and non-pluralize-able nouns. The last finding is the answers to a four-question survey that addresses forty eight native speakers of Egyptian Arabic and found that most participants did not recognize English borrowed words and thought they were originally Arabic and could not give Arabic equivalents for the loanwords that they could recognize. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=sociolinguistics" title="sociolinguistics">sociolinguistics</a>, <a href="https://publications.waset.org/abstracts/search?q=loanwords" title=" loanwords"> loanwords</a>, <a href="https://publications.waset.org/abstracts/search?q=borrowing" title=" borrowing"> borrowing</a>, <a href="https://publications.waset.org/abstracts/search?q=morphology" title=" morphology"> morphology</a>, <a href="https://publications.waset.org/abstracts/search?q=phonology" title=" phonology"> phonology</a>, <a href="https://publications.waset.org/abstracts/search?q=variation" title=" variation"> variation</a>, <a href="https://publications.waset.org/abstracts/search?q=Egyptian%20dialect" title=" Egyptian dialect"> Egyptian dialect</a> </p> <a href="https://publications.waset.org/abstracts/40179/english-loanwords-in-the-egyptian-variety-of-arabic-morphological-and-phonological-changes" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/40179.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">386</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">650</span> A Comparison of Brands Equity between Samsung and Apple in the View of Students of Management Science Faculty, Suan Sunandha Rajabhat University</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Somsak%20Klaysung">Somsak Klaysung</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This study aims to investigate the comparison of brands equity between Samsung and Apple from students of Suan Sunandha Rajabhat University. The research method will using quantitative research, data was collected by questionnaires distributed to communication of arts students in the faculty of management science of Suan Sunandha Rajabhat University for 100 samples by purposive sampling method. Data was analyzed by descriptive statistic including percentage, mean, standard deviation and inferential statistic is t-test for hypothesis testing. The results showed that brands equity between Apple and Samsung brand have the ability to recognize brand from the customer by perceived value of the uniqueness of brand and recall when in a situation that must be purchased (Salience), which is the lowest level in branding and consumers can recognize the capacity of the product (Judgment) and opinions about the quality and reliability when it comes to mobile phones Apple and Samsung brand are not different. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=Apple%20and%20Samsung%20brand" title="Apple and Samsung brand">Apple and Samsung brand</a>, <a href="https://publications.waset.org/abstracts/search?q=brand%20equity" title=" brand equity"> brand equity</a>, <a href="https://publications.waset.org/abstracts/search?q=judgment" title=" judgment"> judgment</a>, <a href="https://publications.waset.org/abstracts/search?q=performance" title=" performance"> performance</a>, <a href="https://publications.waset.org/abstracts/search?q=resonance" title=" resonance"> resonance</a>, <a href="https://publications.waset.org/abstracts/search?q=salience" title=" salience"> salience</a> </p> <a href="https://publications.waset.org/abstracts/39942/a-comparison-of-brands-equity-between-samsung-and-apple-in-the-view-of-students-of-management-science-faculty-suan-sunandha-rajabhat-university" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/39942.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">215</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">649</span> A Web-Based Self-Learning Grammar for Spoken Language Understanding</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=S.%20Biondi">S. Biondi</a>, <a href="https://publications.waset.org/abstracts/search?q=V.%20Catania"> V. Catania</a>, <a href="https://publications.waset.org/abstracts/search?q=R.%20Di%20Natale"> R. Di Natale</a>, <a href="https://publications.waset.org/abstracts/search?q=A.%20R.%20Intilisano"> A. R. Intilisano</a>, <a href="https://publications.waset.org/abstracts/search?q=D.%20Panno"> D. Panno</a> </p> <p class="card-text"><strong>Abstract:</strong></p> One of the major goals of Spoken Dialog Systems (SDS) is to understand what the user utters. In the SDS domain, the Spoken Language Understanding (SLU) Module classifies user utterances by means of a pre-definite conceptual knowledge. The SLU module is able to recognize only the meaning previously included in its knowledge base. Due the vastity of that knowledge, the information storing is a very expensive process. Updating and managing the knowledge base are time-consuming and error-prone processes because of the rapidly growing number of entities like proper nouns and domain-specific nouns. This paper proposes a solution to the problem of Name Entity Recognition (NER) applied to a SDS domain. The proposed solution attempts to automatically recognize the meaning associated with an utterance by using the PANKOW (Pattern based Annotation through Knowledge On the Web) method at runtime. The method being proposed extracts information from the Web to increase the SLU knowledge module and reduces the development effort. In particular, the Google Search Engine is used to extract information from the Facebook social network. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=spoken%20dialog%20system" title="spoken dialog system">spoken dialog system</a>, <a href="https://publications.waset.org/abstracts/search?q=spoken%20language%20understanding" title=" spoken language understanding"> spoken language understanding</a>, <a href="https://publications.waset.org/abstracts/search?q=web%20semantic" title=" web semantic"> web semantic</a>, <a href="https://publications.waset.org/abstracts/search?q=name%20entity%20recognition" title=" name entity recognition"> name entity recognition</a> </p> <a href="https://publications.waset.org/abstracts/12862/a-web-based-self-learning-grammar-for-spoken-language-understanding" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/12862.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">338</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">648</span> A Spatial Point Pattern Analysis to Recognize Fail Bit Patterns in Semiconductor Manufacturing</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Youngji%20Yoo">Youngji Yoo</a>, <a href="https://publications.waset.org/abstracts/search?q=Seung%20Hwan%20Park"> Seung Hwan Park</a>, <a href="https://publications.waset.org/abstracts/search?q=Daewoong%20An"> Daewoong An</a>, <a href="https://publications.waset.org/abstracts/search?q=Sung-Shick%20Kim"> Sung-Shick Kim</a>, <a href="https://publications.waset.org/abstracts/search?q=Jun-Geol%20Baek"> Jun-Geol Baek</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The yield management system is very important to produce high-quality semiconductor chips in the semiconductor manufacturing process. In order to improve quality of semiconductors, various tests are conducted in the post fabrication (FAB) process. During the test process, large amount of data are collected and the data includes a lot of information about defect. In general, the defect on the wafer is the main causes of yield loss. Therefore, analyzing the defect data is necessary to improve performance of yield prediction. The wafer bin map (WBM) is one of the data collected in the test process and includes defect information such as the fail bit patterns. The fail bit has characteristics of spatial point patterns. Therefore, this paper proposes the feature extraction method using the spatial point pattern analysis. Actual data obtained from the semiconductor process is used for experiments and the experimental result shows that the proposed method is more accurately recognize the fail bit patterns. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=semiconductor" title="semiconductor">semiconductor</a>, <a href="https://publications.waset.org/abstracts/search?q=wafer%20bin%20map" title=" wafer bin map"> wafer bin map</a>, <a href="https://publications.waset.org/abstracts/search?q=feature%20extraction" title=" feature extraction"> feature extraction</a>, <a href="https://publications.waset.org/abstracts/search?q=spatial%20point%20patterns" title=" spatial point patterns"> spatial point patterns</a>, <a href="https://publications.waset.org/abstracts/search?q=contour%20map" title=" contour map"> contour map</a> </p> <a href="https://publications.waset.org/abstracts/5010/a-spatial-point-pattern-analysis-to-recognize-fail-bit-patterns-in-semiconductor-manufacturing" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/5010.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">384</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">647</span> Water Detection in Aerial Images Using Fuzzy Sets</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Caio%20Marcelo%20Nunes">Caio Marcelo Nunes</a>, <a href="https://publications.waset.org/abstracts/search?q=Anderson%20da%20Silva%20Soares"> Anderson da Silva Soares</a>, <a href="https://publications.waset.org/abstracts/search?q=Gustavo%20Teodoro%20Laureano"> Gustavo Teodoro Laureano</a>, <a href="https://publications.waset.org/abstracts/search?q=Clarimar%20Jose%20Coelho"> Clarimar Jose Coelho</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This paper presents a methodology to pixel recognition in aerial images using fuzzy $c$-means algorithm. This algorithm is a alternative to recognize areas considering uncertainties and inaccuracies. Traditional clustering technics are used in recognizing of multispectral images of earth's surface. This technics recognize well-defined borders that can be easily discretized. However, in the real world there are many areas with uncertainties and inaccuracies which can be mapped by clustering algorithms that use fuzzy sets. The methodology presents in this work is applied to multispectral images obtained from Landsat-5/TM satellite. The pixels are joined using the $c$-means algorithm. After, a classification process identify the types of surface according the patterns obtained from spectral response of image surface. The classes considered are, exposed soil, moist soil, vegetation, turbid water and clean water. The results obtained shows that the fuzzy clustering identify the real type of the earth's surface. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=aerial%20images" title="aerial images">aerial images</a>, <a href="https://publications.waset.org/abstracts/search?q=fuzzy%20clustering" title=" fuzzy clustering"> fuzzy clustering</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20processing" title=" image processing"> image processing</a>, <a href="https://publications.waset.org/abstracts/search?q=pattern%20recognition" title=" pattern recognition"> pattern recognition</a> </p> <a href="https://publications.waset.org/abstracts/30574/water-detection-in-aerial-images-using-fuzzy-sets" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/30574.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">482</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">646</span> Investigating Transformative Processes through Personal, social, Professional and Educational Development of Adult Graduates in Second Chance Schools in Greece: a Quantitative and Qualitative Survey throughout the Country</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Christina%20Kalogirou">Christina Kalogirou</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The object of this research is to explore the views of Greek Second Chance Schools’ (SCS) graduates regarding their personal, social, professional and educational development after graduation. SCS are addressed to adults who had failed to complete their studies in the nine-year compulsory education. Furthermore, the research focuses on their motives as well as on any possible achievement of transformative processes. The quantitative survey involved in total 426 graduates while in the qualitative survey participated 38 persons, all of whom graduated in the period 2010-2012 from 27 schools throughout the country. The survey was conducted by filling in a structured questionnaire and by carrying out semi-structured interviews. As regards the results, the respondents decided to attend the SCS primarily to acquire knowledge while most of them feel that they managed to meet their goals. Also, graduates recognize that studying in SCS contributed primarily in their social and personal development. In addition, an encouraging fact is that some of the graduates recognize the transformative processes which they experienced during their studies in SCS. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=Adults%20Education" title="Adults Education">Adults Education</a>, <a href="https://publications.waset.org/abstracts/search?q=Motives%20of%20Attendance" title=" Motives of Attendance"> Motives of Attendance</a>, <a href="https://publications.waset.org/abstracts/search?q=Personal-Social-Professional-Educational%20Development" title=" Personal-Social-Professional-Educational Development"> Personal-Social-Professional-Educational Development</a>, <a href="https://publications.waset.org/abstracts/search?q=Transformative%20Processes" title=" Transformative Processes"> Transformative Processes</a>, <a href="https://publications.waset.org/abstracts/search?q=Quantitative%20and%20Qualitative%20Survey" title=" Quantitative and Qualitative Survey"> Quantitative and Qualitative Survey</a> </p> <a href="https://publications.waset.org/abstracts/66711/investigating-transformative-processes-through-personal-social-professional-and-educational-development-of-adult-graduates-in-second-chance-schools-in-greece-a-quantitative-and-qualitative-survey-throughout-the-country" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/66711.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">283</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">645</span> Analysis of Facial Expressions with Amazon Rekognition</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Kashika%20P.%20H.">Kashika P. H.</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The development of computer vision systems has been greatly aided by the efficient and precise detection of images and videos. Although the ability to recognize and comprehend images is a strength of the human brain, employing technology to tackle this issue is exceedingly challenging. In the past few years, the use of Deep Learning algorithms to treat object detection has dramatically expanded. One of the key issues in the realm of image recognition is the recognition and detection of certain notable people from randomly acquired photographs. Face recognition uses a way to identify, assess, and compare faces for a variety of purposes, including user identification, user counting, and classification. With the aid of an accessible deep learning-based API, this article intends to recognize various faces of people and their facial descriptors more accurately. The purpose of this study is to locate suitable individuals and deliver accurate information about them by using the Amazon Rekognition system to identify a specific human from a vast image dataset. We have chosen the Amazon Rekognition system, which allows for more accurate face analysis, face comparison, and face search, to tackle this difficulty. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=Amazon%20rekognition" title="Amazon rekognition">Amazon rekognition</a>, <a href="https://publications.waset.org/abstracts/search?q=API" title=" API"> API</a>, <a href="https://publications.waset.org/abstracts/search?q=deep%20learning" title=" deep learning"> deep learning</a>, <a href="https://publications.waset.org/abstracts/search?q=computer%20vision" title=" computer vision"> computer vision</a>, <a href="https://publications.waset.org/abstracts/search?q=face%20detection" title=" face detection"> face detection</a>, <a href="https://publications.waset.org/abstracts/search?q=text%20detection" title=" text detection"> text detection</a> </p> <a href="https://publications.waset.org/abstracts/174012/analysis-of-facial-expressions-with-amazon-rekognition" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/174012.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">104</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">644</span> Towards Logical Inference for the Arabic Question-Answering</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Wided%20Bakari">Wided Bakari</a>, <a href="https://publications.waset.org/abstracts/search?q=Patrice%20Bellot"> Patrice Bellot</a>, <a href="https://publications.waset.org/abstracts/search?q=Omar%20Trigui"> Omar Trigui</a>, <a href="https://publications.waset.org/abstracts/search?q=Mahmoud%20Neji"> Mahmoud Neji</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This article constitutes an opening to think of the modeling and analysis of Arabic texts in the context of a question-answer system. It is a question of exceeding the traditional approaches focused on morphosyntactic approaches. Furthermore, we present a new approach that analyze a text in order to extract correct answers then transform it to logical predicates. In addition, we would like to represent different levels of information within a text to answer a question and choose an answer among several proposed. To do so, we transform both the question and the text into logical forms. Then, we try to recognize all entailment between them. The results of recognizing the entailment are a set of text sentences that can implicate the user’s question. Our work is now concentrated on an implementation step in order to develop a system of question-answering in Arabic using techniques to recognize textual implications. In this context, the extraction of text features (keywords, named entities, and relationships that link them) is actually considered the first step in our process of text modeling. The second one is the use of techniques of textual implication that relies on the notion of inference and logic representation to extract candidate answers. The last step is the extraction and selection of the desired answer. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=NLP" title="NLP">NLP</a>, <a href="https://publications.waset.org/abstracts/search?q=Arabic%20language" title=" Arabic language"> Arabic language</a>, <a href="https://publications.waset.org/abstracts/search?q=question-answering" title=" question-answering"> question-answering</a>, <a href="https://publications.waset.org/abstracts/search?q=recognition%20text%20entailment" title=" recognition text entailment"> recognition text entailment</a>, <a href="https://publications.waset.org/abstracts/search?q=logic%20forms" title=" logic forms"> logic forms</a> </p> <a href="https://publications.waset.org/abstracts/27265/towards-logical-inference-for-the-arabic-question-answering" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/27265.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">342</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">643</span> A Method for False Alarm Recognition Based on Multi-Classification Support Vector Machine</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Weiwei%20Cui">Weiwei Cui</a>, <a href="https://publications.waset.org/abstracts/search?q=Dejian%20Lin"> Dejian Lin</a>, <a href="https://publications.waset.org/abstracts/search?q=Leigang%20Zhang"> Leigang Zhang</a>, <a href="https://publications.waset.org/abstracts/search?q=Yao%20Wang"> Yao Wang</a>, <a href="https://publications.waset.org/abstracts/search?q=Zheng%20Sun"> Zheng Sun</a>, <a href="https://publications.waset.org/abstracts/search?q=Lianfeng%20Li"> Lianfeng Li</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Built-in test (BIT) is an important technology in testability field, and it is widely used in state monitoring and fault diagnosis. With the improvement of modern equipment performance and complexity, the scope of BIT becomes larger, and it leads to the emergence of false alarm problem. The false alarm makes the health assessment unstable, and it reduces the effectiveness of BIT. The conventional false alarm suppression methods such as repeated test and majority voting cannot meet the requirement for a complicated system, and the intelligence algorithms such as artificial neural networks (ANN) are widely studied and used. However, false alarm has a very low frequency and small sample, yet a method based on ANN requires a large size of training sample. To recognize the false alarm, we propose a method based on multi-classification support vector machine (SVM) in this paper. Firstly, we divide the state of a system into three states: healthy, false-alarm, and faulty. Then we use multi-classification with '1 vs 1' policy to train and recognize the state of a system. Finally, an example of fault injection system is taken to verify the effectiveness of the proposed method by comparing ANN. The result shows that the method is reasonable and effective. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=false%20alarm" title="false alarm">false alarm</a>, <a href="https://publications.waset.org/abstracts/search?q=fault%20diagnosis" title=" fault diagnosis"> fault diagnosis</a>, <a href="https://publications.waset.org/abstracts/search?q=SVM" title=" SVM"> SVM</a>, <a href="https://publications.waset.org/abstracts/search?q=k-means" title=" k-means"> k-means</a>, <a href="https://publications.waset.org/abstracts/search?q=BIT" title=" BIT"> BIT</a> </p> <a href="https://publications.waset.org/abstracts/87531/a-method-for-false-alarm-recognition-based-on-multi-classification-support-vector-machine" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/87531.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">155</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">642</span> Investigation of Surface Electromyograph Signal Acquired from the around Shoulder Muscles of Upper Limb Amputees</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Amanpreet%20Kaur">Amanpreet Kaur</a>, <a href="https://publications.waset.org/abstracts/search?q=Ravinder%20Agarwal"> Ravinder Agarwal</a>, <a href="https://publications.waset.org/abstracts/search?q=Amod%20Kumar"> Amod Kumar</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Surface electromyography is a strategy to measure the muscle activity of the skin. Sensors placed on the skin recognize the electrical current or signal generated by active muscles. A lot of the research has focussed on the detection of signal from upper limb amputee with activity of triceps and biceps muscles. The purpose of this study was to correlate phantom movement and sEMG activity in residual stump muscles of transhumeral amputee from the shoulder muscles. Eight non- amputee and seven right hand amputees were recruited for this study. sEMG data were collected for the trapezius, pectoralis and teres muscles for elevation, protraction and retraction of shoulder. Contrast between the amputees and non-amputees muscles action have been investigated. Subsequently, to investigate the impact of class separability for different motions of shoulder, analysis of variance for experimental recorded data was carried out. Results were analyzed to recognize different shoulder movements and represent a step towards the surface electromyography controlled system for amputees. Difference in F ratio (p < 0.05) values indicates the distinction in mean therefore these analysis helps to determine the independent motion. The identified signal would be used to design more accurate and efficient controllers for the upper-limb amputee for researchers. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=around%20shoulder%20amputation" title="around shoulder amputation">around shoulder amputation</a>, <a href="https://publications.waset.org/abstracts/search?q=surface%20electromyography" title=" surface electromyography"> surface electromyography</a>, <a href="https://publications.waset.org/abstracts/search?q=analysis%20of%20variance" title=" analysis of variance"> analysis of variance</a>, <a href="https://publications.waset.org/abstracts/search?q=features" title=" features"> features</a> </p> <a href="https://publications.waset.org/abstracts/64762/investigation-of-surface-electromyograph-signal-acquired-from-the-around-shoulder-muscles-of-upper-limb-amputees" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/64762.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">433</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">641</span> An Investigation into the Impact of Techno-Entrepreneurship Education on Self-Employment</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Farnaz%20Farzin">Farnaz Farzin</a>, <a href="https://publications.waset.org/abstracts/search?q=Julie%20C.%20Thomson"> Julie C. Thomson</a>, <a href="https://publications.waset.org/abstracts/search?q=Rob%20Dekkers"> Rob Dekkers</a>, <a href="https://publications.waset.org/abstracts/search?q=Geoff%20Whittam"> Geoff Whittam</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Research has shown that techno-entrepreneurship is economically significant. Therefore, it is suggested that teaching techno-entrepreneurship may be important because such programmes would prepare current and future generations of learners to recognize and act on high-technology opportunities. Education in techno-entrepreneurship may increase the knowledge of how to start one’s own enterprise and recognize the technological opportunities for commercialisation to improve decision-making about starting a new venture; also it influence decisions about capturing the business opportunities and turning them into successful ventures. Universities can play a main role in connecting and networking techno-entrepreneurship students towards a cooperative attitude with real business practice and industry knowledge. To investigate and answer whether education for techno-entrepreneurs really helps, this paper chooses a comparison of literature reviews as its method of research. Then, 6 different studies were selected. These particular papers were selected based on a keywords search and as their aim, objectives, and gaps were close to the current research. In addition, they were all based on the influence of techno-entrepreneurship education in self-employment and intention of students to start new ventures. The findings showed that teaching techno-entrepreneurship education may have an influence on students’ intention and their future self-employment, but which courses should be covered and the duration of programmes needs further investigation. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=techno%20entrepreneurship%20education" title="techno entrepreneurship education">techno entrepreneurship education</a>, <a href="https://publications.waset.org/abstracts/search?q=training" title=" training"> training</a>, <a href="https://publications.waset.org/abstracts/search?q=higher%20education" title=" higher education"> higher education</a>, <a href="https://publications.waset.org/abstracts/search?q=intention" title=" intention"> intention</a>, <a href="https://publications.waset.org/abstracts/search?q=self-employment" title=" self-employment"> self-employment</a> </p> <a href="https://publications.waset.org/abstracts/16034/an-investigation-into-the-impact-of-techno-entrepreneurship-education-on-self-employment" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/16034.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">337</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">640</span> Real-Time Recognition of Dynamic Hand Postures on a Neuromorphic System</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Qian%20Liu">Qian Liu</a>, <a href="https://publications.waset.org/abstracts/search?q=Steve%20Furber"> Steve Furber</a> </p> <p class="card-text"><strong>Abstract:</strong></p> To explore how the brain may recognize objects in its general,accurate and energy-efficient manner, this paper proposes the use of a neuromorphic hardware system formed from a Dynamic Video Sensor~(DVS) silicon retina in concert with the SpiNNaker real-time Spiking Neural Network~(SNN) simulator. As a first step in the exploration on this platform a recognition system for dynamic hand postures is developed, enabling the study of the methods used in the visual pathways of the brain. Inspired by the behaviours of the primary visual cortex, Convolutional Neural Networks (CNNs) are modeled using both linear perceptrons and spiking Leaky Integrate-and-Fire (LIF) neurons. In this study's largest configuration using these approaches, a network of 74,210 neurons and 15,216,512 synapses is created and operated in real-time using 290 SpiNNaker processor cores in parallel and with 93.0% accuracy. A smaller network using only 1/10th of the resources is also created, again operating in real-time, and it is able to recognize the postures with an accuracy of around 86.4% -only 6.6% lower than the much larger system. The recognition rate of the smaller network developed on this neuromorphic system is sufficient for a successful hand posture recognition system, and demonstrates a much-improved cost to performance trade-off in its approach. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=spiking%20neural%20network%20%28SNN%29" title="spiking neural network (SNN)">spiking neural network (SNN)</a>, <a href="https://publications.waset.org/abstracts/search?q=convolutional%20neural%20network%20%28CNN%29" title=" convolutional neural network (CNN)"> convolutional neural network (CNN)</a>, <a href="https://publications.waset.org/abstracts/search?q=posture%20recognition" title=" posture recognition"> posture recognition</a>, <a href="https://publications.waset.org/abstracts/search?q=neuromorphic%20system" title=" neuromorphic system"> neuromorphic system</a> </p> <a href="https://publications.waset.org/abstracts/20330/real-time-recognition-of-dynamic-hand-postures-on-a-neuromorphic-system" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/20330.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">472</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">639</span> Analysis on Yogyakarta Istimewa Citygates on Urban Area Arterial Roads</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Nizar%20Caraka%20Trihanasia">Nizar Caraka Trihanasia</a>, <a href="https://publications.waset.org/abstracts/search?q=Suparwoko"> Suparwoko</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The purpose of this paper is to analyze the design model of city gates on arterial roads as Yogyakarta’s “Istimewa” (special) identity. City marketing has become a trend among cities in the past few years. It began to compete with each other in promoting their identity to the world. One of the easiest ways to recognize the identity is by knowing the image of the city which can be seen through architectural buildings or urban elements. The idea is to recognize how the image of the city can represent Yogyakarta’s identity, which is limited to the contribution of the city gates distinctiveness on Yogyakarta urban area. This study has concentrated on the aspect of city gates as built environment that provides a diversity, configuration and scale of development that promotes a sense of place and community. The visual analysis will be conducted to interpreted the existing Yogyakarta city gates (as built environment) focussing on some variables of 1) character and pattern, 2) circulation system establishment, and 3) open space utilisation. Literature review and site survey are also conducted to understand the relationship between the built environment and the sense of place in the community. This study suggests that visually the Yogyakarta city gate model has strong visual characters and pattern by using the concept of a sense of place of Yogyakarta community value. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=visual%20analysis" title="visual analysis">visual analysis</a>, <a href="https://publications.waset.org/abstracts/search?q=model" title=" model"> model</a>, <a href="https://publications.waset.org/abstracts/search?q=Yogyakarta%20%E2%80%9CIstimewa%E2%80%9D" title=" Yogyakarta “Istimewa”"> Yogyakarta “Istimewa”</a>, <a href="https://publications.waset.org/abstracts/search?q=citygates" title=" citygates"> citygates</a> </p> <a href="https://publications.waset.org/abstracts/53761/analysis-on-yogyakarta-istimewa-citygates-on-urban-area-arterial-roads" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/53761.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">258</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">638</span> The Time-Frequency Domain Reflection Method for Aircraft Cable Defects Localization</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Reza%20Rezaeipour%20Honarmandzad">Reza Rezaeipour Honarmandzad</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This paper introduces an aircraft cable fault detection and location method in light of TFDR keeping in mind the end goal to recognize the intermittent faults adequately and to adapt to the serial and after-connector issues being hard to be distinguished in time domain reflection. In this strategy, the correlation function of reflected and reference signal is used to recognize and find the airplane fault as per the qualities of reflected and reference signal in time-frequency domain, so the hit rate of distinguishing and finding intermittent faults can be enhanced adequately. In the work process, the reflected signal is interfered by the noise and false caution happens frequently, so the threshold de-noising technique in light of wavelet decomposition is used to diminish the noise interference and lessen the shortcoming alert rate. At that point the time-frequency cross connection capacity of the reference signal and the reflected signal based on Wigner-Ville appropriation is figured so as to find the issue position. Finally, LabVIEW is connected to execute operation and control interface, the primary capacity of which is to connect and control MATLAB and LABSQL. Using the solid computing capacity and the bottomless capacity library of MATLAB, the signal processing turn to be effortlessly acknowledged, in addition LabVIEW help the framework to be more dependable and upgraded effectively. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=aircraft%20cable" title="aircraft cable">aircraft cable</a>, <a href="https://publications.waset.org/abstracts/search?q=fault%20location" title=" fault location"> fault location</a>, <a href="https://publications.waset.org/abstracts/search?q=TFDR" title=" TFDR"> TFDR</a>, <a href="https://publications.waset.org/abstracts/search?q=LabVIEW" title=" LabVIEW"> LabVIEW</a> </p> <a href="https://publications.waset.org/abstracts/35568/the-time-frequency-domain-reflection-method-for-aircraft-cable-defects-localization" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/35568.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">476</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">637</span> Recognition and Counting Algorithm for Sub-Regional Objects in a Handwritten Image through Image Sets</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Kothuri%20Sriraman">Kothuri Sriraman</a>, <a href="https://publications.waset.org/abstracts/search?q=Mattupalli%20Komal%20Teja"> Mattupalli Komal Teja</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In this paper, a novel algorithm is proposed for the recognition of hulls in a hand written images that might be irregular or digit or character shape. Identification of objects and internal objects is quite difficult to extract, when the structure of the image is having bulk of clusters. The estimation results are easily obtained while going through identifying the sub-regional objects by using the SASK algorithm. Focusing mainly to recognize the number of internal objects exist in a given image, so as it is shadow-free and error-free. The hard clustering and density clustering process of obtained image rough set is used to recognize the differentiated internal objects, if any. In order to find out the internal hull regions it involves three steps pre-processing, Boundary Extraction and finally, apply the Hull Detection system. By detecting the sub-regional hulls it can increase the machine learning capability in detection of characters and it can also be extend in order to get the hull recognition even in irregular shape objects like wise black holes in the space exploration with their intensities. Layered hulls are those having the structured layers inside while it is useful in the Military Services and Traffic to identify the number of vehicles or persons. This proposed SASK algorithm is helpful in making of that kind of identifying the regions and can useful in undergo for the decision process (to clear the traffic, to identify the number of persons in the opponent’s in the war). 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=chain%20code" title="chain code">chain code</a>, <a href="https://publications.waset.org/abstracts/search?q=Hull%20regions" title=" Hull regions"> Hull regions</a>, <a href="https://publications.waset.org/abstracts/search?q=Hough%20transform" title=" Hough transform"> Hough transform</a>, <a href="https://publications.waset.org/abstracts/search?q=Hull%20recognition" title=" Hull recognition"> Hull recognition</a>, <a href="https://publications.waset.org/abstracts/search?q=Layered%20Outline%20Extraction" title=" Layered Outline Extraction"> Layered Outline Extraction</a>, <a href="https://publications.waset.org/abstracts/search?q=SASK%20algorithm" title=" SASK algorithm"> SASK algorithm</a> </p> <a href="https://publications.waset.org/abstracts/15674/recognition-and-counting-algorithm-for-sub-regional-objects-in-a-handwritten-image-through-image-sets" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/15674.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">348</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">636</span> Scentscape of the Soul as a Direct Channel of Communication with the Psyche and Physical Body</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Elena%20Roadhouse">Elena Roadhouse</a> </p> <p class="card-text"><strong>Abstract:</strong></p> “When it take the kitchen middens from the latest canning session out to the compost before going to bed, the orchestra is in full chorus. Night vapors and scents from the earth mingle with the fragrance of honeysuckle nearby and basil grown in the compost. They merge into the rhythmic pulse of night”. William Longgood Carl Jung did not specifically recognize scent and olfactory function as a window into the psyche. He did recognize instinct and the natural history of mankind as key to understanding and reconnecting with the Psyche. The progressive path of modern humans has brought incredible scientific and industrial advancements that have changed the human relationship with Mother Earth, the primal wisdom of mankind, and led to the loss of instinct. The olfactory bulbs are an integral part of our ancient brain and has evolved in a way that is proportional to the human separation with the instinctual self. If olfaction is a gateway to our instinct, then it is also a portal to the soul. Natural aromatics are significant and powerful instruments for supporting the mind, our emotional selves, and our bodies. This paper aims to shed light on the important role of scent in the understanding of the existence of the psyche, generational trauma, and archetypal fragrance. Personalized Natural Perfume combined with mindfulness practices can be used as an effective behavioral conditioning tool to promote the healing of transgenerational and individual trauma, the fragmented self, and the physical body. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=scentscape%20of%20the%20soul" title="scentscape of the soul">scentscape of the soul</a>, <a href="https://publications.waset.org/abstracts/search?q=psyche" title=" psyche"> psyche</a>, <a href="https://publications.waset.org/abstracts/search?q=individuation" title=" individuation"> individuation</a>, <a href="https://publications.waset.org/abstracts/search?q=epigenetics" title=" epigenetics"> epigenetics</a>, <a href="https://publications.waset.org/abstracts/search?q=depth%20psychology" title=" depth psychology"> depth psychology</a>, <a href="https://publications.waset.org/abstracts/search?q=carl%20Jung" title=" carl Jung"> carl Jung</a>, <a href="https://publications.waset.org/abstracts/search?q=instinct" title=" instinct"> instinct</a>, <a href="https://publications.waset.org/abstracts/search?q=trauma" title=" trauma"> trauma</a>, <a href="https://publications.waset.org/abstracts/search?q=archetypal%20scent" title=" archetypal scent"> archetypal scent</a>, <a href="https://publications.waset.org/abstracts/search?q=personal%20myth" title=" personal myth"> personal myth</a>, <a href="https://publications.waset.org/abstracts/search?q=holistic%20wellness" title=" holistic wellness"> holistic wellness</a>, <a href="https://publications.waset.org/abstracts/search?q=natural%20perfumery" title=" natural perfumery"> natural perfumery</a> </p> <a href="https://publications.waset.org/abstracts/149143/scentscape-of-the-soul-as-a-direct-channel-of-communication-with-the-psyche-and-physical-body" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/149143.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">104</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">635</span> [Keynote Talk]: The Intoxicated Eyewitness: Effect of Alcohol Consumption on Identification Accuracy in Lineup</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Vikas%20S.%20Minchekar">Vikas S. Minchekar</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The eyewitness is a crucial source of evidence in the criminal judicial system. However, rely on the reminiscence of an eyewitness especially intoxicated eyewitness is not always judicious. It might lead to some serious consequences. Day by day, alcohol-related crimes or the criminal incidences in bars, nightclubs, and restaurants are increasing rapidly. Tackling such cases is very complicated to any investigation officers. The people in that incidents are violated due to the alcohol consumption hence, their ability to identify the suspects or recall these phenomena is affected. The studies on the effects of alcohol consumption on motor activities such as driving and surgeries have received much attention. However, the effect of alcohol intoxication on memory has received little attention from the psychology, law, forensic and criminology scholars across the world. In the Indian context, the published articles on this issue are equal to none up to present day. This field experiment investigation aimed at to finding out the effect of alcohol consumption on identification accuracy in lineups. 
Forty adult social drinkers and twenty sober adults were randomly recruited for the study. The sober adults were assigned to a 'placebo' beverage group, while the social drinkers were divided into two groups, a 'low dose' of alcohol (0.2 g/kg) and a 'high dose' of alcohol (0.8 g/kg), so that their levels of blood-alcohol concentration (BAC) would differ. After the beverages were administered to the placebo group and the liquor to the social drinkers over a period of 40 to 50 minutes, a five-minute video clip of a mock crime was shown to all participants in groups of four to five members. After exposure to the video clip, subjects were given 10 portraits and asked to recognize whether the persons shown were involved in the mock crime or not. They were also asked to describe the incident. The subjects were given two opportunities to recognize the portraits and to describe the events: the first immediately after the video clip and the second 24 hours later. The obtained data were analyzed by one-way ANOVA and Scheffé's post hoc multiple comparison tests. The results indicated that the 'high dose' group was markedly different from the 'placebo' and 'low dose' groups, while the 'placebo' and 'low dose' groups performed equally. Subjects in the 'high dose' group recognized only 20% of faces correctly, while subjects in the 'placebo' and 'low dose' groups recognized 90%. This study implies that intoxicated witnesses are less accurate in recognizing suspects and less capable of describing the incidents in which a crime has taken place. However, the study does not assert that intoxicated eyewitnesses are generally less trustworthy than their sober counterparts. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=intoxicated%20eyewitness" title="intoxicated eyewitness">intoxicated eyewitness</a>, <a href="https://publications.waset.org/abstracts/search?q=memory" title=" memory"> memory</a>, <a href="https://publications.waset.org/abstracts/search?q=social%20drinkers" title=" social drinkers"> social drinkers</a>, <a href="https://publications.waset.org/abstracts/search?q=lineups" title=" lineups"> lineups</a> </p> <a href="https://publications.waset.org/abstracts/61407/keynote-talk-the-intoxicated-eyewitness-effect-of-alcohol-consumption-on-identification-accuracy-in-lineup" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/61407.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">268</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">634</span> Deliberate Learning and Practice: Enhancing Situated Learning Approach in Professional Communication Course</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Susan%20Lee">Susan Lee</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Situated learning principles are adopted in the design of the module, Professional Communication, in its iteration of tasks and assignments to create a learning environment that simulates workplace reality. The success of situated learning is met when students are able to transfer and apply their skills beyond the classroom, in their personal lives, and in the workplace.
The learning process should help students recognize the relevance and opportunities for application. In the module’s learning component on negotiation, cases are created based on scenarios inspired by industry practices. The cases simulate scenarios that students on the course may encounter when they enter the workforce and take on executive roles in the real estate sector. Engaging in the cases has enhanced students’ learning experience as they apply interpersonal communication skills in executive negotiation contexts. Through the process of case analysis, role-playing, and peer feedback, students are placed in an experiential learning space to think and act in a deliberate manner, not only as students but as the professionals they will graduate to be. The immersive skills practices enable students to continuously apply a range of verbal and non-verbal communication skills purposefully as they stage their negotiations. The recurring theme in students' feedback is their awareness of the authentic workplace experiences offered through visceral role-playing. Students also note relevant opportunities for the future transfer of the skills acquired. This indicates that students recognize the possibility of encountering similar negotiation episodes in the real world and realize they possess the negotiation tools and communication skills to deliberately apply them when these opportunities arise outside the classroom. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=deliberate%20practice" title="deliberate practice">deliberate practice</a>, <a href="https://publications.waset.org/abstracts/search?q=interpersonal%20communication%20skills" title=" interpersonal communication skills"> interpersonal communication skills</a>, <a href="https://publications.waset.org/abstracts/search?q=role-play" title=" role-play"> role-play</a>, <a href="https://publications.waset.org/abstracts/search?q=situated%20learning" title=" situated learning"> situated learning</a> </p> <a href="https://publications.waset.org/abstracts/138338/deliberate-learning-and-practice-enhancing-situated-learning-approach-in-professional-communication-course" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/138338.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">214</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">633</span> Developmental Psycholinguistic Approach to Conversational Skills: A Continuum of the Sensitivity to Gricean Maxims</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Zsuzsanna%20Schnell">Zsuzsanna Schnell</a>, <a href="https://publications.waset.org/abstracts/search?q=Francesca%20Ervas"> Francesca Ervas</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Background: Our experimental pragmatic study confirms a basic tenet of Relevance-theoretic views in the philosophy of language. It draws up a developmental trajectory of the maxims, revealing the cognitive difficulty of their interpretation, their place relative to each other, and the order they may follow in development.
A central claim of the present research is that social-cognitive skills play a significant role in inferential meaning construction. Children passing the False Belief Test are significantly more successful in tasks measuring the recognition of the infringement of conversational maxims. Aims and method: We examine preschoolers' conversational and pragmatic competence in view of their mentalization skills. To do so, we use a set of linguistic tasks containing 5 short scenarios for each Gricean maxim. We measure preschoolers’ ToM performance with a first- and second-order ToM task and compare participants’ ability to recognize the infringement of the Gricean maxims in view of their social-cognitive skills. Results: Findings suggest that Theory of Mind has a predictive force of 75% concerning the ability to follow Gricean maxims efficiently. ToM proved to be a significant factor in predicting the group’s performance and success rates in 3 out of 4 maxim infringement recognition tasks: in the Quantity, Relevance and Manner conditions, but not in the Quality trial. Conclusions: Our results confirm that children’s communicative competence in social contexts requires the development of higher-order social-cognitive reasoning. They reveal the cognitive effort needed to recognize the infringement of each maxim, yielding a continuum of their cognitive difficulty and a trajectory of development. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=developmental%20pragmatics" title="developmental pragmatics">developmental pragmatics</a>, <a href="https://publications.waset.org/abstracts/search?q=social%20cognition" title=" social cognition"> social cognition</a>, <a href="https://publications.waset.org/abstracts/search?q=preschoolers" title=" preschoolers"> preschoolers</a>, <a href="https://publications.waset.org/abstracts/search?q=maxim%20infringement" title=" maxim infringement"> maxim infringement</a>, <a href="https://publications.waset.org/abstracts/search?q=Gricean%20pragmatics" title=" Gricean pragmatics"> Gricean pragmatics</a> </p> <a href="https://publications.waset.org/abstracts/188865/developmental-psycholinguistic-approach-to-conversational-skills-a-continuum-of-the-sensitivity-to-gricean-maxims" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/188865.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">30</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">632</span> Leadership in the Era of AI: Growing Organizational Intelligence</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Mark%20Salisbury">Mark Salisbury</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The arrival of artificially intelligent avatars and the automation they bring is worrying many of us, not only for our own livelihoods but also for the jobs that may be lost to our kids. We worry about what our place will be as human beings in this new economy, much of which will be conducted online in the metaverse – a network of 3D virtual worlds – working with intelligent machines.
The Future of Leadership was written to address these fears and show what our place will be – the right place – in this new economy of AI avatars, automation, and 3D virtual worlds. But to be successful in this new economy, our job will be to bring wisdom to our workplace and the marketplace. And we will use AI avatars and 3D virtual worlds to do it. However, this book is about more than AI and the avatars that we will work with in the metaverse. It’s about building organizational intelligence (OI) -- the capability of an organization to comprehend and create knowledge relevant to its purpose; in other words, the intellectual capacity of the entire organization. Increasing organizational intelligence requires a new kind of knowledge worker, a wisdom worker, which in turn requires a new kind of leadership. This book begins the story of how to become a leader of wisdom workers and be successful in the emerging wisdom economy. After this presentation, conference participants will be able to do the following: Recognize the characteristics of the new generation of wisdom workers and how they differ from their predecessors. Recognize that new leadership methods and techniques are needed to lead this new generation of wisdom workers. Apply personal and professional values – personal integrity, belief in something larger than yourself, and keeping the best interest of others in mind – to improve your work performance and lead others. Exhibit an attitude of confidence, courage, and reciprocity of sharing knowledge to increase your productivity and influence others. Leverage artificial intelligence to accelerate your ability to learn, augment your decision-making, and influence others. Utilize new technologies to communicate with human colleagues and intelligent machines to develop better solutions more quickly.
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=metaverse" title="metaverse">metaverse</a>, <a href="https://publications.waset.org/abstracts/search?q=generative%20artificial%20intelligence" title=" generative artificial intelligence"> generative artificial intelligence</a>, <a href="https://publications.waset.org/abstracts/search?q=automation" title=" automation"> automation</a>, <a href="https://publications.waset.org/abstracts/search?q=leadership" title=" leadership"> leadership</a>, <a href="https://publications.waset.org/abstracts/search?q=organizational%20intelligence" title=" organizational intelligence"> organizational intelligence</a>, <a href="https://publications.waset.org/abstracts/search?q=wisdom%20worker" title=" wisdom worker"> wisdom worker</a> </p> <a href="https://publications.waset.org/abstracts/183106/leadership-in-the-era-of-ai-growing-organizational-intelligence" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/183106.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">43</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">631</span> Cloud Computing: Deciding Whether It Is Easier or Harder to Defend Against Cyber Attacks</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Emhemed%20Shaklawoon">Emhemed Shaklawoon</a>, <a href="https://publications.waset.org/abstracts/search?q=Ibrahim%20Althomali"> Ibrahim Althomali</a> </p> <p class="card-text"><strong>Abstract:</strong></p> We propose that we identify different defense mechanisms that were used before the introduction of the cloud and compare if their protection mechanisms are still valuable and to what degree. Note that in order to defend against vulnerability, we must know how this vulnerability is abused in an attack. Only then, we will be able to recognize if it is easier or harder to defend against cyber attacks. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=cloud%20computing" title="cloud computing">cloud computing</a>, <a href="https://publications.waset.org/abstracts/search?q=privacy" title=" privacy"> privacy</a>, <a href="https://publications.waset.org/abstracts/search?q=cyber%20attacks" title=" cyber attacks"> cyber attacks</a>, <a href="https://publications.waset.org/abstracts/search?q=defend%20the%20cloud" title=" defend the cloud"> defend the cloud</a> </p> <a href="https://publications.waset.org/abstracts/19135/cloud-computing-deciding-whether-it-is-easier-or-harder-to-defend-against-cyber-attacks" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/19135.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">422</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">630</span> Information Literacy Initiatives in India in Present Era Age</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Darshan%20Lal">Darshan Lal</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The paper describes the concept of Information literacy. It is a critical component of this information age. Information literacy is the vital process in modern changing world. Information Literacy initiatives in India was also discussed. Paper also discussed Information literacy programmes for LIS professionals. Information literacy makes person capable to recognize when information is needed and how to locate, evaluate and use effectively of the needed information. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=information%20literacy" title="information literacy">information literacy</a>, <a href="https://publications.waset.org/abstracts/search?q=information%20communication%20technology%20%28ICT%29" title=" information communication technology (ICT)"> information communication technology (ICT)</a>, <a href="https://publications.waset.org/abstracts/search?q=information%20literacy%20programmes" title=" information literacy programmes"> information literacy programmes</a> </p> <a href="https://publications.waset.org/abstracts/28560/information-literacy-initiatives-in-india-in-present-era-age" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/28560.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">371</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">629</span> Epidemiology and Jeopardy Aspect of Febrile Neutropenia Patients by Means of Infectious Maladies</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Pouya%20Karimi">Pouya Karimi</a>, <a href="https://publications.waset.org/abstracts/search?q=Ramin%20Ghasemi%20Shayan"> Ramin Ghasemi Shayan</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Conclusions of the sort and setting of observational treatment for immunocompromised patients with fever are confused by the qualities of the hidden disease and the impacts of medications previously got, just as by changing microbiological examples and patterns in sedate obstruction at national and institutional levels. A few frameworks have been proposed to recognize patients who could profit by outpatient anti-infection treatment from patients who require hospitalization. Useful contemplations may choose whether the fundamental checking during the time of neutropenia can be accomplished. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=microbiology" title="microbiology">microbiology</a>, <a href="https://publications.waset.org/abstracts/search?q=infectious" title=" infectious"> infectious</a>, <a href="https://publications.waset.org/abstracts/search?q=neutropenia" title=" neutropenia"> neutropenia</a>, <a href="https://publications.waset.org/abstracts/search?q=epidemiology" title=" epidemiology "> epidemiology </a> </p> <a href="https://publications.waset.org/abstracts/123368/epidemiology-and-jeopardy-aspect-of-febrile-neutropenia-patients-by-means-of-infectious-maladies" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/123368.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">162</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">628</span> The Role of Immunologic Diamonds in Dealing with Mycobacterium Tuberculosis; Responses of Immune Cells in Affliction to the Respiratory Tuberculosis</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Seyyed%20Mohammad%20Amin%20Mousavi%20Sagharchi">Seyyed Mohammad Amin Mousavi Sagharchi</a>, <a href="https://publications.waset.org/abstracts/search?q=Elham%20Javanroudi"> Elham Javanroudi</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Introduction: Tuberculosis (TB) is a known disease with hidden features caused by Mycobacterium tuberculosis (MTB). This disease, which is one of the 10 deadliest in the world, has caused millions of deaths in recent decades. Furthermore, TB is responsible for infecting about 30% population of world. Like any infection, TB can activate the immune system by locating and colonization in the human body, especially in the alveoli. TB is granulomatosis, so MTB can absorb the host’s immune cells and other cells to form granuloma. Method: Different databases (e.g., PubMed) were recruited to prepare this paper and fulfill our goals to search and find effective papers and investigations. Results: Immune response to MTB is related to T cell killers and contains CD1, CD4, and CD8 T lymphocytes. CD1 lymphocytes can recognize glycolipids, which highly exist in the Mycobacterial fatty cell wall. CD4 lymphocytes and macrophages form granuloma, and it is the main line of immune response to Mycobacteria. On the other hand, CD8 cells have cytolytic function for directly killing MTB by secretion of granulysin. Other functions and secretion to the deal are interleukin-12 (IL-12) by induction of expression interferon-γ (INF-γ) for macrophages activation and creating a granuloma, and tumor necrosis factor (TNF) by promoting macrophage phagolysosomal fusion. Conclusion: Immune cells in battle with MTB are macrophages, dendritic cells (DCs), neutrophils, and natural killer (NK) cells. These immune cells can recognize the Mycobacterium by various receptors, including Toll-like receptors (TLRs), Nod-like receptors (NLRs), and C-type lectin receptors (CLRs) located in the cell surface. In human alveoli exist about 50 dendritic macrophages, which have close communication with other immune cells in the circulating system and epithelial cells to deal with Mycobacteria. 
Against these immune cells, MTB deploys several factors (e.g., cord factor, O-antigen, lipoarabinomannan, sulfatides, and adenylate cyclase) and functional strategies (e.g., inhibition of macrophages). <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=mycobacterium%20tuberculosis" title="mycobacterium tuberculosis">mycobacterium tuberculosis</a>, <a href="https://publications.waset.org/abstracts/search?q=immune%20responses" title=" immune responses"> immune responses</a>, <a href="https://publications.waset.org/abstracts/search?q=immunological%20mechanisms" title=" immunological mechanisms"> immunological mechanisms</a>, <a href="https://publications.waset.org/abstracts/search?q=respiratory%20tuberculosis" title=" respiratory tuberculosis"> respiratory tuberculosis</a> </p> <a href="https://publications.waset.org/abstracts/165031/the-role-of-immunologic-diamonds-in-dealing-with-mycobacterium-tuberculosis-responses-of-immune-cells-in-affliction-to-the-respiratory-tuberculosis" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/165031.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">109</span> </span> </div> </div> <ul class="pagination"> <li class="page-item disabled"><span class="page-link">‹</span></li> <li class="page-item active"><span class="page-link">1</span></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=recognize&page=2">2</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=recognize&page=3">3</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=recognize&page=4">4</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=recognize&page=5">5</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=recognize&page=6">6</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=recognize&page=7">7</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=recognize&page=8">8</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=recognize&page=9">9</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=recognize&page=10">10</a></li> <li class="page-item disabled"><span class="page-link">...</span></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=recognize&page=21">21</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=recognize&page=22">22</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=recognize&page=2" rel="next">›</a></li> </ul> </div> </main> <footer> <div id="infolinks" class="pt-3 pb-2"> <div class="container"> <div style="background-color:#f5f5f5;" class="p-3"> <div class="row"> <div class="col-md-2"> <ul class="list-unstyled"> About <li><a href="https://waset.org/page/support">About Us</a></li> <li><a href="https://waset.org/page/support#legal-information">Legal</a></li> <li><a target="_blank" rel="nofollow" 
href="https://publications.waset.org/static/files/WASET-16th-foundational-anniversary.pdf">WASET celebrates its 16th foundational anniversary</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Account <li><a href="https://waset.org/profile">My Account</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Explore <li><a href="https://waset.org/disciplines">Disciplines</a></li> <li><a href="https://waset.org/conferences">Conferences</a></li> <li><a href="https://waset.org/conference-programs">Conference Program</a></li> <li><a href="https://waset.org/committees">Committees</a></li> <li><a href="https://publications.waset.org">Publications</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Research <li><a href="https://publications.waset.org/abstracts">Abstracts</a></li> <li><a href="https://publications.waset.org">Periodicals</a></li> <li><a href="https://publications.waset.org/archive">Archive</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Open Science <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Science-Philosophy.pdf">Open Science Philosophy</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Science-Award.pdf">Open Science Award</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Society-Open-Science-and-Open-Innovation.pdf">Open Innovation</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Postdoctoral-Fellowship-Award.pdf">Postdoctoral Fellowship Award</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Scholarly-Research-Review.pdf">Scholarly Research Review</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Support <li><a href="https://waset.org/page/support">Support</a></li> <li><a href="https://waset.org/profile/messages/create">Contact Us</a></li> <li><a href="https://waset.org/profile/messages/create">Report Abuse</a></li> </ul> </div> </div> </div> </div> </div> <div class="container text-center"> <hr style="margin-top:0;margin-bottom:.3rem;"> <a href="https://creativecommons.org/licenses/by/4.0/" target="_blank" class="text-muted small">Creative Commons Attribution 4.0 International License</a> <div id="copy" class="mt-2">© 2024 World Academy of Science, Engineering and Technology</div> </div> </footer> <a href="javascript:" id="return-to-top"><i class="fas fa-arrow-up"></i></a> <div class="modal" id="modal-template"> <div class="modal-dialog"> <div class="modal-content"> <div class="row m-0 mt-1"> <div class="col-md-12"> <button type="button" class="close" data-dismiss="modal" aria-label="Close"><span aria-hidden="true">×</span></button> </div> </div> <div class="modal-body"></div> </div> </div> </div> <script src="https://cdn.waset.org/static/plugins/jquery-3.3.1.min.js"></script> <script src="https://cdn.waset.org/static/plugins/bootstrap-4.2.1/js/bootstrap.bundle.min.js"></script> <script src="https://cdn.waset.org/static/js/site.js?v=150220211556"></script> <script> jQuery(document).ready(function() { /*jQuery.get("https://publications.waset.org/xhr/user-menu", function (response) { jQuery('#mainNavMenu').append(response); });*/ jQuery.get({ url: "https://publications.waset.org/xhr/user-menu", cache: false }).then(function(response){ jQuery('#mainNavMenu').append(response); }); }); </script> </body> </html>