<!DOCTYPE html> <html lang="en" dir="ltr"> <head> <!-- Google tag (gtag.js) --> <script async src="https://www.googletagmanager.com/gtag/js?id=G-P63WKM1TM1"></script> <script> window.dataLayer = window.dataLayer || []; function gtag(){dataLayer.push(arguments);} gtag('js', new Date()); gtag('config', 'G-P63WKM1TM1'); </script> <!-- Yandex.Metrika counter --> <script type="text/javascript" > (function(m,e,t,r,i,k,a){m[i]=m[i]||function(){(m[i].a=m[i].a||[]).push(arguments)}; m[i].l=1*new Date(); for (var j = 0; j < document.scripts.length; j++) {if (document.scripts[j].src === r) { return; }} k=e.createElement(t),a=e.getElementsByTagName(t)[0],k.async=1,k.src=r,a.parentNode.insertBefore(k,a)}) (window, document, "script", "https://mc.yandex.ru/metrika/tag.js", "ym"); ym(55165297, "init", { clickmap:false, trackLinks:true, accurateTrackBounce:true, webvisor:false }); </script> <noscript><div><img src="https://mc.yandex.ru/watch/55165297" style="position:absolute; left:-9999px;" alt="" /></div></noscript> <!-- /Yandex.Metrika counter --> <!-- Matomo --> <!-- End Matomo Code --> <title>Search results for: Bag of Visual Words (BOVW)</title> <meta name="description" content="Search results for: Bag of Visual Words (BOVW)"> <meta name="keywords" content="Bag of Visual Words (BOVW)"> <meta name="viewport" content="width=device-width, initial-scale=1, minimum-scale=1, maximum-scale=1, user-scalable=no"> <meta charset="utf-8"> <link href="https://cdn.waset.org/favicon.ico" type="image/x-icon" rel="shortcut icon"> <link href="https://cdn.waset.org/static/plugins/bootstrap-4.2.1/css/bootstrap.min.css" rel="stylesheet"> <link href="https://cdn.waset.org/static/plugins/fontawesome/css/all.min.css" rel="stylesheet"> <link href="https://cdn.waset.org/static/css/site.css?v=150220211555" rel="stylesheet"> </head> <body> <header> <div class="container"> <nav class="navbar navbar-expand-lg navbar-light"> <a class="navbar-brand" href="https://waset.org"> <img 
src="https://cdn.waset.org/static/images/wasetc.png" alt="Open Science Research Excellence" title="Open Science Research Excellence" /> </a> <button class="d-block d-lg-none navbar-toggler ml-auto" type="button" data-toggle="collapse" data-target="#navbarMenu" aria-controls="navbarMenu" aria-expanded="false" aria-label="Toggle navigation"> <span class="navbar-toggler-icon"></span> </button> <div class="w-100"> <div class="d-none d-lg-flex flex-row-reverse"> <form method="get" action="https://waset.org/search" class="form-inline my-2 my-lg-0"> <input class="form-control mr-sm-2" type="search" placeholder="Search Conferences" value="Bag of Visual Words (BOVW)" name="q" aria-label="Search"> <button class="btn btn-light my-2 my-sm-0" type="submit"><i class="fas fa-search"></i></button> </form> </div> <div class="collapse navbar-collapse mt-1" id="navbarMenu"> <ul class="navbar-nav ml-auto align-items-center" id="mainNavMenu"> <li class="nav-item"> <a class="nav-link" href="https://waset.org/conferences" title="Conferences in 2024/2025/2026">Conferences</a> </li> <li class="nav-item"> <a class="nav-link" href="https://waset.org/disciplines" title="Disciplines">Disciplines</a> </li> <li class="nav-item"> <a class="nav-link" href="https://waset.org/committees" rel="nofollow">Committees</a> </li> <li class="nav-item dropdown"> <a class="nav-link dropdown-toggle" href="#" id="navbarDropdownPublications" role="button" data-toggle="dropdown" aria-haspopup="true" aria-expanded="false"> Publications </a> <div class="dropdown-menu" aria-labelledby="navbarDropdownPublications"> <a class="dropdown-item" href="https://publications.waset.org/abstracts">Abstracts</a> <a class="dropdown-item" href="https://publications.waset.org">Periodicals</a> <a class="dropdown-item" href="https://publications.waset.org/archive">Archive</a> </div> </li> <li class="nav-item"> <a class="nav-link" href="https://waset.org/page/support" title="Support">Support</a> </li> </ul> </div> </div> </nav> </div> 
</header> <main> <div class="container mt-4"> <div class="row"> <div class="col-md-9 mx-auto"> <form method="get" action="https://publications.waset.org/abstracts/search"> <div id="custom-search-input"> <div class="input-group"> <i class="fas fa-search"></i> <input type="text" class="search-query" name="q" placeholder="Author, Title, Abstract, Keywords" value="Bag of Visual Words (BOVW)"> <input type="submit" class="btn_search" value="Search"> </div> </div> </form> </div> </div> <div class="row mt-3"> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Commenced</strong> in January 2007</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Frequency:</strong> Monthly</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Edition:</strong> International</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Paper Count:</strong> 3107</div> </div> </div> </div> <h1 class="mt-3 mb-3 text-center" style="font-size:1.6rem;">Search results for: Bag of Visual Words (BOVW)</h1> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2837</span> Similar Script Character Recognition on Kannada and Telugu</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Gurukiran%20Veerapur">Gurukiran Veerapur</a>, <a href="https://publications.waset.org/abstracts/search?q=Nytik%20Birudavolu"> Nytik Birudavolu</a>, <a href="https://publications.waset.org/abstracts/search?q=Seetharam%20U.%20N."> Seetharam U. N.</a>, <a href="https://publications.waset.org/abstracts/search?q=Chandravva%20Hebbi"> Chandravva Hebbi</a>, <a href="https://publications.waset.org/abstracts/search?q=R.%20Praneeth%20Reddy"> R. 
Praneeth Reddy</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This work presents a robust approach to the recognition of characters in Telugu and Kannada, two South Indian scripts whose characters share structural similarities. Exhaustive datasets are required to recognize the characters, but only a few are publicly available. As a result, we decided to create a dataset for one language (the source language), train the model with it, and then test it on the target language. Telugu is the target language in this work, whereas Kannada is the source language. The suggested method makes use of Canny edge features to increase character-identification accuracy on images with noise and varied lighting. A dataset of 45,150 images containing printed Kannada characters was created: the Nudi software was used to generate printed Kannada characters automatically in different writing styles and variations, and manual labelling was employed to ensure the accuracy of the character labels. Deep learning models, a convolutional neural network (CNN) and a Visual Attention network (VAN), were evaluated on the dataset. A VAN architecture incorporating additional channels for Canny edge features was adopted, as this approach gave the best results. The model's accuracy on the combined Telugu and Kannada test dataset was an outstanding 97.3%. Performance was better with Canny edge features applied than with a model that used only the original grayscale images. When tested on each language separately, the model reached 80.11% accuracy for Telugu characters and 98.01% for Kannada. This model shows excellent accuracy when identifying and categorizing characters from these scripts. 
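The edge-channel augmentation described in this abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: a simple gradient-magnitude edge map stands in for the Canny detector (which would normally come from something like cv2.Canny) to keep the sketch dependency-free.

```python
import numpy as np

def edge_channel(gray: np.ndarray) -> np.ndarray:
    """Gradient-magnitude edge map (a simple stand-in for Canny edges)."""
    gy, gx = np.gradient(gray.astype(float))
    mag = np.hypot(gx, gy)
    return (mag / (mag.max() + 1e-8) * 255).astype(np.uint8)

def add_edge_channel(gray: np.ndarray) -> np.ndarray:
    """Stack the grayscale image with its edge map: (H, W) -> (H, W, 2)."""
    return np.stack([gray, edge_channel(gray)], axis=-1)

# toy 8x8 "character" image with a single vertical stroke
img = np.zeros((8, 8), dtype=np.uint8)
img[2:6, 3] = 255
x = add_edge_channel(img)
print(x.shape)  # (8, 8, 2): grayscale + edge channel, ready for a 2-channel CNN/VAN input
```

A network would then take 2-channel inputs instead of 1-channel grayscale; the rest of the training pipeline is unchanged.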
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=base%20characters" title="base characters">base characters</a>, <a href="https://publications.waset.org/abstracts/search?q=modifiers" title=" modifiers"> modifiers</a>, <a href="https://publications.waset.org/abstracts/search?q=guninthalu" title=" guninthalu"> guninthalu</a>, <a href="https://publications.waset.org/abstracts/search?q=aksharas" title=" aksharas"> aksharas</a>, <a href="https://publications.waset.org/abstracts/search?q=vattakshara" title=" vattakshara"> vattakshara</a>, <a href="https://publications.waset.org/abstracts/search?q=VAN" title=" VAN"> VAN</a> </p> <a href="https://publications.waset.org/abstracts/184438/similar-script-character-recognition-on-kannada-and-telugu" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/184438.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">53</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2836</span> Post Liberal Perspective on Minorities Visibility in Contemporary Visual Culture: The Case of Mizrahi Jews</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Merav%20Alush%20Levron">Merav Alush Levron</a>, <a href="https://publications.waset.org/abstracts/search?q=Sivan%20Rajuan%20Shtang"> Sivan Rajuan Shtang</a> </p> <p class="card-text"><strong>Abstract:</strong></p> From as early as their emergence in Europe and the US, postmodern and post-colonial paradigm have formed the backbone of the visual culture field of study. 
The self-representation project of political minorities is studied, described and explained within the premises and perspectives drawn from these paradigms, addressing the key issues they had raised: modernism’s crisis of representation. The struggle for self-representation, agency and multicultural visibility sought to challenge the liberal pretense of universality and equality, hitting at its different blind spots, on issues such as class, gender, race, sex, and nationality. This struggle yielded subversive identity and hybrid performances, including reclaiming, mimicry and masquerading. These performances sought to defy the uniform, universal self, which forms the basis for the liberal, rational, enlightened subject. The argument of this research runs that this politics of representation itself is confined within liberal thought. Alongside post-colonialism and multiculturalism’s contribution in undermining oppressive structures of power, generating diversity in cultural visibility, and exposing the failure of liberal colorblindness, this subversion is constituted in the visual field by way of confrontation, flying in the face of the universal law and relying on its ongoing comparison and attribution to this law. Relying on Deleuze and Guattari, this research set out to draw theoretic and empiric attention to an alternative, post-liberal occurrence which has been taking place in the visual field in parallel to the contra-hegemonic phase and as a product of political reality in the aftermath of the crisis of representation. It is no longer a counter-representation; rather, it is a motion of organic minor desire, progressing in the form of flows and generating what Deleuze and Guattari termed deterritorialization of social structures. This discussion shall have its focus on current post-liberal performances of ‘Mizrahim’ (Jewish Israelis of Arab and Muslim extraction) in the visual field in Israel. 
In television, video art and photography, these performances challenge the issue of representation and generate concrete peripheral Mizrahiness, realized in the visual organization of the photographic frame. Mizrahiness then transforms from ‘confrontational’ representation into a 'presence', flooding the visual sphere in plain sight, in a process of 'becoming'. The Mizrahi desire is exerted on the planes of sound, spoken language, the body and the space where they appear. It removes from these planes the coding and stratification engendered by European dominance and rational, liberal enlightenment. This stratification, adhering to the hegemonic surface, is flooded not by way of resisting false consciousness or employing hybridity, but by way of the Mizrahi identity's own productive, material, immanent yearning. The Mizrahi desire reverberates with Mizrahi peripheral 'worlds of meaning', where post-colonial interpretation almost invariably identifies a product of internalized oppression, and a recurrence thereof, rather than a source in itself, an ‘offshoot, never a wellspring’, as Nissim Mizrachi clarifies in his recent pioneering work. The peripheral Mizrahi performance ‘unhooks itself’, in Deleuze and Guattari's words, from the point of subjectification and interpretation and does not correspond with the partialness, absence, and split that mark post-colonial identities. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=desire" title="desire">desire</a>, <a href="https://publications.waset.org/abstracts/search?q=minority" title=" minority"> minority</a>, <a href="https://publications.waset.org/abstracts/search?q=Mizrahi%20Jews" title=" Mizrahi Jews"> Mizrahi Jews</a>, <a href="https://publications.waset.org/abstracts/search?q=post-colonialism" title=" post-colonialism"> post-colonialism</a>, <a href="https://publications.waset.org/abstracts/search?q=post-liberalism" title=" post-liberalism"> post-liberalism</a>, <a href="https://publications.waset.org/abstracts/search?q=visibility" title=" visibility"> visibility</a>, <a href="https://publications.waset.org/abstracts/search?q=Deleuze%20and%20Guattari" title=" Deleuze and Guattari"> Deleuze and Guattari</a> </p> <a href="https://publications.waset.org/abstracts/63795/post-liberal-perspective-on-minorities-visibility-in-contemporary-visual-culture-the-case-of-mizrahi-jews" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/63795.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">324</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2835</span> The Impact of Anxiety on the Access to Phonological Representations in Beginning Readers and Writers</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Regis%20Pochon">Regis Pochon</a>, <a href="https://publications.waset.org/abstracts/search?q=Nicolas%20Stefaniak"> Nicolas Stefaniak</a>, <a href="https://publications.waset.org/abstracts/search?q=Veronique%20Baltazart"> Veronique Baltazart</a>, <a 
href="https://publications.waset.org/abstracts/search?q=Pamela%20Gobin"> Pamela Gobin</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Anxiety is known to have an impact on working memory. In reasoning or memory tasks, individuals with anxiety tend to show longer response times and poorer performance. Furthermore, there is a memory bias for negative information in anxiety. Given the crucial role of working memory in lexical learning, anxious students may encounter more difficulties in learning to read and spell. Anxiety could even affect an earlier stage of learning, namely the activation of phonological representations, which is decisive for learning to read and write. The aim of this study is to compare the access to phonological representations of beginning readers and writers according to their level of anxiety, using an auditory lexical decision task. Eighty students aged 6 to 9 completed the French version of the Revised Children's Manifest Anxiety Scale and were then divided into four anxiety groups according to their total score (Low, Median-Low, Median-High and High). Two sets of eighty-one stimuli (words and non-words) were presented auditorily to these students on a laptop computer. Word stimuli were selected according to their emotional valence (positive, negative, neutral). Students had to decide as quickly and accurately as possible whether the presented stimulus was a real word or not (lexical decision). Response times and accuracy were recorded automatically on each trial. 
It was anticipated that there would be: a) longer response times for the Median-High and High anxiety groups in comparison with the two other groups; b) faster response times for negative-valence words in comparison with positive- and neutral-valence words, only in the Median-High and High anxiety groups; c) lower response accuracy for the Median-High and High anxiety groups in comparison with the two other groups; d) better response accuracy for negative-valence words in comparison with positive- and neutral-valence words, only in the Median-High and High anxiety groups. Concerning response times, our results showed no difference between the four groups. Furthermore, within each group, the average response times were very close regardless of emotional valence. However, group differences appeared when considering the error rates. The Median-High and High anxiety groups made significantly more errors in lexical decision than the Median-Low and Low groups. Better response accuracy, however, was not found for negative-valence words in comparison with positive- and neutral-valence words in the Median-High and High anxiety groups. Thus, these results showed lower response accuracy for the above-median anxiety groups than for the below-median groups, but without specificity for negative-valence words. This study suggests that anxiety can negatively impact lexical processing in young students. Although lexical processing speed seems preserved, the accuracy of this processing may be altered in students with a moderate or high level of anxiety. This finding has important implications for the prevention of reading and spelling difficulties. Indeed, if anxiety affects access to phonological representations during these learning processes, anxious students could be disturbed when they have to match phonological representations with new orthographic representations, because of less efficient lexical representations. This study should be continued in order to specify the impact of anxiety on basic school learning. 
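As a sketch of the analysis design above (splitting students into four groups by anxiety score, then comparing per-group error rates), the following uses simulated scores and error counts; the score range and all values are placeholders, not the study's data.

```python
import numpy as np

rng = np.random.default_rng(42)
n_students, n_trials = 80, 81          # 80 students, 81 lexical-decision stimuli per set

anxiety = rng.integers(0, 29, n_students)   # hypothetical RCMAS total scores (range assumed)
errors = rng.integers(0, 20, n_students)    # simulated lexical-decision error counts

# rank students by anxiety score and split into four equal groups of 20
order = np.argsort(anxiety, kind="stable")
group = np.empty(n_students, dtype=int)
group[order] = np.repeat(np.arange(4), n_students // 4)  # 0 = Low ... 3 = High

labels = ["Low", "Median-Low", "Median-High", "High"]
error_rates = {labels[g]: errors[group == g].mean() / n_trials for g in range(4)}
print(error_rates)
```

With real data, the group error rates would then be compared with an appropriate statistical test rather than just inspected.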
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=anxiety" title="anxiety">anxiety</a>, <a href="https://publications.waset.org/abstracts/search?q=emotional%20valence" title=" emotional valence"> emotional valence</a>, <a href="https://publications.waset.org/abstracts/search?q=childhood" title=" childhood"> childhood</a>, <a href="https://publications.waset.org/abstracts/search?q=lexical%20access" title=" lexical access"> lexical access</a> </p> <a href="https://publications.waset.org/abstracts/55653/the-impact-of-anxiety-on-the-access-to-phonological-representations-in-beginning-readers-and-writers" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/55653.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">288</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2834</span> Residential Architecture and Its Representation in Movies: Bangkok&#039;s Spatial Research in the Study of Thai Cinematography</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Janis%20Matvejs">Janis Matvejs</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Visual representation of a city creates unique perspectives that allow one to interpret the urban environment and to understand a space that is culturally created and territorially organized. Residential complexes are an essential part of cities, and cinema is a specific form of representation of these areas. Very little research has explored how these areas are depicted in Thai movies. 
The aim of this research is to interpret the discourse of residential areas of Bangkok throughout the 20th and 21st centuries and to examine essential changes in the residential structure. Specific cinematic formal techniques in relation to the urban image were used. The movie review results were compared with changes in Bangkok’s residential development. Movie analysis displayed that residential areas are frequently used in Thai cinematography and they make up an integral part of the urban visual perception. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=Bangkok" title="Bangkok">Bangkok</a>, <a href="https://publications.waset.org/abstracts/search?q=cinema" title=" cinema"> cinema</a>, <a href="https://publications.waset.org/abstracts/search?q=residential%20area" title=" residential area"> residential area</a>, <a href="https://publications.waset.org/abstracts/search?q=representation" title=" representation"> representation</a>, <a href="https://publications.waset.org/abstracts/search?q=visual%20perception" title=" visual perception"> visual perception</a> </p> <a href="https://publications.waset.org/abstracts/81904/residential-architecture-and-its-representation-in-movies-bangkoks-spatial-research-in-the-study-of-thai-cinematography" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/81904.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">194</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2833</span> Study of Icons in Enterprise Application Software Context </h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Shiva%20Subhedar">Shiva Subhedar</a>, <a 
href="https://publications.waset.org/abstracts/search?q=Abhishek%20Jain"> Abhishek Jain</a>, <a href="https://publications.waset.org/abstracts/search?q=Shivin%20Mittal"> Shivin Mittal</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Icons are not merely decorative elements in enterprise applications; they are very often used for their many advantages, such as compactness and visual appeal. Despite these potential advantages, icons often cause usability problems when they are designed without consideration for their potential downsides. The aim of the current study was to examine the effect of articulatory distance, the distance between the physical appearance of an interface element and what it actually means: in other words, whether subjects find the association between a function and its appearance on the interface natural, or whether the icon is difficult for them to associate with its function. We measured response time and quality of identification while varying icon concreteness, context of usage, and subject experience in the enterprise context. Subjects were asked to associate icons (prepared for the purpose of the study) with given function options in in-context and out-of-context modes. Response times and selections were recorded for analysis. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=HCI" title="HCI">HCI</a>, <a href="https://publications.waset.org/abstracts/search?q=icons" title=" icons"> icons</a>, <a href="https://publications.waset.org/abstracts/search?q=icon%20concreteness" title=" icon concreteness"> icon concreteness</a>, <a href="https://publications.waset.org/abstracts/search?q=icon%20recognition" title=" icon recognition"> icon recognition</a> </p> <a href="https://publications.waset.org/abstracts/54311/study-of-icons-in-enterprise-application-software-context" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/54311.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">258</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2832</span> Identifying Necessary Words for Understanding Academic Articles in English as a Second or a Foreign Language</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Stephen%20Wagman">Stephen Wagman</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This paper identifies three common structures in English sentences that are important for understanding academic texts, regardless of the characteristics or background of the readers or whether they are reading English as a second or a foreign language. Adapting a model from the Humanities, the explication of texts used in literary studies, the paper analyses sample sentences to reveal structures that enable the reader not only to decide which words are necessary for understanding the main ideas but to make the decision without knowing the meaning of the words. 
By their very syntax, noun structures point to the key word for understanding them. As a rule, the key noun is followed by easily identifiable prepositions, relative pronouns, or verbs, and preceded by single adjectives. With few exceptions, the modifiers are unnecessary for understanding the idea of the sentence. In addition, sentences are often structured by lists in which the items frequently consist of parallel groups of words. The principle of a list is that all the items are similar in meaning, so it is not necessary to understand all of the items to understand the point of the list. This principle is especially important when the items are long or there is more than one list in the same sentence. The similarity in meaning of these items enables readers to reduce sentences that are hard to grasp to an understandable core without excessive use of a dictionary. Finally, the idea of subordination, and the identification of the subordinate parts of sentences through connecting words, makes it possible for readers to focus on main ideas without having to sift through the less important and more numerous secondary structures. Sometimes a main idea requires a subordinate one to complete its meaning, but usually subordinate ideas are unnecessary for understanding the main point of the sentence and its part in the development of the argument from sentence to sentence. Moreover, the connecting words themselves indicate the functions of the subordinate structures; these most frequently show similarity and difference, or reasons and results. Recognition of all of these structures can enable students not only to read more efficiently but also to focus their attention on the development of the argument; and it is this, rather than a multitude of unknown vocabulary items, the repetition in lists, or the subordination in sentences, that is the one element necessary for comprehension of academic articles. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=development%20of%20the%20argument" title="development of the argument">development of the argument</a>, <a href="https://publications.waset.org/abstracts/search?q=lists" title=" lists"> lists</a>, <a href="https://publications.waset.org/abstracts/search?q=noun%20structures" title=" noun structures"> noun structures</a>, <a href="https://publications.waset.org/abstracts/search?q=subordination" title=" subordination"> subordination</a> </p> <a href="https://publications.waset.org/abstracts/71236/identifying-necessary-words-for-understanding-academic-articles-in-english-as-a-second-or-a-foreign-language" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/71236.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">246</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2831</span> Online Topic Model for Broadcasting Contents Using Semantic Correlation Information</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Chang-Uk%20Kwak">Chang-Uk Kwak</a>, <a href="https://publications.waset.org/abstracts/search?q=Sun-Joong%20Kim"> Sun-Joong Kim</a>, <a href="https://publications.waset.org/abstracts/search?q=Seong-Bae%20Park"> Seong-Bae Park</a>, <a href="https://publications.waset.org/abstracts/search?q=Sang-Jo%20Lee"> Sang-Jo Lee</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This paper proposes a method of learning topics for broadcasting contents. There are two kinds of texts related to broadcasting contents. One is a broadcasting script which is a series of texts including directions and dialogues. 
The other is blogposts, which possess relatively abstract content, stories, and diverse information about broadcasting contents. Although the two kinds of text cover similar broadcasting content, the words used in blogposts and in broadcasting scripts are different. To improve the quality of topics, a method is needed that takes this word difference into account. In this paper, we introduce a semantic vocabulary expansion method to solve the word difference. We expand the topics of the broadcasting script by incorporating the words in blogposts. Each word in the blogposts is added to the most semantically correlated topics. We use word2vec to obtain the semantic correlation between words in blogposts and topics of scripts. The vocabularies of the topics are updated, and posterior inference is then performed to rearrange the topics. In experiments, we verified that the proposed method can learn more salient topics for broadcasting contents. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=broadcasting%20script%20analysis" title="broadcasting script analysis">broadcasting script analysis</a>, <a href="https://publications.waset.org/abstracts/search?q=topic%20expansion" title=" topic expansion"> topic expansion</a>, <a href="https://publications.waset.org/abstracts/search?q=semantic%20correlation%20analysis" title=" semantic correlation analysis"> semantic correlation analysis</a>, <a href="https://publications.waset.org/abstracts/search?q=word2vec" title=" word2vec"> word2vec</a> </p> <a href="https://publications.waset.org/abstracts/43213/online-topic-model-for-broadcasting-contents-using-semantic-correlation-information" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/43213.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">251</span> </span> </div> </div> <div class="card paper-listing 
mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2830</span> Rathke’s Cleft Cyst Presenting as Unilateral Visual Field Defect</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Ritesh%20Verma">Ritesh Verma</a>, <a href="https://publications.waset.org/abstracts/search?q=Manisha%20Rathi"> Manisha Rathi</a>, <a href="https://publications.waset.org/abstracts/search?q=Chand%20Singh%20Dhull"> Chand Singh Dhull</a>, <a href="https://publications.waset.org/abstracts/search?q=Sumit%20Sachdeva"> Sumit Sachdeva</a>, <a href="https://publications.waset.org/abstracts/search?q=Jitender%20Phogat"> Jitender Phogat</a> </p> <p class="card-text"><strong>Abstract:</strong></p> A Rathke's cleft cyst is a benign growth on the pituitary gland in the brain, specifically a fluid-filled cyst in the posterior portion of the anterior pituitary gland. It occurs when Rathke's pouch does not develop properly, and it ranges in size from 2 to 40 mm in diameter. A 38-year-old male presented to the outpatient department with loss of vision in the inferior quadrant of the left eye for the past 15 days. Visual acuity was 6/6 in the right eye and 6/9 in the left eye. Visual field analysis by HFA 24-2 revealed an inferior field defect extending to the supero-temporal quadrant in the left eye. MRI of the brain and orbit was advised, and it revealed a well-defined cystic pituitary adenoma indenting the left optic nerve near the optic chiasm, consistent with the diagnosis of Rathke's cleft cyst (RCC). The patient was referred to the neurosurgery department for further management. Symptoms vary greatly among individuals with RCCs. RCCs can be non-functioning, functioning, or both. Besides headaches, neurocognitive deficits are almost always present but have a high rate of immediate reversal if the cyst is properly treated or drained. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=pituitary%20tumors" title="pituitary tumors">pituitary tumors</a>, <a href="https://publications.waset.org/abstracts/search?q=rathke%E2%80%99s%20cleft%20cyst" title=" rathke’s cleft cyst"> rathke’s cleft cyst</a>, <a href="https://publications.waset.org/abstracts/search?q=visual%20field%20defects" title=" visual field defects"> visual field defects</a>, <a href="https://publications.waset.org/abstracts/search?q=vision%20loss" title=" vision loss"> vision loss</a> </p> <a href="https://publications.waset.org/abstracts/84547/rathkes-cleft-cyst-presenting-as-unilateral-visual-field-defect" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/84547.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">205</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2829</span> EEG-Based Classification of Psychiatric Disorders: Bipolar Mood Disorder vs. Schizophrenia</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Han-Jeong%20Hwang">Han-Jeong Hwang</a>, <a href="https://publications.waset.org/abstracts/search?q=Jae-Hyun%20Jo"> Jae-Hyun Jo</a>, <a href="https://publications.waset.org/abstracts/search?q=Fatemeh%20Alimardani"> Fatemeh Alimardani</a> </p> <p class="card-text"><strong>Abstract:</strong></p> An accurate diagnosis of psychiatric diseases is a challenging issue, particularly when symptoms of different diseases overlap, such as the delusions that appear in both bipolar mood disorder (BMD) and schizophrenia (SCH). In the present study, we propose a useful way to discriminate between BMD and SCH using electroencephalography (EEG). 
A total of thirty BMD and SCH patients (15 vs. 15) took part in our experiment. EEG signals were measured with nineteen electrodes attached to the scalp according to the international 10-20 system while the patients were exposed to a visual stimulus flickering at 16 Hz for 95 s. The flickering visual stimulus induces a characteristic brain signal, known as the steady-state visual evoked potential (SSVEP), whose amplitude differs between patients with BMD and SCH because each group processes the same visual information in its own way. For classifying BMD and SCH patients, a machine learning approach with leave-one-out cross-validation was employed. The SSVEPs induced at the fundamental (16 Hz) and second-harmonic (32 Hz) stimulation frequencies were extracted using the fast Fourier transform (FFT) and used as features. The most discriminative feature was selected using the Fisher score, and a support vector machine (SVM) was used as the classifier. From the analysis, we obtained a classification accuracy of 83.33%, showing the feasibility of discriminating between patients with BMD and SCH using EEG. We expect that our approach can help psychiatrists diagnose the psychiatric disorders BMD and SCH more accurately. 
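The pipeline this abstract describes (FFT amplitudes at the stimulation frequency and its harmonic as features, Fisher-score feature selection, SVM with leave-one-out cross-validation) can be sketched as follows. This is our illustration on synthetic single-channel data, not the authors' code; the signal parameters and amplitudes are invented.

```python
# Illustrative sketch (not the authors' code) of the described pipeline:
# FFT amplitudes at 16 Hz and its second harmonic (32 Hz) are features,
# the Fisher score picks the most discriminative one, and a linear SVM
# is evaluated with leave-one-out cross-validation. All data are synthetic.
import numpy as np
from sklearn.model_selection import LeaveOneOut, cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(0)
fs, dur = 256, 4.0                      # sampling rate (Hz), epoch length (s)
t = np.arange(0, dur, 1 / fs)

def ssvep_epoch(amp16, amp32):
    """Synthetic single-channel epoch: 16 Hz SSVEP + harmonic + noise."""
    return (amp16 * np.sin(2 * np.pi * 16 * t)
            + amp32 * np.sin(2 * np.pi * 32 * t)
            + rng.normal(0, 1, t.size))

def fft_amplitude(x, freq):
    """Amplitude-spectrum value at `freq` via the FFT."""
    spec = np.abs(np.fft.rfft(x)) / x.size
    freqs = np.fft.rfftfreq(x.size, 1 / fs)
    return spec[np.argmin(np.abs(freqs - freq))]

# 15 "group A" vs. 15 "group B" epochs with different SSVEP amplitudes
epochs = ([ssvep_epoch(3.0, 1.5) for _ in range(15)]
          + [ssvep_epoch(1.0, 0.5) for _ in range(15)])
X = np.array([[fft_amplitude(e, 16), fft_amplitude(e, 32)] for e in epochs])
y = np.array([0] * 15 + [1] * 15)

# Fisher score per feature: (m0 - m1)^2 / (v0 + v1); keep the best feature
m0, m1 = X[y == 0].mean(0), X[y == 1].mean(0)
v0, v1 = X[y == 0].var(0), X[y == 1].var(0)
fisher = (m0 - m1) ** 2 / (v0 + v1)
X_sel = X[:, [np.argmax(fisher)]]

acc = cross_val_score(SVC(kernel="linear"), X_sel, y, cv=LeaveOneOut()).mean()
print(f"LOOCV accuracy: {acc:.2f}")
```

On cleanly separated synthetic amplitudes the accuracy is near-perfect; the 83.33% reported above reflects the much harder real-EEG setting.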
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=bipolar%20mood%20disorder" title="bipolar mood disorder">bipolar mood disorder</a>, <a href="https://publications.waset.org/abstracts/search?q=electroencephalography" title=" electroencephalography"> electroencephalography</a>, <a href="https://publications.waset.org/abstracts/search?q=schizophrenia" title=" schizophrenia"> schizophrenia</a>, <a href="https://publications.waset.org/abstracts/search?q=machine%20learning" title=" machine learning"> machine learning</a> </p> <a href="https://publications.waset.org/abstracts/60167/eeg-based-classification-of-psychiatric-disorders-bipolar-mood-disorder-vs-schizophrenia" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/60167.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">421</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2828</span> The Impact of Scientific Content of National Geographic Channel on Drawing Style of Kindergarten Children</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Ahmed%20Amin%20Mousa">Ahmed Amin Mousa</a>, <a href="https://publications.waset.org/abstracts/search?q=Mona%20Yacoub"> Mona Yacoub</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This study tracks changes in children's drawing style by comparing what they drew before and after being introduced to 16 pieces of visual content from National Geographic Abu Dhabi Channel programs. 
The researchers used the Goodenough-Harris Test to analyse the children's drawings and to extract the features that changed before and after exposure to the visual content. The results showed a positive change, especially in the shapes of animals and their properties: children became more aware of animals&rsquo; shapes. The study sample was 220 kindergarten children, 130 girls and 90 boys, at the Orman Experimental Language School in Dokki, Giza, Egypt. The results showed an 85% improvement in the children's drawings compared with before they watched the videos. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=National%20Geographic" title="National Geographic">National Geographic</a>, <a href="https://publications.waset.org/abstracts/search?q=children%20drawing" title=" children drawing"> children drawing</a>, <a href="https://publications.waset.org/abstracts/search?q=kindergarten" title=" kindergarten"> kindergarten</a>, <a href="https://publications.waset.org/abstracts/search?q=Goodenough-Harris%20Test" title=" Goodenough-Harris Test "> Goodenough-Harris Test </a> </p> <a href="https://publications.waset.org/abstracts/113329/the-impact-of-scientific-content-of-national-geographic-channel-on-drawing-style-of-kindergarten-children" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/113329.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">152</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2827</span> Influence of Auditory Visual Information in Speech Perception in Children with Normal Hearing and Cochlear Implant</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a 
href="https://publications.waset.org/abstracts/search?q=Sachin">Sachin</a>, <a href="https://publications.waset.org/abstracts/search?q=Shantanu%20Arya"> Shantanu Arya</a>, <a href="https://publications.waset.org/abstracts/search?q=Gunjan%20Mehta"> Gunjan Mehta</a>, <a href="https://publications.waset.org/abstracts/search?q=Md.%20Shamim%20Ansari"> Md. Shamim Ansari</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The cross-modal influence of visual information on speech perception is illustrated by the McGurk effect: the illusion of hearing the syllable /ta/ when a listener hears one syllable, e.g. /pa/, while watching a synchronized video recording of another syllable, /ka/. The McGurk effect is an excellent tool for investigating multisensory integration in speech perception in both normal-hearing and hearing-impaired populations. Because the visual cue is unaffected by noise, individuals with hearing impairment rely on visual cues more than normal listeners do. However, when non-congruent visual and auditory cues are processed together, audiovisual interaction seems to occur differently in normal-hearing persons and persons with hearing impairment. Therefore, this study aims to observe audiovisual interaction in speech perception in cochlear implant users and to compare it with that in children with normal hearing. Auditory stimuli were routed through a calibrated clinical audiometer in a sound-field condition, and visual stimuli were presented on a laptop screen placed at a distance of 1 m at 0 degrees azimuth. Out of 4 presentations, if 3 responses were a fusion, the McGurk effect was considered present. The congruent audiovisual stimuli /pa/ /pa/ and /ka/ /ka/ were perceived correctly as ‘‘pa’’ and ‘‘ka,’’ respectively, by both groups. For the non-congruent stimuli /da/ /pa/, 23 of 35 children with normal hearing and 9 of 35 children with cochlear implants had a fusion of sounds, i.e., the McGurk effect was present. 
For the non-congruent stimulus /pa/ /ka/, 25 of 35 children with normal hearing and 8 of 35 children with cochlear implants had a fusion of sounds. The children who had used cochlear implants for less than three years did not exhibit fusion of sounds, i.e., the McGurk effect was absent in this group. To conclude, the results demonstrate that consistent fusion of visual with auditory information for speech perception is shaped by experience with bimodal spoken language during early life. When auditory experience with speech is mediated by a cochlear implant, the likelihood of acquiring bimodal fusion is increased and depends greatly on the age of implantation. These results strongly support the need to screen children's hearing capabilities and to provide cochlear implants and aural rehabilitation as early as possible. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=cochlear%20implant" title="cochlear implant">cochlear implant</a>, <a href="https://publications.waset.org/abstracts/search?q=congruent%20stimuli" title=" congruent stimuli"> congruent stimuli</a>, <a href="https://publications.waset.org/abstracts/search?q=mcgurk%20effect" title=" mcgurk effect"> mcgurk effect</a>, <a href="https://publications.waset.org/abstracts/search?q=non-congruent%20stimuli" title=" non-congruent stimuli"> non-congruent stimuli</a> </p> <a href="https://publications.waset.org/abstracts/52237/influence-of-auditory-visual-information-in-speech-perception-in-children-with-normal-hearing-and-cochlear-implant" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/52237.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">308</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span 
class="badge badge-info">2826</span> A Hebbian Neural Network Model of the Stroop Effect</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Vadim%20Kulikov">Vadim Kulikov</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The classical Stroop effect is the phenomenon that it takes more time to name the ink color of a printed word if the word denotes a conflicting color than if it denotes the same color. Over the last 80 years, many variations of the experiment have revealed various mechanisms behind semantic, attentional, behavioral, and perceptual processing. The Stroop task is known to exhibit asymmetry: reading the words aloud is hardly dependent on the ink color, but naming the ink color is significantly influenced by incongruent words. This asymmetry is reversed if, instead of naming the color, one has to point at a corresponding color patch. Other debated aspects are the notion of automaticity and how much of the effect is due to semantic versus response-stage interference. Is automaticity a continuous or an all-or-none phenomenon? Many models and theories in the literature tackle these questions and will be discussed in the presentation. None of them, however, seems to capture all the findings at once. A computational model is proposed based on the philosophical idea, developed by the author, that the mind operates as a collection of different information-processing modalities, such as sensory and descriptive modalities, which produce emergent phenomena through mutual interaction and coherence. This is the framework theory, where ‘framework’ attempts to generalize the concepts of modality, perspective, and ‘point of view’. The architecture of this computational model consists of blocks of neurons, each block corresponding to one framework. 
In the simplest case there are four: visual color processing, text reading, speech production, and attention selection modalities. In experiments where button pressing or pointing is required, a corresponding block is added. Initially, the weights of the neural connections are mostly set to zero. The network is trained using Hebbian learning to establish connections (corresponding to ‘coherence’ in framework theory) between these different modalities. The amount of data fed into the network is meant to mimic the amount of practice a human encounters; in particular, it is assumed that converting written text into spoken words is a more practiced skill than converting visually perceived colors into spoken color names. After training, the network performs the Stroop task. The RTs are measured in a canonical way, as these are continuous-time recurrent neural networks (CTRNNs). The above-described aspects of the Stroop phenomenon, along with many others, are replicated. The model is similar to some existing connectionist models but, as will be discussed in the presentation, has several advantages: it predicts more data, and the architecture is simpler and biologically more plausible. 
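The core training mechanism named in this abstract, Hebbian learning between modality blocks starting from zero weights, can be illustrated with a minimal sketch. This is not the author's CTRNN model; the block sizes, patterns, and learning rate are invented for demonstration.

```python
# Minimal Hebbian-learning sketch (not the author's CTRNN model): two
# "modality" blocks become connected when their units are co-active,
# mirroring how 'coherence' links frameworks in the abstract.
import numpy as np

rng = np.random.default_rng(1)
n_units, eta = 4, 0.1                   # units per block, learning rate
W = np.zeros((n_units, n_units))        # weights start at zero, as described

# One-hot patterns: "text reading" unit i always co-occurs with
# "speech production" unit i during training
for _ in range(100):
    i = rng.integers(n_units)
    pre = np.eye(n_units)[i]            # activity in the text-reading block
    post = np.eye(n_units)[i]           # activity in the speech block
    W += eta * np.outer(post, pre)      # Hebbian rule: dW = eta * post * pre^T

# Co-active pairs end up strongly connected; never-co-active pairs stay at 0
print(np.round(W, 1))
```

More frequent pattern pairs accumulate larger weights, which is how the model can encode that word reading is more practiced than color naming.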
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=connectionism" title="connectionism">connectionism</a>, <a href="https://publications.waset.org/abstracts/search?q=Hebbian%20learning" title=" Hebbian learning"> Hebbian learning</a>, <a href="https://publications.waset.org/abstracts/search?q=artificial%20neural%20networks" title=" artificial neural networks"> artificial neural networks</a>, <a href="https://publications.waset.org/abstracts/search?q=philosophy%20of%20mind" title=" philosophy of mind"> philosophy of mind</a>, <a href="https://publications.waset.org/abstracts/search?q=Stroop" title=" Stroop"> Stroop</a> </p> <a href="https://publications.waset.org/abstracts/52789/a-hebbian-neural-network-model-of-the-stroop-effect" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/52789.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">267</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2825</span> Life-Long Fitness Promotion, Recreational Opportunities-Social Interaction for the Visual Impaired Learner</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Zasha%20Romero">Zasha Romero</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This poster details a family-oriented event that introduced individuals with visual impairments and individuals with secondary disabilities to social interaction and promoted life-long fitness and recreational skills. Purpose: The poster describes a workshop conducted for individuals with visual impairments, individuals with secondary disabilities, and their families. 
Methods: Families from all over South Texas were invited through schools and various non-profit organizations and came together for a day full of recreational games, in an effort to promote life-long fitness, recreational opportunities, and social interaction. Some of the activities the participants and their families took part in were tennis, dance, swimming, and baseball; all activities were designed to engage learners with visual impairments as well as secondary disabilities. Implications: This workshop was conducted in collaboration with different non-profit institutions to create awareness and provide opportunities for physical fitness, social interaction, and the life-long fitness skills associated with the activities presented. The workshop fostered collaboration among different entities and generated novel ideas for creating opportunities for a typically underserved population. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=engagement" title="engagement">engagement</a>, <a href="https://publications.waset.org/abstracts/search?q=awareness" title=" awareness"> awareness</a>, <a href="https://publications.waset.org/abstracts/search?q=underserved%20population" title=" underserved population"> underserved population</a>, <a href="https://publications.waset.org/abstracts/search?q=inclusion" title=" inclusion"> inclusion</a>, <a href="https://publications.waset.org/abstracts/search?q=collaboration" title=" collaboration"> collaboration</a> </p> <a href="https://publications.waset.org/abstracts/14893/life-long-fitness-promotion-recreational-opportunities-social-interaction-for-the-visual-impaired-learner" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/14893.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">363</span> </span> </div> </div> <div 
class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2824</span> An East-West Trans-Cultural Study: Zen Enlightenment in Asian and John Cage&#039;s Visual Arts</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Yu-Shun%20Elisa%20Pong">Yu-Shun Elisa Pong</a> </p> <p class="card-text"><strong>Abstract:</strong></p> American composer John Cage (1912-1992) is an influential figure in the musical, visual, and performing arts after World War II and has been claimed as a forerunner of the Western avant-garde. However, the crucial factors that contributed to his highly acclaimed achievements include the Zen enlightenment he received mainly from the Japanese Zen master D. T. Suzuki (1870-1966). As a reflection of this Zen inspiration, John Cage created various forms of art, of which the visual works have recently attracted growing attention and discussion, especially from the perspective of Zen. John Cage began creating visual art at age 66 and continued until his death. The quality and quantity of the works are worthy of in-depth study: 667 prints, 114 watercolors, and about 150 sketches. Cage’s stylistic changes over these 14 years of creation are quite obvious, and the Zen elements in the later works seem omnipresent. Based on comparative artistic study, a historical and conceptual view of Zen art as it initially formed in traditional Chinese and Japanese visual arts will be discussed. Then, representative Chinese and Japanese Zen works will be examined, with attention to technique and style. Finally, a comprehensive comparison of the original Oriental Zen works with John Cage’s works will address their influence and artistic transformation. 
Masterpieces from the Zen tradition by Chinese artists such as Liang Kai (d. 1210) and Ma Yuan (1160-1225) of the Southern Sung Dynasty, and by Japanese artists such as Sesshū (1420-1506) and Miyamoto Musashi (1584-1645), among others, will be discussed. In the current study, these artworks from different periods of Zen's historical development serve as the basis of analogy, interpretation, and criticism of Cage's visual art works. Through the perspective of Zen authenticity from Asia, we see how John Cage appropriated Eastern culture for his innovations, which changed the art world forever. It is believed that through a transition from inter- and cross- toward trans-cultural inspiration, John Cage established a unique pathway of artistic innovation. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=John%20Cage" title="John Cage">John Cage</a>, <a href="https://publications.waset.org/abstracts/search?q=Chinese%20Zen%20art" title=" Chinese Zen art"> Chinese Zen art</a>, <a href="https://publications.waset.org/abstracts/search?q=Japanese%20Zen%20art" title=" Japanese Zen art"> Japanese Zen art</a>, <a href="https://publications.waset.org/abstracts/search?q=visual%20art" title=" visual art"> visual art</a> </p> <a href="https://publications.waset.org/abstracts/78742/an-east-west-trans-cultural-study-zen-enlightenment-in-asian-and-john-cages-visual-arts" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/78742.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">524</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2823</span> Association of Sensory Processing and Cognitive Deficits in Children with Autism Spectrum Disorders – Pioneer Study in Saudi Arabia</h5> 
<div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Rana%20Zeina">Rana Zeina</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Objective: The association between sensory problems and cognitive abilities has been studied in individuals with autism spectrum disorders (ASDs). In this study, we used a neuropsychological test battery to evaluate memory and attention in children with ASDs and sensory problems compared with children with ASDs without sensory problems. Methods: Four visual memory tests of the Cambridge Neuropsychological Test Automated Battery (CANTAB), including Big/Little Circle (BLC), Simple Reaction Time (SRT), Intra/Extra Dimensional Set Shift (IED), and Spatial Recognition Memory (SRM), were administered to 14 children with ASDs and sensory problems and 13 children with ASDs without sensory problems, aged 3 to 12, with IQs above 70. Results: The individuals with ASDs and sensory problems performed worse than the group without sensory problems on the comprehension, learning, reversal, and simple reaction time tasks; no significant difference between the two groups was recorded on the visual memory and visual comprehension tasks. Conclusion: The findings of this study suggest that children with ASDs and sensory problems face deficits in learning, comprehension, reversal, and speed of response to stimuli. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=visual%20memory" title="visual memory">visual memory</a>, <a href="https://publications.waset.org/abstracts/search?q=attention" title=" attention"> attention</a>, <a href="https://publications.waset.org/abstracts/search?q=autism%20spectrum%20disorders" title=" autism spectrum disorders"> autism spectrum disorders</a>, <a href="https://publications.waset.org/abstracts/search?q=CANTAB%20eclipse" title=" CANTAB eclipse"> CANTAB eclipse</a> </p> <a href="https://publications.waset.org/abstracts/6386/association-of-sensory-processing-and-cognitive-deficits-in-children-with-autism-spectrum-disorders-pioneer-study-in-saudi-arabia" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/6386.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">451</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2822</span> Masquerade and “What Comes Behind Six Is More Than Seven”: Thoughts on Art History and Visual Culture Research Methods</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Osa%20D%20Egonwa">Osa D Egonwa</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In the 21<sup>st</sup> century, the disciplinary boundaries of past centuries that we often create through mainstream art historical classification, techniques and sources may have been eroded by visual culture, which seems to provide a more inclusive umbrella for the new ways artists go about the creative process and its resultant commodities. 
Over the past four decades, artists in Africa have turned to new materials, techniques, and themes, which has affected how we research these artists and their art. Frontline artists such as El Anatsui, Yinka Shonibare, and Erasmus Onyishi are demonstrating that any material is suitable for artistic expression. Most of the time, these materials come with their own techniques/effects and visual syntax: a combination of materials compounds techniques, formal aesthetic indexes, halo effects, and iconography. This tends to challenge the categories we lean on to view, think, and talk about them, rendering our mainstream art historical research methods inadequate and suggesting new discursive concepts, terms, and theories. This paper proposes Africanist eclectic methods derived from the dual framework of Masquerade Theory and What Comes Behind Six Is More Than Seven. This paper shares thoughts and research on art historical methods, terminological re-alignments in classification/source data, presentational format, and interpretation arising from the emergent trends in our subject. The outcome provides useful tools to mediate new thoughts and experiences in recent African art and visual culture. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=art%20historical%20methods" title="art historical methods">art historical methods</a>, <a href="https://publications.waset.org/abstracts/search?q=classifications" title=" classifications"> classifications</a>, <a href="https://publications.waset.org/abstracts/search?q=concepts" title=" concepts"> concepts</a>, <a href="https://publications.waset.org/abstracts/search?q=re-alignment" title=" re-alignment"> re-alignment</a> </p> <a href="https://publications.waset.org/abstracts/107114/masquerade-and-what-comes-behind-six-is-more-than-seven-thoughts-on-art-history-and-visual-culture-research-methods" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/107114.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">110</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2821</span> &quot;If It Bleeds It Leads” the Visual Witnessing Trauma Phenomenon among Journalists: An Analysis of Various Media Images from East Africa</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Lydia%20Ouma%20Radoli">Lydia Ouma Radoli</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The paradox of documenting history through visuals that objectify gruesome images to signal the prominence of stories intrigues media researchers. In East Africa, the topic has been captured in a variety of media frames but only scantily in scholarly work. This paper adopts Visual Rhetoric and Framing Theories to tease out the drivers behind the criteria for selecting violent visuals. 
The paper projects that quantitative and qualitative literature regarding journalists’ personal and work-related exposure to PTSD will give insights into the concept of trauma journalism: the reporting of horrific events, e.g., violent crime and terror. The data will be collected through methods such as document analysis (photographs and videos) and in-depth interviews, summarizing the informational contents with respect to the research objectives and questions. The study is hinged on the background that the criterion for news production is constructed from the idea that if there is violence, conflict, and death involved, the story gets top priority. To capture the goriness of such images, media theorists and sociologists have used the expression "If it bleeds, it leads," which assumes that audiences are attracted to violent images; research on the visual aspects of television news has likewise shown its ability to hold viewers’ attention and provoke aggression. The anticipated outcome is to establish the trauma experiences of visual rhetors, suggest mitigations, and address gaps in academic research, sustaining the critical role of visual rhetors; media practitioners may also find the study useful in assessing the effects and values of visual witnessing. This paper samples images and narratives from journalists who have covered trauma-related events. The samples are indicative of the problem under study: journalists exposed to traumatic events do not receive psycho-social support within newsrooms. It is hoped that the study can inform policy and practice within developing countries through theoretical and empirical explanations of the existing trauma phenomena among journalists. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=visual-witnessing" title="visual-witnessing">visual-witnessing</a>, <a href="https://publications.waset.org/abstracts/search?q=media%20culture" title=" media culture"> media culture</a>, <a href="https://publications.waset.org/abstracts/search?q=visual%20rhetoric" title=" visual rhetoric"> visual rhetoric</a>, <a href="https://publications.waset.org/abstracts/search?q=imaging%20violence%20in%20East%20Africa" title=" imaging violence in East Africa"> imaging violence in East Africa</a> </p> <a href="https://publications.waset.org/abstracts/161378/if-it-bleeds-it-leads-the-visual-witnessing-trauma-phenomenon-among-journalists-an-analysis-of-various-media-images-from-east-africa" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/161378.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">121</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2820</span> Fashion through Senses: A Study of the Impact of Sensory Cues on the Consumption of Fashion Accessories by Female Shoppers</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Vaishali%20Joshi">Vaishali Joshi</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Purpose: A literature gap exists on the concept of sensory marketing elements, such as tactile elements, auditory elements, visual elements, and olfactory elements, studied together in the context of retailing. An investigation is required to study the impact of these sensory cues together on consumer behaviour. 
So, this study examines the impact of sensory marketing in fashion accessories stores on female shoppers’ purchasing activities. The present research highlights the role of sensory cues, such as tactile, visual, auditory, and olfactory cues, on shoppers’ emotional states and purchase intention. Design/methodology/approach: The emotional states and purchase intention of female shoppers influenced by the visual, tactile, olfactory, and auditory cues present in fashion accessories stores were measured. The mall intercept technique was used for data collection. Data analysis was done through Structural Equation Modelling. Research limitations/implications: The restricted geographical range and limited sample size substantially limit the generalizability of the study’s outcome. Moreover, the sample comprised female respondents only. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=sensory%20marketing" title="sensory marketing">sensory marketing</a>, <a href="https://publications.waset.org/abstracts/search?q=visual%20cues" title=" visual cues"> visual cues</a>, <a href="https://publications.waset.org/abstracts/search?q=olfactory%20cues" title=" olfactory cues"> olfactory cues</a>, <a href="https://publications.waset.org/abstracts/search?q=tactile%20cues" title=" tactile cues"> tactile cues</a>, <a href="https://publications.waset.org/abstracts/search?q=auditory%20cues" title=" auditory cues"> auditory cues</a> </p> <a href="https://publications.waset.org/abstracts/174064/fashion-through-senses-a-study-of-the-impact-of-sensory-cues-on-the-consumption-of-fashion-accessories-by-female-shoppers" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/174064.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge 
badge-light">86</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2819</span> An Investigation on Smartphone-Based Machine Vision System for Inspection</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=They%20Shao%20Peng">They Shao Peng</a> </p> <p class="card-text"><strong>Abstract:</strong></p> A machine vision system for inspection is an automated technology normally utilized to analyze items on the production line for quality control purposes; it is also known as an automated visual inspection (AVI) system. By applying automated visual inspection, the existence of items, defects, contaminants, flaws, and other irregularities in manufactured products can be detected quickly and accurately. However, AVI systems are still inflexible and expensive because they are built for a specific task and consume considerable set-up time and space. With the rapid development of mobile devices, smartphones can be an alternative device for the visual system to solve the existing problems of AVI. Since the smartphone-based AVI system is still at a nascent stage, this motivated the present investigation. This study aims to provide a low-cost AVI system with high efficiency and flexibility. In this project, two object detection models, the You Only Look Once (YOLO) model and the Single Shot MultiBox Detector (SSD) model, are trained, evaluated, and integrated with smartphone and webcam devices. The performance of the smartphone-based AVI is compared with that of the webcam-based AVI in terms of precision and inference time. Additionally, a mobile application is developed that allows users to perform real-time object detection as well as object detection on stored images. 
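The comparison the abstract describes, precision and inference time of detectors on different devices, can be sketched as a small benchmarking harness. The sketch below is illustrative only: `evaluate_detector`, the toy image dicts, and the stand-in detector are hypothetical, not part of the study; a real run would plug a trained YOLO or SSD model into the `detect` callable.

```python
import time

def evaluate_detector(detect, images, labels):
    """Measure precision and mean inference time for a detector.

    `detect` is any callable image -> list of predicted class labels;
    `labels` holds the ground-truth label set for each image.
    """
    true_pos = false_pos = 0
    times = []
    for image, truth in zip(images, labels):
        start = time.perf_counter()
        predictions = detect(image)
        times.append(time.perf_counter() - start)
        for p in predictions:
            if p in truth:
                true_pos += 1
            else:
                false_pos += 1
    total = true_pos + false_pos
    precision = true_pos / total if total else 0.0
    return precision, sum(times) / len(times)

# Toy stand-in detector: "detects" whatever the image dict lists,
# plus one spurious "scratch" on single-object images.
images = [{"objects": ["screw", "defect"]}, {"objects": ["screw"]}]
labels = [{"screw", "defect"}, {"screw", "defect"}]
detector = lambda img: img["objects"] + (["scratch"] if len(img["objects"]) == 1 else [])
precision, mean_time = evaluate_detector(detector, images, labels)
```

Precision here is the usual TP / (TP + FP) over all predicted boxes' labels; the same harness, run once per device, yields the precision-versus-inference-time comparison the study reports.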
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=automated%20visual%20inspection" title="automated visual inspection">automated visual inspection</a>, <a href="https://publications.waset.org/abstracts/search?q=deep%20learning" title=" deep learning"> deep learning</a>, <a href="https://publications.waset.org/abstracts/search?q=machine%20vision" title=" machine vision"> machine vision</a>, <a href="https://publications.waset.org/abstracts/search?q=mobile%20application" title=" mobile application"> mobile application</a> </p> <a href="https://publications.waset.org/abstracts/151908/an-investigation-on-smartphone-based-machine-vision-system-for-inspection" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/151908.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">124</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2818</span> Illumina MiSeq Sequencing for Bacteria Identification on Audio-Visual Materials</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Tereza%20Brany%C5%A1ov%C3%A1">Tereza Branyšová</a>, <a href="https://publications.waset.org/abstracts/search?q=Martina%20Kra%C4%8Dmarov%C3%A1"> Martina Kračmarová</a>, <a href="https://publications.waset.org/abstracts/search?q=Kate%C5%99ina%20Demnerov%C3%A1"> Kateřina Demnerová</a>, <a href="https://publications.waset.org/abstracts/search?q=Michal%20%C4%8Eurovi%C4%8D"> Michal Ďurovič</a>, <a href="https://publications.waset.org/abstracts/search?q=Hana%20Stiborov%C3%A1"> Hana Stiborová</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Microbial deterioration threatens all objects of cultural heritage, including 
audio-visual materials. Fungi are commonly known to be the main factor in audio-visual material deterioration; however, although often neglected, bacteria also play a significant role. In addition to the microbial contamination of the materials themselves, it is also essential to analyse the air as a possible contamination source. This work aims to identify the bacterial species in the archives of the Czech Republic that occur on audio-visual materials as well as in the air in the archives. For sampling purposes, smears from the materials were taken with sterile polyurethane sponges, and the air was collected using a MAS-100 aeroscope. Metagenomic DNA from all collected samples was immediately isolated and stored at -20 °C. A DNA library for the 16S rRNA gene was prepared using two-step PCR with specific primers, and a concentration step was included owing to the meagre DNA yields. After that, the samples were sent to the University of Alaska Fairbanks for Illumina MiSeq sequencing. Subsequently, the analysis of the sequences was conducted in the R software; the obtained sequences were assigned to the corresponding bacterial species using the DADA2 package. The impact of air contamination, and of the different photosensitive layers that the audio-visual materials were made of, such as gelatine, albumen, and collodion, was evaluated. As a next step, we will take a deeper focus on air contamination: we will select an appropriate culture-dependent approach along with a culture-independent approach to observe the metabolically active species in the air. Acknowledgment: This project is supported by grant no. DG18P02OVV062 of the Ministry of Culture of the Czech Republic. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=cultural%20heritage" title="cultural heritage">cultural heritage</a>, <a href="https://publications.waset.org/abstracts/search?q=Illumina%20MiSeq" title=" Illumina MiSeq"> Illumina MiSeq</a>, <a href="https://publications.waset.org/abstracts/search?q=metagenomics" title=" metagenomics"> metagenomics</a>, <a href="https://publications.waset.org/abstracts/search?q=microbial%20identification" title=" microbial identification"> microbial identification</a> </p> <a href="https://publications.waset.org/abstracts/136677/illumina-miseq-sequencing-for-bacteria-identification-on-audio-visual-materials" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/136677.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">156</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2817</span> Improving the Performance of Deep Learning in Facial Emotion Recognition with Image Sharpening</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Ksheeraj%20Sai%20Vepuri">Ksheeraj Sai Vepuri</a>, <a href="https://publications.waset.org/abstracts/search?q=Nada%20Attar"> Nada Attar</a> </p> <p class="card-text"><strong>Abstract:</strong></p> We as humans use words with accompanying visual and facial cues to communicate effectively. Classifying facial emotion using computer vision methodologies has been an active research area in the computer vision field. In this paper, we propose a simple method for facial expression recognition that enhances accuracy. We tested our method on the FER-2013 dataset that contains static images. 
Instead of using histogram equalization to preprocess the dataset, we used an unsharp mask to emphasize texture and detail and to sharpen the edges. We also used ImageDataGenerator from the Keras library for data augmentation. Then we used a convolutional neural network (CNN) model to classify the images into 7 different facial expressions, yielding an accuracy of 69.46% on the test set. Our results show that using image preprocessing such as the sharpening technique for a CNN model can improve performance, even when the CNN model is relatively simple. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=facial%20expression%20recognittion" title="facial expression recognition">facial expression recognition</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20preprocessing" title=" image preprocessing"> image preprocessing</a>, <a href="https://publications.waset.org/abstracts/search?q=deep%20learning" title=" deep learning"> deep learning</a>, <a href="https://publications.waset.org/abstracts/search?q=CNN" title=" CNN"> CNN</a> </p> <a href="https://publications.waset.org/abstracts/130679/improving-the-performance-of-deep-learning-in-facial-emotion-recognition-with-image-sharpening" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/130679.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">143</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2816</span> The Incidental Linguistic Information Processing and Its Relation to General Intellectual Abilities</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Evgeniya%20V.%20Gavrilova">Evgeniya V. 
Gavrilova</a>, <a href="https://publications.waset.org/abstracts/search?q=Sofya%20S.%20Belova"> Sofya S. Belova</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The present study was aimed at clarifying the relationship between general intellectual abilities and efficiency in free recall and rhymed word generation tasks after incidental exposure to linguistic stimuli. Theoretical frameworks stress that general intellectual abilities are based on intentional mental strategies. In this context, it seems crucial to examine the efficiency of processing incidentally presented information in a cognitive task and its relation to general intellectual abilities. The sample consisted of 32 Russian students. Participants were exposed to pairs of words; each pair consisted of two common nouns or two city names. Participants had to decide whether a city name was presented in each pair. Thus, the words’ semantics were processed intentionally. The city names were considered to be focal stimuli, whereas the common nouns were considered to be peripheral stimuli. In addition, each pair of words could be rhymed or unrhymed, but this phonemic characteristic of the stimuli was processed incidentally. Then participants were asked to produce as many rhymes as they could for new words; the stimuli presented earlier could be used as well. After that, participants had to retrieve all the words presented earlier. In the end, verbal and non-verbal abilities were measured with a number of psychometric tests. As for the free recall task, the intentionally processed focal stimuli had an advantage in recall compared to the peripheral stimuli. In addition, all the rhymed stimuli were recalled more effectively than the non-rhymed ones. The inverse effect was found in the word generation task, where participants tended to use mainly peripheral stimuli compared to focal ones. 
Furthermore, peripheral rhymed stimuli were the most frequently used category of stimuli in this task. Thus the information that was processed incidentally had a supplemental influence on the efficiency of stimuli processing in the free recall task as well as in the word generation task. Different patterns of correlations between intellectual abilities and the efficiency of processing the different stimuli in both tasks were revealed. Non-verbal reasoning ability correlated positively with the free recall of peripheral rhymed stimuli but was not related to performance on the rhymed word generation task. Verbal reasoning ability correlated positively with the free recall of focal stimuli. As for the rhymed word generation task, verbal intelligence correlated negatively with the generation of focal stimuli and positively with the generation of all peripheral stimuli. The present findings lead to two key conclusions. First, incidentally processed stimuli had an advantage in the free recall and word generation tasks; thus incidental information processing appeared to be crucial for subsequent cognitive performance. Secondly, it was demonstrated that incidentally processed stimuli were recalled more frequently by participants with high non-verbal reasoning ability and were used more effectively by participants with high verbal reasoning ability in subsequent cognitive tasks. This implies that general intellectual abilities could benefit from operating on different levels of information processing during cognitive problem solving. This research was supported by the “Grant of the President of RF for young PhD scientists” (contract № 14.Z56.17.2980-MK) and Grant № 15-36-01348a2 of the Russian Foundation for Humanities. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=focal%20and%20peripheral%20stimuli" title="focal and peripheral stimuli">focal and peripheral stimuli</a>, <a href="https://publications.waset.org/abstracts/search?q=general%20intellectual%20abilities" title=" general intellectual abilities"> general intellectual abilities</a>, <a href="https://publications.waset.org/abstracts/search?q=incidental%20information%20processing" title=" incidental information processing"> incidental information processing</a> </p> <a href="https://publications.waset.org/abstracts/70096/the-incidental-linguistic-information-processing-and-its-relation-to-general-intellectual-abilities" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/70096.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">231</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2815</span> Generating Real-Time Visual Summaries from Located Sensor-Based Data with Chorems </h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Z.%20Bouattou">Z. Bouattou</a>, <a href="https://publications.waset.org/abstracts/search?q=R.%20Laurini"> R. Laurini</a>, <a href="https://publications.waset.org/abstracts/search?q=H.%20Belbachir"> H. Belbachir</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This paper describes a new approach for the automatic generation of visual summaries, combining cartographic visualization methods with real-time sensor data modeling. Hence, the concept of chorems seems an interesting candidate for visualizing real-time geographic database summaries. 
Chorems have been defined by Roger Brunet (1980) as schematized visual representations of territories. However, time information is not yet handled in existing chorematic map approaches, an issue discussed in this paper. Our approach is based on spatial analysis: by interpolating the values recorded at the same time by the available sensors, we obtain a number of distributed observations over the study areas and use spatial interpolation methods to find the concentration fields. From these fields, by applying spatial data mining procedures on the fly, it is possible to extract important patterns as geographic rules. Then, those patterns are visualized as chorems. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=geovisualization" title="geovisualization">geovisualization</a>, <a href="https://publications.waset.org/abstracts/search?q=spatial%20analytics" title=" spatial analytics"> spatial analytics</a>, <a href="https://publications.waset.org/abstracts/search?q=real-time" title=" real-time"> real-time</a>, <a href="https://publications.waset.org/abstracts/search?q=geographic%20data%20streams" title=" geographic data streams"> geographic data streams</a>, <a href="https://publications.waset.org/abstracts/search?q=sensors" title=" sensors"> sensors</a>, <a href="https://publications.waset.org/abstracts/search?q=chorems" title=" chorems"> chorems</a> </p> <a href="https://publications.waset.org/abstracts/30697/generating-real-time-visual-summaries-from-located-sensor-based-data-with-chorems" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/30697.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">401</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge 
badge-info">2814</span> Odor-Color Association Stroop-Task and the Importance of an Odorant in an Odor-Imagery Task </h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Jonathan%20Ham">Jonathan Ham</a>, <a href="https://publications.waset.org/abstracts/search?q=Christopher%20Koch"> Christopher Koch</a> </p> <p class="card-text"><strong>Abstract:</strong></p> There are consistently observed associations between certain odors and colors, and there is an association between the ability to imagine vivid visual objects and the ability to imagine vivid odors. However, little has been done to investigate how the associations between odors and visual information affect visual processes. This study seeks to understand the relationship between odor imaging, color associations, and visual attention by utilizing a Stroop task based on common odor-color associations. This Stroop task was designed using three fruits with distinct odors that are associated with the color of the fruit: lime with green, strawberry with red, and lemon with yellow. Each possible word-color combination was presented in the experimental trials. When the word matched the associated color (lime written in green), it was considered congruent; if it did not, it was considered incongruent (lime written in red or yellow). In experiment I (n = 34), participants were asked both to imagine the odor of the fruit on the screen and to identify which fruit it was, and each word-color combination was presented 20 times (a total of 180 trials, with 60 congruent and 120 incongruent instances). Response time and error rate of the participant responses were recorded. There was no significant difference in either measure between the congruent and incongruent trials. In experiment II, participants (n = 18) followed the identical procedure as in the previous experiment with the addition of an odorant in the room. 
The odorant (orange) was not the fruit or color used in the experimental trials. With a fruit-based odorant in the room, the response times (measured in milliseconds) between congruent and incongruent trials were significantly different, with incongruent trials (M = 755.919, SD = 239.854) having significantly longer response times than congruent trials (M = 690.626, SD = 198.822), t (1, 17) = 4.154, p < 0.01. This suggests that odor imagery does affect visual attention to colors, and the ability to inhibit odor-color associations; however, odor imagery is difficult and appears to be facilitated in the presence of a related odorant. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=odor-color%20associations" title="odor-color associations">odor-color associations</a>, <a href="https://publications.waset.org/abstracts/search?q=odor%20imagery" title=" odor imagery"> odor imagery</a>, <a href="https://publications.waset.org/abstracts/search?q=visual%20attention" title=" visual attention"> visual attention</a>, <a href="https://publications.waset.org/abstracts/search?q=inhibition" title=" inhibition"> inhibition</a> </p> <a href="https://publications.waset.org/abstracts/102195/odor-color-association-stroop-task-and-the-importance-of-an-odorant-in-an-odor-imagery-task" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/102195.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">176</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2813</span> Image Multi-Feature Analysis by Principal Component Analysis for Visual Surface Roughness Measurement</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a 
href="https://publications.waset.org/abstracts/search?q=Wei%20Zhang">Wei Zhang</a>, <a href="https://publications.waset.org/abstracts/search?q=Yan%20He"> Yan He</a>, <a href="https://publications.waset.org/abstracts/search?q=Yan%20Wang"> Yan Wang</a>, <a href="https://publications.waset.org/abstracts/search?q=Yufeng%20Li"> Yufeng Li</a>, <a href="https://publications.waset.org/abstracts/search?q=Chuanpeng%20Hao"> Chuanpeng Hao</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Surface roughness is an important index for evaluating surface quality and needs to be accurately measured to ensure the performance of the workpiece. Roughness measurement based on machine vision involves various image features, some of which are redundant. These redundant features affect the accuracy and speed of the visual approach. Previous research used correlation analysis methods to select the appropriate features. However, such feature analysis treats each feature independently and cannot fully utilize the information in the data. Besides, blindly reducing features loses a lot of useful information, resulting in unreliable results. Therefore, the focus of this paper is on providing a redundant-feature removal approach for visual roughness measurement. In this paper, statistical methods and the gray-level co-occurrence matrix (GLCM) are employed to effectively extract the texture features of machined images. Then, principal component analysis (PCA) is used to fuse all extracted features into a new one, which reduces the feature dimension and maintains the integrity of the original information. Finally, the relationship between the new features and roughness is established by a support vector machine (SVM). The experimental results show that the approach can effectively solve the multi-feature information redundancy of machined surface images and provides a new idea for the visual evaluation of surface roughness. 
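As a rough illustration of the feature-extraction and fusion steps described above, the sketch below computes a normalized GLCM, derives three common texture statistics from it, and fuses the resulting feature vectors with a numpy-only PCA. All function names and the synthetic patches are illustrative assumptions, not the paper's pipeline, which uses a richer statistical feature set and feeds the fused feature into an SVM.

```python
import numpy as np

def glcm(image, levels=8, offset=(0, 1)):
    """Normalized gray-level co-occurrence matrix for one pixel offset."""
    dr, dc = offset
    m = np.zeros((levels, levels))
    rows, cols = image.shape
    for r in range(rows - dr):
        for c in range(cols - dc):
            m[image[r, c], image[r + dr, c + dc]] += 1
    return m / m.sum()

def texture_features(p):
    """Contrast, energy, and homogeneity of a normalized GLCM `p`."""
    i, j = np.indices(p.shape)
    contrast = np.sum(p * (i - j) ** 2)
    energy = np.sum(p ** 2)
    homogeneity = np.sum(p / (1.0 + np.abs(i - j)))
    return np.array([contrast, energy, homogeneity])

def pca_fuse(features, n_components=1):
    """Project standardized feature vectors onto the leading principal components."""
    x = features - features.mean(axis=0)
    std = x.std(axis=0)
    x = x / np.where(std == 0, 1, std)       # guard constant columns
    _, _, vt = np.linalg.svd(x, full_matrices=False)
    return x @ vt[:n_components].T

rng = np.random.default_rng(0)
# Synthetic "machined surface" patches quantized to 8 gray levels.
patches = [rng.integers(0, 8, size=(32, 32)) for _ in range(5)]
feats = np.vstack([texture_features(glcm(p)) for p in patches])
fused = pca_fuse(feats)                      # one fused feature per patch
```

The fused one-dimensional feature is what a regressor such as an SVM would then map to a roughness value.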
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=feature%20analysis" title="feature analysis">feature analysis</a>, <a href="https://publications.waset.org/abstracts/search?q=machine%20vision" title=" machine vision"> machine vision</a>, <a href="https://publications.waset.org/abstracts/search?q=PCA" title=" PCA"> PCA</a>, <a href="https://publications.waset.org/abstracts/search?q=surface%20roughness" title=" surface roughness"> surface roughness</a>, <a href="https://publications.waset.org/abstracts/search?q=SVM" title=" SVM"> SVM</a> </p> <a href="https://publications.waset.org/abstracts/138525/image-multi-feature-analysis-by-principal-component-analysis-for-visual-surface-roughness-measurement" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/138525.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">212</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2812</span> Memory Based Reinforcement Learning with Transformers for Long Horizon Timescales and Continuous Action Spaces</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Shweta%20Singh">Shweta Singh</a>, <a href="https://publications.waset.org/abstracts/search?q=Sudaman%20Katti"> Sudaman Katti</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The most well-known sequence models make use of complex recurrent neural networks in an encoder-decoder configuration. The model used in this research makes use of a transformer, which is based purely on a self-attention mechanism, without relying on recurrence at all. 
More specifically, encoders and decoders that make use of self-attention and operate on a memory are used. In this research work, results for various 3D visual and non-visual reinforcement learning tasks designed in the Unity software were obtained. Convolutional neural networks, more specifically the Nature CNN architecture, are used for input processing in visual tasks, and a comparison with the standard long short-term memory (LSTM) architecture is performed for both visual tasks based on CNNs and non-visual tasks based on coordinate inputs. This research work combines the transformer architecture with the proximal policy optimization technique, used popularly in reinforcement learning for stability and better policy updates while training, especially for the continuous action spaces employed in this work. Certain tasks in this paper are long-horizon tasks that carry on for a longer duration and require extensive use of memory-based functionalities such as the storage of experiences and the choice of appropriate actions based on recall. The transformer, which makes use of memory and a self-attention mechanism in an encoder-decoder configuration, proved to have better performance than the LSTM in terms of exploration and rewards achieved. Such memory-based architectures can be used extensively in the fields of cognitive robotics and reinforcement learning. 
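The self-attention step at the heart of the transformer described above reduces to a few lines of numpy. This is a minimal sketch of scaled dot-product self-attention over a buffer of stored experiences; the dimensions, weight matrices, and the `memory` buffer are illustrative assumptions, not the study's architecture, which adds multi-head attention, a full encoder-decoder stack, and PPO training.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(x, wq, wk, wv):
    """Scaled dot-product self-attention over a sequence of memory slots.

    x: (seq_len, d_model), e.g. a buffer of past observations the agent
    attends over when choosing an action.
    """
    q, k, v = x @ wq, x @ wk, x @ wv
    d_k = q.shape[-1]
    weights = softmax(q @ k.T / np.sqrt(d_k))  # (seq_len, seq_len) attention
    return weights @ v, weights

rng = np.random.default_rng(42)
d_model, seq_len, d_k = 16, 10, 8
memory = rng.normal(size=(seq_len, d_model))          # stored experiences
wq, wk, wv = (rng.normal(size=(d_model, d_k)) for _ in range(3))
out, weights = self_attention(memory, wq, wk, wv)
```

Each output row is a weighted recall over the whole memory, which is exactly the "choose actions based on recall" functionality the abstract attributes to the transformer.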
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=convolutional%20neural%20networks" title="convolutional neural networks">convolutional neural networks</a>, <a href="https://publications.waset.org/abstracts/search?q=reinforcement%20learning" title=" reinforcement learning"> reinforcement learning</a>, <a href="https://publications.waset.org/abstracts/search?q=self-attention" title=" self-attention"> self-attention</a>, <a href="https://publications.waset.org/abstracts/search?q=transformers" title=" transformers"> transformers</a>, <a href="https://publications.waset.org/abstracts/search?q=unity" title=" unity"> unity</a> </p> <a href="https://publications.waset.org/abstracts/163301/memory-based-reinforcement-learning-with-transformers-for-long-horizon-timescales-and-continuous-action-spaces" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/163301.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">136</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2811</span> Challenging Weak Central Coherence: An Exploration of Neurological Evidence from Visual Processing and Linguistic Studies in Autism Spectrum Disorder</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Jessica%20Scher%20Lisa">Jessica Scher Lisa</a>, <a href="https://publications.waset.org/abstracts/search?q=Eric%20Shyman"> Eric Shyman</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Autism spectrum disorder (ASD) is a neuro-developmental disorder that is characterized by persistent deficits in social communication and social interaction (i.e. 
deficits in social-emotional reciprocity, nonverbal communicative behaviors, and establishing/maintaining social relationships), as well as by the presence of repetitive behaviors and perseverative areas of interest (i.e., stereotyped or repetitive motor movements, use of objects, or speech; rigidity; restricted interests; and hypo- or hyperreactivity to sensory input or unusual interest in sensory aspects of the environment). Additionally, diagnoses of ASD require the presentation of symptoms in the early developmental period, marked impairments in adaptive functioning, and a lack of explanation by general intellectual impairment or global developmental delay (although these conditions may be co-occurring). Over the past several decades, many theories have been developed in an effort to explain the root cause of ASD in terms of atypical central cognitive processes. The field of neuroscience is increasingly finding structural and functional differences between autistic and neurotypical individuals using neuro-imaging technology. One main area this research has focused upon is visuospatial processing, with specific attention to the notion of ‘weak central coherence’ (WCC). This paper offers an analysis of findings from selected studies in order to explore research that challenges the ‘deficit’ characterization of the weak central coherence theory as opposed to a ‘superiority’ characterization of strong local coherence. The weak central coherence theory has long been both supported and refuted in the ASD literature and has most recently been increasingly challenged by advances in neuroscience. The selected studies lend evidence to the notion of amplified localized perception rather than deficient global perception. In other words, WCC may represent superiority in ‘local processing’ rather than a deficit in global processing. Additionally, the right hemisphere, and specifically the extrastriate area, appear to be key in both the visual and lexicosemantic processes. 
Overactivity in the striate region seems to suggest inaccuracy in semantic language, which supports the link between the striate region and the atypical organization of the lexicosemantic system in ASD. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=autism%20spectrum%20disorder" title="autism spectrum disorder">autism spectrum disorder</a>, <a href="https://publications.waset.org/abstracts/search?q=neurology" title=" neurology"> neurology</a>, <a href="https://publications.waset.org/abstracts/search?q=visual%20processing" title=" visual processing"> visual processing</a>, <a href="https://publications.waset.org/abstracts/search?q=weak%20coherence" title=" weak coherence"> weak coherence</a> </p> <a href="https://publications.waset.org/abstracts/115591/challenging-weak-central-coherence-an-exploration-of-neurological-evidence-from-visual-processing-and-linguistic-studies-in-autism-spectrum-disorder" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/115591.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">127</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2810</span> Virtual and Visual Reconstructions in Museum Expositions</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Ekaterina%20Razuvalova">Ekaterina Razuvalova</a>, <a href="https://publications.waset.org/abstracts/search?q=Konstantin%20Rudenko"> Konstantin Rudenko</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In this article, the most successful international examples of visual and virtual reconstructions of historical and cultural objects, based on information and communication technologies, are presented. 3D reconstructions can demonstrate outward appearance and visualize different hypotheses connected to the represented object. Virtual reality can give us any time of day and season, any century and environment. We can see how people from different countries and different eras lived; we can get varied information about any object; and we can see historical complexes, now damaged or vanished, in a real city environment. These innovations confirm that 3D reconstruction is important to museum development. Considering the most interesting examples of visual and virtual reconstructions, we can note that a visual reconstruction is a 3D image of objects, historical complexes, buildings, and phenomena. Such images are static, and we can see them only as momentary views. A virtual reconstruction, by contrast, is an environment with its own time, rules, and phenomena. These reconstructions are continuous; seasons, time of day, and natural conditions can change within them. They can demonstrate the possibilities of a virtual world. In conclusion: new technologies give us opportunities to expand the boundaries of museum space, improve the capabilities of museum expositions, and create an emotional atmosphere of game-like immersion that can engage visitors. The use of network resources increases the number of visitors, and the possibilities of virtual reconstruction show the creative side of the museum business.
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=computer%20technologies" title="computer technologies">computer technologies</a>, <a href="https://publications.waset.org/abstracts/search?q=historical%20reconstruction" title=" historical reconstruction"> historical reconstruction</a>, <a href="https://publications.waset.org/abstracts/search?q=museums" title=" museums"> museums</a>, <a href="https://publications.waset.org/abstracts/search?q=museum%20expositions" title=" museum expositions"> museum expositions</a>, <a href="https://publications.waset.org/abstracts/search?q=virtual%20reconstruction" title=" virtual reconstruction"> virtual reconstruction</a> </p> <a href="https://publications.waset.org/abstracts/39522/virtual-and-visual-reconstructions-in-museum-expositions" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/39522.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">329</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2809</span> Causes of Blindness and Low Vision among Visually Impaired Population Supported by Welfare Organization in Ardabil Province in Iran</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Mohammad%20Maeiyat">Mohammad Maeiyat</a>, <a href="https://publications.waset.org/abstracts/search?q=Ali%20Maeiyat%20Ivatlou"> Ali Maeiyat Ivatlou</a>, <a href="https://publications.waset.org/abstracts/search?q=Rasul%20Fani%20Khiavi"> Rasul Fani Khiavi</a>, <a href="https://publications.waset.org/abstracts/search?q=Abouzar%20Maeiyat%20Ivatlou"> Abouzar Maeiyat Ivatlou</a>, <a href="https://publications.waset.org/abstracts/search?q=Parya%20Maeiyat"> Parya 
Maeiyat</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Purpose: Considering that visual impairment is still one of the country's health problems, this study was conducted to determine the causes of blindness and low vision among visually impaired members of the Ardabil Province welfare organization. Methods: The present study was a descriptive, national-census study carried out in the visually impaired population supported by the welfare organization in all urban and rural areas of Ardabil Province in 2013; collection of samples lasted for 7 months. The subjects were examined by an optometrist to determine their visual status (blindness or low vision) and then referred to an ophthalmologist in order to identify the main causes of visual impairment based on the International Classification of Diseases, version 10. Statistical analysis of the collected data was performed using SPSS software, version 18. Results: Overall, 403 subjects with mean age of years participated in this study. 73.2% were blind and 26.8% had low vision; by gender, 60.50% were male and 39.50% were female. Subjects were divided into three age groups: lower than 15 (11.2%), 15 to 49 (76.7%), and 50 and higher (12.1%). The age range was 1 to 78 years. The causes of blindness and low vision were, in descending order: optic atrophy (18.4%), retinitis pigmentosa (16.8%), corneal diseases (12.4%), chorioretinal diseases (9.4%), cataract (8.9%), glaucoma (8.2%), phthisis bulbi (7.2%), degenerative myopia (6.9%), microphthalmos (4%), amblyopia (3.2%), albinism (2.5%), and nystagmus (2%). Conclusion: In this study, the main causes of visual impairment were optic atrophy and retinitis pigmentosa; thus, specific prevention plans can be effective in reducing the incidence of visual disabilities.
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=blindness" title="blindness">blindness</a>, <a href="https://publications.waset.org/abstracts/search?q=low%20vision" title=" low vision"> low vision</a>, <a href="https://publications.waset.org/abstracts/search?q=welfare" title=" welfare"> welfare</a>, <a href="https://publications.waset.org/abstracts/search?q=ardabil" title=" ardabil"> ardabil</a> </p> <a href="https://publications.waset.org/abstracts/24741/causes-of-blindness-and-low-vision-among-visually-impaired-population-supported-by-welfare-organization-in-ardabil-province-in-iran" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/24741.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">440</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2808</span> Evaluation of Football Forecasting Models: 2021 Brazilian Championship Case Study</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Flavio%20Cordeiro%20Fontanella">Flavio Cordeiro Fontanella</a>, <a href="https://publications.waset.org/abstracts/search?q=Asla%20Medeiros%20e%20S%C3%A1"> Asla Medeiros e Sá</a>, <a href="https://publications.waset.org/abstracts/search?q=Moacyr%20Alvim%20Horta%20Barbosa%20da%20Silva"> Moacyr Alvim Horta Barbosa da Silva</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In the present work, we analyse the performance of football results forecasting models. In order to do so, we have performed the data collection from eight different forecasting models during the 2021 Brazilian football season. 
First, we guide the analysis through visual representations of the data, designed to highlight the most prominent features and enhance the interpretation of differences and similarities between the models. We propose using a 2-simplex triangle to investigate visual patterns in the results forecasting models. Next, we compute the expected points for every team playing in the championship and compare them to the final league standings, revealing interesting contrasts between actual and expected performances. Then, we evaluate the forecasts’ accuracy using the Ranked Probability Score (RPS); comparing models in this way accounts for small differences that may become consistent over time. Finally, we observe that the Wisdom of Crowds principle can be appropriately applied in this context, leading to a discussion of the practical use of results forecasts. This paper’s primary goal is to encourage discussion of football forecast performance. We hope to accomplish it by presenting appropriate criteria and easy-to-understand visual representations that can point out the relevant factors of the subject.
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=accuracy%20evaluation" title="accuracy evaluation">accuracy evaluation</a>, <a href="https://publications.waset.org/abstracts/search?q=Brazilian%20championship" title=" Brazilian championship"> Brazilian championship</a>, <a href="https://publications.waset.org/abstracts/search?q=football%20results%20forecasts" title=" football results forecasts"> football results forecasts</a>, <a href="https://publications.waset.org/abstracts/search?q=forecasting%20models" title=" forecasting models"> forecasting models</a>, <a href="https://publications.waset.org/abstracts/search?q=visual%20analysis" title=" visual analysis"> visual analysis</a> </p> <a href="https://publications.waset.org/abstracts/146056/evaluation-of-football-forecasting-models-2021-brazilian-championship-case-study" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/146056.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">95</span> </span> </div> </div> <ul class="pagination"> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=Bag%20of%20Visual%20Words%20%28BOVW%29&amp;page=9" rel="prev">&lsaquo;</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=Bag%20of%20Visual%20Words%20%28BOVW%29&amp;page=1">1</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=Bag%20of%20Visual%20Words%20%28BOVW%29&amp;page=2">2</a></li> <li class="page-item disabled"><span class="page-link">...</span></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=Bag%20of%20Visual%20Words%20%28BOVW%29&amp;page=7">7</a></li> <li class="page-item"><a class="page-link" 
href="https://publications.waset.org/abstracts/search?q=Bag%20of%20Visual%20Words%20%28BOVW%29&amp;page=8">8</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=Bag%20of%20Visual%20Words%20%28BOVW%29&amp;page=9">9</a></li> <li class="page-item active"><span class="page-link">10</span></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=Bag%20of%20Visual%20Words%20%28BOVW%29&amp;page=11">11</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=Bag%20of%20Visual%20Words%20%28BOVW%29&amp;page=12">12</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=Bag%20of%20Visual%20Words%20%28BOVW%29&amp;page=13">13</a></li> <li class="page-item disabled"><span class="page-link">...</span></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=Bag%20of%20Visual%20Words%20%28BOVW%29&amp;page=103">103</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=Bag%20of%20Visual%20Words%20%28BOVW%29&amp;page=104">104</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=Bag%20of%20Visual%20Words%20%28BOVW%29&amp;page=11" rel="next">&rsaquo;</a></li> </ul> </div> </main> <footer> <div id="infolinks" class="pt-3 pb-2"> <div class="container"> <div style="background-color:#f5f5f5;" class="p-3"> <div class="row"> <div class="col-md-2"> <ul class="list-unstyled"> About <li><a href="https://waset.org/page/support">About Us</a></li> <li><a href="https://waset.org/page/support#legal-information">Legal</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/WASET-16th-foundational-anniversary.pdf">WASET celebrates its 16th foundational anniversary</a></li> </ul> </div> <div 
class="col-md-2"> <ul class="list-unstyled"> Account <li><a href="https://waset.org/profile">My Account</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Explore <li><a href="https://waset.org/disciplines">Disciplines</a></li> <li><a href="https://waset.org/conferences">Conferences</a></li> <li><a href="https://waset.org/conference-programs">Conference Program</a></li> <li><a href="https://waset.org/committees">Committees</a></li> <li><a href="https://publications.waset.org">Publications</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Research <li><a href="https://publications.waset.org/abstracts">Abstracts</a></li> <li><a href="https://publications.waset.org">Periodicals</a></li> <li><a href="https://publications.waset.org/archive">Archive</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Open Science <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Science-Philosophy.pdf">Open Science Philosophy</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Science-Award.pdf">Open Science Award</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Society-Open-Science-and-Open-Innovation.pdf">Open Innovation</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Postdoctoral-Fellowship-Award.pdf">Postdoctoral Fellowship Award</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Scholarly-Research-Review.pdf">Scholarly Research Review</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Support <li><a href="https://waset.org/page/support">Support</a></li> <li><a href="https://waset.org/profile/messages/create">Contact Us</a></li> <li><a href="https://waset.org/profile/messages/create">Report Abuse</a></li> </ul> </div> </div> </div> </div> </div> <div 
class="container text-center"> <hr style="margin-top:0;margin-bottom:.3rem;"> <a href="https://creativecommons.org/licenses/by/4.0/" target="_blank" class="text-muted small">Creative Commons Attribution 4.0 International License</a> <div id="copy" class="mt-2">&copy; 2024 World Academy of Science, Engineering and Technology</div> </div> </footer> <a href="javascript:" id="return-to-top"><i class="fas fa-arrow-up"></i></a> <div class="modal" id="modal-template"> <div class="modal-dialog"> <div class="modal-content"> <div class="row m-0 mt-1"> <div class="col-md-12"> <button type="button" class="close" data-dismiss="modal" aria-label="Close"><span aria-hidden="true">&times;</span></button> </div> </div> <div class="modal-body"></div> </div> </div> </div> <script src="https://cdn.waset.org/static/plugins/jquery-3.3.1.min.js"></script> <script src="https://cdn.waset.org/static/plugins/bootstrap-4.2.1/js/bootstrap.bundle.min.js"></script> <script src="https://cdn.waset.org/static/js/site.js?v=150220211556"></script> <script> jQuery(document).ready(function() { /*jQuery.get("https://publications.waset.org/xhr/user-menu", function (response) { jQuery('#mainNavMenu').append(response); });*/ jQuery.get({ url: "https://publications.waset.org/xhr/user-menu", cache: false }).then(function(response){ jQuery('#mainNavMenu').append(response); }); }); </script> </body> </html>
