
Search results for: Bag of Visual Words (BOVW)

aria-labelledby="navbarDropdownPublications"> <a class="dropdown-item" href="https://publications.waset.org/abstracts">Abstracts</a> <a class="dropdown-item" href="https://publications.waset.org">Periodicals</a> <a class="dropdown-item" href="https://publications.waset.org/archive">Archive</a> </div> </li> <li class="nav-item"> <a class="nav-link" href="https://waset.org/page/support" title="Support">Support</a> </li> </ul> </div> </div> </nav> </div> </header> <main> <div class="container mt-4"> <div class="row"> <div class="col-md-9 mx-auto"> <form method="get" action="https://publications.waset.org/abstracts/search"> <div id="custom-search-input"> <div class="input-group"> <i class="fas fa-search"></i> <input type="text" class="search-query" name="q" placeholder="Author, Title, Abstract, Keywords" value="Bag of Visual Words (BOVW)"> <input type="submit" class="btn_search" value="Search"> </div> </div> </form> </div> </div> <div class="row mt-3"> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Commenced</strong> in January 2007</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Frequency:</strong> Monthly</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Edition:</strong> International</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Paper Count:</strong> 3106</div> </div> </div> </div> <h1 class="mt-3 mb-3 text-center" style="font-size:1.6rem;">Search results for: Bag of Visual Words (BOVW)</h1> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3106</span> Synthetic Aperture Radar Remote Sensing Classification Using the Bag of Visual Words Model to Land Cover Studies</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Reza%20Mohammadi">Reza Mohammadi</a>, <a href="https://publications.waset.org/abstracts/search?q=Mahmod%20R.%20Sahebi"> Mahmod R. Sahebi</a>, <a href="https://publications.waset.org/abstracts/search?q=Mehrnoosh%20Omati"> Mehrnoosh Omati</a>, <a href="https://publications.waset.org/abstracts/search?q=Milad%20Vahidi"> Milad Vahidi</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Classification of high resolution polarimetric Synthetic Aperture Radar (PolSAR) images plays an important role in land cover and land use management. Recently, classification algorithms based on Bag of Visual Words (BOVW) model have attracted significant interest among scholars and researchers in and out of the field of remote sensing. In this paper, BOVW model with pixel based low-level features has been implemented to classify a subset of San Francisco bay PolSAR image, acquired by RADARSAR 2 in C-band. We have used segment-based decision-making strategy and compared the result with the result of traditional Support Vector Machine (SVM) classifier. 90.95% overall accuracy of the classification with the proposed algorithm has shown that the proposed algorithm is comparable with the state-of-the-art methods. In addition to increase in the classification accuracy, the proposed method has decreased undesirable speckle effect of SAR images. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=Bag%20of%20Visual%20Words%20%28BOVW%29" title="Bag of Visual Words (BOVW)">Bag of Visual Words (BOVW)</a>, <a href="https://publications.waset.org/abstracts/search?q=classification" title=" classification"> classification</a>, <a href="https://publications.waset.org/abstracts/search?q=feature%20extraction" title=" feature extraction"> feature extraction</a>, <a href="https://publications.waset.org/abstracts/search?q=land%20cover%20management" title=" land cover management"> land cover management</a>, <a href="https://publications.waset.org/abstracts/search?q=Polarimetric%20Synthetic%20Aperture%20Radar%20%28PolSAR%29" title=" Polarimetric Synthetic Aperture Radar (PolSAR)"> Polarimetric Synthetic Aperture Radar (PolSAR)</a> </p> <a href="https://publications.waset.org/abstracts/95344/synthetic-aperture-radar-remote-sensing-classification-using-the-bag-of-visual-words-model-to-land-cover-studies" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/95344.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">209</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3105</span> Bag of Local Features for Person Re-Identification on Large-Scale Datasets</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Yixiu%20Liu">Yixiu Liu</a>, <a href="https://publications.waset.org/abstracts/search?q=Yunzhou%20Zhang"> Yunzhou Zhang</a>, <a href="https://publications.waset.org/abstracts/search?q=Jianning%20Chi"> Jianning Chi</a>, <a href="https://publications.waset.org/abstracts/search?q=Hao%20Chu"> Hao Chu</a>, <a href="https://publications.waset.org/abstracts/search?q=Rui%20Zheng"> Rui Zheng</a>, <a href="https://publications.waset.org/abstracts/search?q=Libo%20Sun"> Libo Sun</a>, <a href="https://publications.waset.org/abstracts/search?q=Guanghao%20Chen"> Guanghao Chen</a>, <a href="https://publications.waset.org/abstracts/search?q=Fangtong%20Zhou"> Fangtong Zhou</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In the last few years, large-scale person re-identification has attracted a lot of attention from video surveillance since it has a potential application prospect in public safety management. However, it is still a challenging job considering the variation in human pose, the changing illumination conditions and the lack of paired samples. Although the accuracy has been significantly improved, the data dependence of the sample training is serious. To tackle this problem, a new strategy is proposed based on bag of visual words (BoVW) model of designing the feature representation which has been widely used in the field of image retrieval. The local features are extracted, and more discriminative feature representation is obtained by cross-view dictionary learning (CDL), then the assignment map is obtained through k-means clustering. Finally, the BoVW histograms are formed which encodes the images with the statistics of the feature classes in the assignment map. Experiments conducted on the CUHK03, Market1501 and MARS datasets show that the proposed method performs favorably against existing approaches. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=bag%20of%20visual%20words" title="bag of visual words">bag of visual words</a>, <a href="https://publications.waset.org/abstracts/search?q=cross-view%20dictionary%20learning" title=" cross-view dictionary learning"> cross-view dictionary learning</a>, <a href="https://publications.waset.org/abstracts/search?q=person%20re-identification" title=" person re-identification"> person re-identification</a>, <a href="https://publications.waset.org/abstracts/search?q=reranking" title=" reranking"> reranking</a> </p> <a href="https://publications.waset.org/abstracts/85908/bag-of-local-features-for-person-re-identification-on-large-scale-datasets" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/85908.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">195</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3104</span> Bag of Words Representation Based on Weighting Useful Visual Words</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Fatma%20Abdedayem">Fatma Abdedayem</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The most effective and efficient methods in image categorization are almost based on bag-of-words (BOW) which presents image by a histogram of occurrence of visual words. In this paper, we propose a novel extension to this method. Firstly, we extract features in multi-scales by applying a color local descriptor named opponent-SIFT. Secondly, in order to represent image we use Spatial Pyramid Representation (SPR) and an extension to the BOW method which based on weighting visual words. Typically, the visual words are weighted during histogram assignment by computing the ratio of their occurrences in the image to the occurrences in the background. Finally, according to classical BOW retrieval framework, only a few words of the vocabulary is useful for image representation. Therefore, we select the useful weighted visual words that respect the threshold value. Experimentally, the algorithm is tested by using different image classes of PASCAL VOC 2007 and is compared against the classical bag-of-visual-words algorithm. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=BOW" title="BOW">BOW</a>, <a href="https://publications.waset.org/abstracts/search?q=useful%20visual%20words" title=" useful visual words"> useful visual words</a>, <a href="https://publications.waset.org/abstracts/search?q=weighted%20visual%20words" title=" weighted visual words"> weighted visual words</a>, <a href="https://publications.waset.org/abstracts/search?q=bag%20of%20visual%20words" title=" bag of visual words"> bag of visual words</a> </p> <a href="https://publications.waset.org/abstracts/14009/bag-of-words-representation-based-on-weighting-useful-visual-words" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/14009.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">436</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3103</span> The Analogy of Visual Arts and Visual Literacy</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Lindelwa%20Pepu">Lindelwa Pepu</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Visual Arts and Visual Literacy are defined with distinction from one another. Visual Arts are known for art forms such as drawing, painting, and photography, just to name a few. At the same time, Visual Literacy is known for learning through images. The Visual Literacy phenomenon may be attributed to the use of images was first established for creating memories and enjoyment. As time evolved, images became the center and essential means of making contact between people. Gradually, images became a means for interpreting and understanding words through visuals, that being Visual Arts. The purpose of this study is to present the analogy of the two terms Visual Arts and Visual Literacy, which are defined and compared through early practicing visual artists as well as relevant researchers to reveal how they interrelate with one another. This is a qualitative study that uses an interpretive approach as it seeks to understand and explain the interest of the study. The results reveal correspondence of the analogy between the two terms through various writers of early and recent years. This study recommends the significance of the two terms and the role they play in relation to other fields of study. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=visual%20arts" title="visual arts">visual arts</a>, <a href="https://publications.waset.org/abstracts/search?q=visual%20literacy" title=" visual literacy"> visual literacy</a>, <a href="https://publications.waset.org/abstracts/search?q=pictures" title=" pictures"> pictures</a>, <a href="https://publications.waset.org/abstracts/search?q=images" title=" images"> images</a> </p> <a href="https://publications.waset.org/abstracts/165940/the-analogy-of-visual-arts-and-visual-literacy" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/165940.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">166</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3102</span> To Estimate the Association between Visual Stress and Visual Perceptual Skills</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Vijay%20Reena%20Durai">Vijay Reena Durai</a>, <a href="https://publications.waset.org/abstracts/search?q=Krithica%20Srinivasan"> Krithica Srinivasan</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Introduction: The two fundamental skills involved in the growth and wellbeing of any child can be categorized into visual motor and perceptual skills. Visual stress is a disorder which is characterized by visual discomfort, blurred vision, misspelling words, skipping lines, letters bunching together. There is a need to understand the deficits in perceptual skills among children with visual stress. Aim: To estimate the association between visual stress and visual perceptual skills Objective: To compare visual perceptual skills of children with and without visual stress Methodology: Children between 8 to 15 years of age participated in this cross-sectional study. All children with monocular visual acuity better than or equal to 6/6 were included. Visual perceptual skills were measured using test for visual perceptual skills (TVPS) tool. Reading speed was measured with the chosen colored overlay using Wilkins reading chart and pattern glare score was estimated using a 3cpd gratings. Visual stress was defined as change in reading speed of greater than or equal to 10% and a pattern glare score of greater than or equal to 4. Results: 252 children participated in this study and the male: female ratio of 3:2. Majority of the children preferred Magenta (28%) and Yellow (25%) colored overlay for reading. There was a significant difference between the two groups (MD=1.24±0.6) (p<0.04, 95% CI 0.01-2.43) only in the sequential memory skills. The prevalence of visual stress in this group was found to be 31% (n=78). Binary logistic regression showed that odds ratio of having poor visual perceptual skills was OR: 2.85 (95% CI 1.08-7.49) among children with visual stress. Conclusion: Children with visual stress are found to have three times poorer visual perceptual skills than children without visual stress. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=visual%20stress" title="visual stress">visual stress</a>, <a href="https://publications.waset.org/abstracts/search?q=visual%20perceptual%20skills" title=" visual perceptual skills"> visual perceptual skills</a>, <a href="https://publications.waset.org/abstracts/search?q=colored%20overlay" title=" colored overlay"> colored overlay</a>, <a href="https://publications.waset.org/abstracts/search?q=pattern%20glare" title=" pattern glare"> pattern glare</a> </p> <a href="https://publications.waset.org/abstracts/41580/to-estimate-the-association-between-visual-stress-and-visual-perceptual-skills" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/41580.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">388</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3101</span> The Effect of Visual Access to Greenspace and Urban Space on a False Memory Learning Task</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Bryony%20Pound">Bryony Pound</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This study investigated how views of green or urban space affect learning performance. It provides evidence of the value of visual access to greenspace in work and learning environments, and builds on the extensive research into the cognitive and learning-related benefits of access to green and natural spaces, particularly in learning environments. It demonstrates that benefits of visual access to natural spaces whilst learning can produce statistically significant faster responses than those facing urban views after only 5 minutes. The primary hypothesis of this research was that a greenspace view would improve short-term learning. Participants were randomly assigned to either a view of parkland or of urban buildings from the same room. They completed a psychological test of two stages. The first stage consisted of a presentation of words from eight different categories (four manmade and four natural). Following this a 2.5 minute break was given; participants were not prompted to look out of the window, but all were observed doing so. The second stage of the test involved a word recognition/false memory test of three types. Type 1 was presented words from each category; Type 2 was non-presented words from those same categories; and Type 3 was non-presented words from different categories. Participants were asked to respond with whether they thought they had seen the words before or not. Accuracy of responses and reaction times were recorded. The key finding was that reaction times for Type 2 words (highest difficulty) were significantly different between urban and green view conditions. Those with an urban view had slower reaction times for these words, so a view of greenspace resulted in better information retrieval for word and false memory recognition. Importantly, this difference was found after only 5 minutes of exposure to either view, during winter, and with a sample size of only 26. Greenspace views improve performance in a learning task. This provides a case for better visual access to greenspace in work and learning environments. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=benefits" title="benefits">benefits</a>, <a href="https://publications.waset.org/abstracts/search?q=greenspace" title=" greenspace"> greenspace</a>, <a href="https://publications.waset.org/abstracts/search?q=learning" title=" learning"> learning</a>, <a href="https://publications.waset.org/abstracts/search?q=restoration" title=" restoration"> restoration</a> </p> <a href="https://publications.waset.org/abstracts/85294/the-effect-of-visual-access-to-greenspace-and-urban-space-on-a-false-memory-learning-task" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/85294.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">127</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3100</span> The Contemporary Visual Spectacle: Critical Visual Literacy</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Lai-Fen%20Yang">Lai-Fen Yang</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In this increasingly visual world, how can we best decipher and understand the many ways that our everyday lives are organized around looking practices and the many images we encounter each day? Indeed, how we interact with and interpret visual images is a basic component of human life. Today, however, we are living in one of the most artificial visual and image-saturated cultures in human history, which makes understanding the complex construction and multiple social functions of visual imagery more important than ever before. Themes regarding our experience of a visually pervasive mediated culture, here, termed visual spectacle. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=visual%20culture" title="visual culture">visual culture</a>, <a href="https://publications.waset.org/abstracts/search?q=contemporary" title=" contemporary"> contemporary</a>, <a href="https://publications.waset.org/abstracts/search?q=images" title=" images"> images</a>, <a href="https://publications.waset.org/abstracts/search?q=literacy" title=" literacy"> literacy</a> </p> <a href="https://publications.waset.org/abstracts/9045/the-contemporary-visual-spectacle-critical-visual-literacy" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/9045.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">513</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3099</span> Upside Down Words as Initial Clinical Presentation of an Underlying Acute Ischemic Stroke</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Ramuel%20Spirituel%20Mattathiah%20A.%20San%20Juan">Ramuel Spirituel Mattathiah A. 
Abstract: Background: Reversal of vision metamorphopsia is a transient form of metamorphopsia described as an upside-down alteration of the visual field in the coronal plane. Patients describe objects such as cups as upside down, yet the tea does not spill, and people appear to walk on their heads. It is extremely rare as a stable finding lasting days or weeks. We report a case in which this type of metamorphopsia occurred only in written words and lasted for six months. Objective: To the best of our knowledge, we report the first occurrence of reversal of vision metamorphopsia, described as inverted words, as the sole initial presentation of an underlying stroke. Case Presentation: We report a 59-year-old male with poorly controlled hypertension and diabetes mellitus who presented with a 3-day history of difficulty reading, described as the words being turned upside down as if inverted horizontally, followed by the progression of deficits such as right homonymous hemianopia, achromatopsia, and prosopagnosia. Cranial magnetic resonance imaging (MRI) revealed an acute infarct in the left posterior cerebral artery territory. Follow-up after six months revealed improvement of the visual field cut but persistence of the higher cortical function deficits. Conclusion: We report the first rare occurrence of metamorphopsia described as purely inverted words as the sole initial presentation of an underlying stroke. The differential diagnoses of a patient presenting with text reversal metamorphopsia should include stroke in the occipitotemporal areas. This case further expands the landscape of metamorphopsias because of its exclusivity to written words and prolonged duration. Knowing these clinical features will help identify the lesion locus and improve subsequent stroke care, especially in time-bound management such as intravenous thrombolysis.
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=rare%20presentation" title="rare presentation">rare presentation</a>, <a href="https://publications.waset.org/abstracts/search?q=text%20reversal%20metamorphopsia" title=" text reversal metamorphopsia"> text reversal metamorphopsia</a>, <a href="https://publications.waset.org/abstracts/search?q=ischemic%20stroke" title=" ischemic stroke"> ischemic stroke</a>, <a href="https://publications.waset.org/abstracts/search?q=stroke" title=" stroke"> stroke</a> </p> <a href="https://publications.waset.org/abstracts/169453/upside-down-words-as-initial-clinical-presentation-of-an-underlying-acute-ischemic-stroke" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/169453.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">59</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3098</span> The Development of Chinese-English Homophonic Word Pairs Databases for English Teaching and Learning</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Yuh-Jen%20Wu">Yuh-Jen Wu</a>, <a href="https://publications.waset.org/abstracts/search?q=Chun-Min%20Lin"> Chun-Min Lin</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Homophonic words are common in Mandarin Chinese which belongs to the tonal language family. Using homophonic cues to study foreign languages is one of the learning techniques of mnemonics that can aid the retention and retrieval of information in the human memory. When learning difficult foreign words, some learners transpose them with words in a language they are familiar with to build an association and strengthen working memory. These phonological clues are beneficial means for novice language learners. In the classroom, if mnemonic skills are used at the appropriate time in the instructional sequence, it may achieve their maximum effectiveness. For Chinese-speaking students, proper use of Chinese-English homophonic word pairs may help them learn difficult vocabulary. In this study, a database program is developed by employing Visual Basic. The database contains two corpora, one with Chinese lexical items and the other with English ones. The Chinese corpus contains 59,053 Chinese words that were collected by a web crawler. The pronunciations of this group of words are compared with words in an English corpus based on WordNet, a lexical database for the English language. Words in both databases with similar pronunciation chunks and batches are detected. A total of approximately 1,000 Chinese lexical items are located in the preliminary comparison. These homophonic word pairs can serve as a valuable tool to assist Chinese-speaking students in learning and memorizing new English vocabulary. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=Chinese" title="Chinese">Chinese</a>, <a href="https://publications.waset.org/abstracts/search?q=corpus" title=" corpus"> corpus</a>, <a href="https://publications.waset.org/abstracts/search?q=English" title=" English"> English</a>, <a href="https://publications.waset.org/abstracts/search?q=homophonic%20words" title=" homophonic words"> homophonic words</a>, <a href="https://publications.waset.org/abstracts/search?q=vocabulary" title=" vocabulary"> vocabulary</a> </p> <a href="https://publications.waset.org/abstracts/99745/the-development-of-chinese-english-homophonic-word-pairs-databases-for-english-teaching-and-learning" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/99745.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">182</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3097</span> Applications of Visual Ethnography in Public Anthropology</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Subramaniam%20Panneerselvam">Subramaniam Panneerselvam</a>, <a href="https://publications.waset.org/abstracts/search?q=Gunanithi%20Perumal"> Gunanithi Perumal</a>, <a href="https://publications.waset.org/abstracts/search?q=KP%20Subin"> KP Subin</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The Visual Ethnography is used to document the culture of a community through a visual means. It could be either photography or audio-visual documentation. The visual ethnographic techniques are widely used in visual anthropology. The visual anthropologists use the camera to capture the cultural image of the studied community. There is a scope for subjectivity while the culture is documented by an external person. But the upcoming of the public anthropology provides an opportunity for the participants to document their own culture. There is a need to equip the participants with the skill of doing visual ethnography. The mobile phone technology provides visual documentation facility to everyone to capture the moments instantly. The visual ethnography facilitates the multiple-interpretation for the audiences. This study explores the effectiveness of visual ethnography among the tribal youth through public anthropology perspective. The case study was conducted to equip the tribal youth of Nilgiris in visual ethnography and the outcome of the experiment shared in this paper. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=visual%20ethnography" title="visual ethnography">visual ethnography</a>, <a href="https://publications.waset.org/abstracts/search?q=visual%20anthropology" title=" visual anthropology"> visual anthropology</a>, <a href="https://publications.waset.org/abstracts/search?q=public%20anthropology" title=" public anthropology"> public anthropology</a>, <a href="https://publications.waset.org/abstracts/search?q=multiple-interpretation" title=" multiple-interpretation"> multiple-interpretation</a>, <a href="https://publications.waset.org/abstracts/search?q=case%20study" title=" case study"> case study</a> </p> <a href="https://publications.waset.org/abstracts/127577/applications-of-visual-ethnography-in-public-anthropology" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/127577.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">183</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3096</span> Visual Identity Components of Tourist Destination</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Petra%20Barisic">Petra Barisic</a>, <a href="https://publications.waset.org/abstracts/search?q=Zrinka%20Blazevic"> Zrinka Blazevic</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In the world of modern communications, visual identity has predominant influence on the overall success of tourist destinations, but despite of these, the problem of designing thriving tourist destination visual identity and their components are hardly addressed. This study highlights the importance of building and managing the visual identity of tourist destination, and based on the empirical study of well-known Mediterranean destination of Croatia analyses three main components of tourist destination visual identity; name, slogan, and logo. Moreover, the paper shows how respondents perceive each component of Croatia’s visual identity. According to study, logo is the most important, followed by the name and slogan. Research also reveals that Croatian economy lags behind developed countries in understanding the importance of visual identity, and its influence on marketing goal achievements. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=components%20of%20visual%20identity" title="components of visual identity">components of visual identity</a>, <a href="https://publications.waset.org/abstracts/search?q=Croatia" title=" Croatia"> Croatia</a>, <a href="https://publications.waset.org/abstracts/search?q=tourist%20destination" title=" tourist destination"> tourist destination</a>, <a href="https://publications.waset.org/abstracts/search?q=visual%20identity" title=" visual identity "> visual identity </a> </p> <a href="https://publications.waset.org/abstracts/6602/visual-identity-components-of-tourist-destination" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/6602.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">1050</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3095</span> Visual Thing Recognition with Binary Scale-Invariant Feature Transform and Support Vector Machine Classifiers Using Color Information</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Wei-Jong%20Yang">Wei-Jong Yang</a>, <a href="https://publications.waset.org/abstracts/search?q=Wei-Hau%20Du"> Wei-Hau Du</a>, <a href="https://publications.waset.org/abstracts/search?q=Pau-Choo%20Chang"> Pau-Choo Chang</a>, <a href="https://publications.waset.org/abstracts/search?q=Jar-Ferr%20Yang"> Jar-Ferr Yang</a>, <a href="https://publications.waset.org/abstracts/search?q=Pi-Hsia%20Hung"> Pi-Hsia Hung</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The demands of smart visual thing recognition in various devices have been increased rapidly for daily smart production, living and learning systems in recent years. This paper proposed a visual thing recognition system, which combines binary scale-invariant feature transform (SIFT), bag of words model (BoW), and support vector machine (SVM) by using color information. Since the traditional SIFT features and SVM classifiers only use the gray information, color information is still an important feature for visual thing recognition. With color-based SIFT features and SVM, we can discard unreliable matching pairs and increase the robustness of matching tasks. The experimental results show that the proposed object recognition system with color-assistant SIFT SVM classifier achieves higher recognition rate than that with the traditional gray SIFT and SVM classification in various situations. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=color%20moments" title="color moments">color moments</a>, <a href="https://publications.waset.org/abstracts/search?q=visual%20thing%20recognition%20system" title=" visual thing recognition system"> visual thing recognition system</a>, <a href="https://publications.waset.org/abstracts/search?q=SIFT" title=" SIFT"> SIFT</a>, <a href="https://publications.waset.org/abstracts/search?q=color%20SIFT" title=" color SIFT"> color SIFT</a> </p> <a href="https://publications.waset.org/abstracts/62857/visual-thing-recognition-with-binary-scale-invariant-feature-transform-and-support-vector-machine-classifiers-using-color-information" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/62857.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">467</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3094</span> 3D Text Toys: Creative Approach to Experiential and Immersive Learning for World Literacy</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Azyz%20Sharafy">Azyz Sharafy</a> </p> <p class="card-text"><strong>Abstract:</strong></p> 3D Text Toys is an innovative and creative approach that utilizes 3D text objects to enhance creativity, literacy, and basic learning in an enjoyable and gamified manner. By using 3D Text Toys, children can develop their creativity, visually learn words and texts, and apply their artistic talents within their creative abilities. This process incorporates haptic engagement with 2D and 3D texts, word building, and mechanical construction of everyday objects, thereby facilitating better word and text retention. The concept involves constructing visual objects made entirely out of 3D text/words, where each component of the object represents a word or text element. For instance, a bird can be recreated using words or text shaped like its wings, beak, legs, head, and body, resulting in a 3D representation of the bird purely composed of text. This can serve as an art piece or a learning tool in the form of a 3D text toy. These 3D text objects or toys can be crafted using natural materials such as leaves, twigs, strings, or ropes, or they can be made from various physical materials using traditional crafting tools. Digital versions of these objects can be created using 2D or 3D software on devices like phones, laptops, iPads, or computers. To transform digital designs into physical objects, computerized machines such as CNC routers, laser cutters, and 3D printers can be utilized. Once the parts are printed or cut out, students can assemble the 3D texts by gluing them together, resulting in natural or everyday 3D text objects. These objects can be painted to create artistic pieces or text toys, and the addition of wheels can transform them into moving toys. One of the significant advantages of this visual and creative object-based learning process is that students not only learn words but also derive enjoyment from the process of creating, painting, and playing with these objects. The ownership and creation process further enhances comprehension and word retention. 
Moreover, for individuals with learning disabilities such as dyslexia, ADD (Attention Deficit Disorder), or other learning difficulties, the visual and haptic approach of 3D Text Toys can serve as an additional creative and personalized learning aid. The application of 3D Text Toys extends to English and to any other written language; the adaptation and creative application may vary depending on the country, space, and native written language. Furthermore, the implementation of this visual and haptic learning tool can be tailored to teach foreign languages based on age level and comprehension requirements. In summary, this creative, haptic, and visual approach has the potential to serve as a global literacy tool.
Keywords: 3D text toys, creative, artistic, visual learning for world literacy
PDF: https://publications.waset.org/abstracts/166053.pdf (Downloads: 64)

3093. The Facilitatory Effect of Phonological Priming on Visual Word Recognition in Arabic as a Function of Lexicality and Overlap Positions
Authors: Ali Al Moussaoui
Abstract: An experiment was designed to assess the performance of 24 Lebanese adults (mean age 29:5 years) in a lexical decision making (LDM) task, in order to find out how the facilitatory effect of phonological priming (PP) affects the speed of visual word recognition in Arabic as lexicality (wordhood) and phonological overlap position (POP) vary. The experiment is in line with previous research on phonological priming in the light of the cohort theory and in relation to visual word recognition, and it departs from research on Arabic in which the importance of the consonantal root as a distinct morphological unit is confirmed. Based on previous research, it is hypothesized that (1) PP has a facilitating effect in LDM with words but not with nonwords and (2) final phonological overlap between the prime and the target is more facilitatory than initial overlap. An LDM task was programmed in PsychoPy. Participants had to decide whether a target (e.g., bayn 'between') preceded by a prime (e.g., bayt 'house') is a word or not. There were four conditions: no PP (NP), nonwords priming nonwords (NN), nonwords priming words (NW), and words priming words (WW). The conditions were simultaneously controlled for word length, wordhood, and POP.
The interstimulus interval was 700 ms. Within the PP conditions, POP was controlled with three overlap positions between the primes and the targets: initial (e.g., asad 'lion' and asaf 'sorrow'), final (e.g., kattab 'cause to write' 2sg-mas and rattab 'organize' 2sg-mas), or two-segmented (e.g., namle 'ant' and naħle 'bee'). There were 96 trials, 24 in each condition, using a within-subject design. Concerning (1), the highest average reaction time (RT) was in NN, followed by NW and finally WW; the differences were statistically significant only between the pairs NN-NW and NN-WW. Regarding (2), the shortest RT was in the two-segmented overlap condition, followed by the final POP and then the initial POP. The difference between the two-segmented and the initial overlap was significant, while the other pairwise comparisons were not. Based on these results, PP emerges as a facilitatory phenomenon that is highly sensitive to lexicality and POP. While PP can have a facilitating effect under lexicality, it shows no facilitation in its absence, which intersects with several previous findings. Participants were more sensitive to final phonological overlap than to initial overlap, which also coincides with a body of earlier literature. The results contradict the cohort theory's stress on the onset overlap position and instead give more weight to final overlap, and even more to the two-segmented one. In conclusion, this study confirms the facilitating effect of PP with words but not when the stimuli (at least the primes, and at most both the primes and targets) are nonwords. It also shows that two-segmented priming is the most influential in LDM in Arabic.
Keywords: lexicality, phonological overlap positions, phonological priming, visual word recognition
PDF: https://publications.waset.org/abstracts/97923.pdf (Downloads: 185)
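The abstract states that the LDM task was programmed in PsychoPy with a 700 ms interstimulus interval; a minimal single-trial sketch along those lines is given below. The prime duration, response keys, window settings, and the use of romanized rather than Arabic-script stimuli are assumptions not specified in the abstract.

```python
from psychopy import visual, core, event

win = visual.Window(size=(1024, 768), color="grey", units="pix")
clock = core.Clock()

def ldm_trial(prime_text, target_text, prime_dur=0.5, isi=0.7):
    """One lexical-decision trial: prime, blank ISI (700 ms per the abstract), then the
    target until a word/nonword keypress; returns the pressed key and reaction time."""
    visual.TextStim(win, text=prime_text, height=40).draw()
    win.flip()
    core.wait(prime_dur)          # prime duration (assumed)
    win.flip()                    # blank screen
    core.wait(isi)                # interstimulus interval
    visual.TextStim(win, text=target_text, height=40).draw()
    win.flip()
    clock.reset()
    keys = event.waitKeys(keyList=["f", "j"], timeStamped=clock)  # 'f' = word, 'j' = nonword (assumed)
    return keys[0]

key, rt = ldm_trial("bayt", "bayn")   # prime/target pair taken from the abstract's example
win.close()
```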
3092. The Repetition of New Words and Information in Mandarin-Speaking Children: A Corpus-Based Study
Authors: Jian-Jun Gao
Abstract: Repetition serves a variety of functions in conversation. When young children first learn to speak, they often repeat words from the adult's most recent utterance, with both a learning and a social function. The objective of this study was to ascertain whether repetitions are equivalent in indicating attention to new words and to the initial repeat of information in conversation. Based on the observation of naturally occurring language use in the Taiwan Corpus of Child Mandarin (TCCM), the results provide empirical support for previous findings that children are more likely to repeat new words they are offered than to repeat new information. As children get older, there is a drop in the repetition of both new words and new information.
Keywords: acquisition, corpus, Mandarin, new words, new information, repetition
PDF: https://publications.waset.org/abstracts/106580.pdf (Downloads: 149)

3091. Bag of Words Representation Based on Fusing Two Color Local Descriptors and Building Multiple Dictionaries
Authors: Fatma Abdedayem
Abstract: We propose an extension of the well-known bag of words (BOW) method, which has proved successful in the field of image categorization. In practice, this method is based on representing an image with visual words. In this work, we first extract features from images using Spatial Pyramid Representation (SPR) and two dissimilar color descriptors, opponent-SIFT and transformed-color-SIFT. Secondly, we fuse the color local features by joining the two histograms coming from these descriptors. Thirdly, after collecting all features, we generate multiple dictionaries from n random feature subsets, obtained by dividing all features into n random groups. Using these dictionaries separately, each image can then be represented by n histograms, which are concatenated horizontally to form the final histogram, allowing Multiple Dictionaries to be combined (MDBoW). In the final step, to classify images, we apply a Support Vector Machine (SVM) to the generated histograms. Experimentally, we used two dissimilar image datasets to test our proposal: Caltech 256 and PASCAL VOC 2007.
Keywords: bag of words (BOW), color descriptors, multi-dictionaries, MDBoW
PDF: https://publications.waset.org/abstracts/14637.pdf (Downloads: 297)
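The multiple-dictionary step described above (splitting the pooled descriptors into n random groups, learning one codebook per group, and concatenating the per-codebook histograms) can be sketched as follows; the clustering algorithm, the number of dictionaries, and the vocabulary size are assumptions, and the SPR and descriptor-fusion steps are omitted.

```python
import numpy as np
from sklearn.cluster import KMeans

def build_multiple_dictionaries(all_descriptors, n_dicts=4, n_words=100, seed=0):
    """Split the pooled descriptors into n random groups and learn one codebook per group."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(all_descriptors))
    groups = np.array_split(idx, n_dicts)
    return [KMeans(n_clusters=n_words, n_init=4, random_state=seed).fit(all_descriptors[g])
            for g in groups]

def mdbow_encode(image_descriptors, codebooks):
    """Encode the image once per codebook and concatenate the normalized histograms."""
    hists = []
    for cb in codebooks:
        words = cb.predict(image_descriptors)
        h = np.bincount(words, minlength=cb.n_clusters).astype(float)
        hists.append(h / max(h.sum(), 1.0))
    return np.concatenate(hists)
```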
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=bag%20of%20words%20%28BOW%29" title="bag of words (BOW)">bag of words (BOW)</a>, <a href="https://publications.waset.org/abstracts/search?q=color%20descriptors" title=" color descriptors"> color descriptors</a>, <a href="https://publications.waset.org/abstracts/search?q=multi-dictionaries" title=" multi-dictionaries"> multi-dictionaries</a>, <a href="https://publications.waset.org/abstracts/search?q=MDBoW" title=" MDBoW"> MDBoW</a> </p> <a href="https://publications.waset.org/abstracts/14637/bag-of-words-representation-based-on-fusing-two-color-local-descriptors-and-building-multiple-dictionaries" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/14637.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">297</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3090</span> A Word-to-Vector Formulation for Word Representation </h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Sandra%20Rizkallah">Sandra Rizkallah</a>, <a href="https://publications.waset.org/abstracts/search?q=Amir%20F.%20Atiya"> Amir F. Atiya</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This work presents a novel word to vector representation that is based on embedding the words into a sphere, whereby the dot product of the corresponding vectors represents the similarity between any two words. Embedding the vectors into a sphere enabled us to take into consideration the antonymity between words, not only the synonymity, because of the suitability to handle the polarity nature of words. For example, a word and its antonym can be represented as a vector and its negative. Moreover, we have managed to extract an adequate vocabulary. The obtained results show that the proposed approach can capture the essence of the language, and can be generalized to estimate a correct similarity of any new pair of words. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=natural%20language%20processing" title="natural language processing">natural language processing</a>, <a href="https://publications.waset.org/abstracts/search?q=word%20to%20vector" title=" word to vector"> word to vector</a>, <a href="https://publications.waset.org/abstracts/search?q=text%20similarity" title=" text similarity"> text similarity</a>, <a href="https://publications.waset.org/abstracts/search?q=text%20mining" title=" text mining"> text mining</a> </p> <a href="https://publications.waset.org/abstracts/81808/a-word-to-vector-formulation-for-word-representation" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/81808.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">275</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3089</span> The Involvement of Visual and Verbal Representations Within a Quantitative and Qualitative Visual Change Detection Paradigm</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Laura%20Jenkins">Laura Jenkins</a>, <a href="https://publications.waset.org/abstracts/search?q=Tim%20Eschle"> Tim Eschle</a>, <a href="https://publications.waset.org/abstracts/search?q=Joanne%20Ciafone"> Joanne Ciafone</a>, <a href="https://publications.waset.org/abstracts/search?q=Colin%20Hamilton"> Colin Hamilton</a> </p> <p class="card-text"><strong>Abstract:</strong></p> An original working memory model suggested the separation of visual and verbal systems in working memory architecture, in which only visual working memory components were used during visual working memory tasks. It was later suggested that the visuo spatial sketch pad was the only memory component at use during visual working memory tasks, and components such as the phonological loop were not considered. In more recent years, a contrasting approach has been developed with the use of an executive resource to incorporate both visual and verbal representations in visual working memory paradigms. This was supported using research demonstrating the use of verbal representations and an executive resource in a visual matrix patterns task. The aim of the current research is to investigate the working memory architecture during both a quantitative and a qualitative visual working memory task. A dual task method will be used. Three secondary tasks will be used which are designed to hit specific components within the working memory architecture – Dynamic Visual Noise (visual components), Visual Attention (spatial components) and Verbal Attention (verbal components). A comparison of the visual working memory tasks will be made to discover if verbal representations are at use, as the previous literature suggested. This direct comparison has not been made so far in the literature. Considerations will be made as to whether a domain specific approach should be employed when discussing visual working memory tasks, or whether a more domain general approach could be used instead. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=semantic%20organisation" title="semantic organisation">semantic organisation</a>, <a href="https://publications.waset.org/abstracts/search?q=visual%20memory" title=" visual memory"> visual memory</a>, <a href="https://publications.waset.org/abstracts/search?q=change%20detection" title=" change detection"> change detection</a> </p> <a href="https://publications.waset.org/abstracts/22696/the-involvement-of-visual-and-verbal-representations-within-a-quantitative-and-qualitative-visual-change-detection-paradigm" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/22696.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">595</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3088</span> Morphological Rules of Bangla Repetition Words for UNL Based Machine Translation </h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Nawab%20Yousuf%20Ali">Nawab Yousuf Ali</a>, <a href="https://publications.waset.org/abstracts/search?q=S.%20Golam"> S. Golam</a>, <a href="https://publications.waset.org/abstracts/search?q=A.%20Ameer"> A. Ameer</a>, <a href="https://publications.waset.org/abstracts/search?q=Ashok%20Toru%20Roy"> Ashok Toru Roy</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This paper develops new morphological rules suitable for Bangla repetition words to be incorporated into an inter lingua representation called Universal Networking Language (UNL). The proposed rules are to be used to combine verb roots and their inflexions to produce words which are then combined with other similar types of words to generate repetition words. This paper outlines the format of morphological rules for different types of repetition words that come from verb roots based on the framework of UNL provided by the UNL centre of the Universal Networking Digital Language (UNDL) foundation. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=Universal%20Networking%20Language%20%28UNL%29" title="Universal Networking Language (UNL)">Universal Networking Language (UNL)</a>, <a href="https://publications.waset.org/abstracts/search?q=universal%20word%20%28UW%29" title=" universal word (UW)"> universal word (UW)</a>, <a href="https://publications.waset.org/abstracts/search?q=head%20word%20%28HW%29" title=" head word (HW)"> head word (HW)</a>, <a href="https://publications.waset.org/abstracts/search?q=Bangla-UNL%20Dictionary" title=" Bangla-UNL Dictionary"> Bangla-UNL Dictionary</a>, <a href="https://publications.waset.org/abstracts/search?q=morphological%20rule" title=" morphological rule"> morphological rule</a>, <a href="https://publications.waset.org/abstracts/search?q=enconverter%20%28EnCo%29" title=" enconverter (EnCo)"> enconverter (EnCo)</a> </p> <a href="https://publications.waset.org/abstracts/18524/morphological-rules-of-bangla-repetition-words-for-unl-based-machine-translation" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/18524.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">310</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3087</span> Exploring the Visual Representations of Neon Signs and Its Vernacular Tacit Knowledge of Neon Making</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Brian%20Kwok">Brian Kwok</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Hong Kong is well-known for its name as "the Pearl of the Orient", due to its spectacular night-view with vast amount of decorative neon lights on the streets. Neon signs are first used as the pervasive media of communication for all kinds of commercial advertising, ranging from movie theatres to nightclubs and department stores, and later appropriated by artists as medium of artwork. As a well-established visual language, it displays texts in bilingual format due to British's colonial influence, which are sometimes arranged in an opposite reading order. Research on neon signs as a visual representation is rare but significant because they are part of people’s collective memories of the unique cityscapes which associate the shifting values of people's daily lives and culture identity. Nevertheless, with the current policy to remove abandoned neon signs, their total number dramatically declines recently. The Buildings Department found an estimation of 120,000 unauthorized signboards (including neon signs) in Hong Kong in 2013, and the removal of such is at a rate of estimated 1,600 per year since 2006. In other words, the vernacular cultural values and historical continuity of neon signs will gradually be vanished if no immediate action is taken in documenting them for the purpose of research and cultural preservation. Therefore, the Hong Kong Neon Signs Archive project was established in June of 2015, and over 100 neon signs are photo-documented so far. By content analysis, this project will explore the two components of neon signs – the use of visual languages and vernacular tacit knowledge of neon makers. 
It attempts to answer the following questions about Hong Kong's neon signs: 'What are the ways in which visual representations are used to produce our cityscapes and streetscapes?'; 'What are the visual languages and conventions of usage in different business types?'; and 'What tacit knowledge is applied when producing these visual forms of neon signs?'
Keywords: cityscapes, neon signs, tacit knowledge, visual representation

3086. Determining the Number of Words Required to Fulfil the Writing Task in an English Proficiency Exam with the Raters' Scores
Authors: Defne Akinci Midas
Abstract: The aim of this study was to determine the minimum and maximum number of words sufficient to fulfil the writing task in the local English Proficiency Exam (EPE) produced and administered at the Middle East Technical University, Ankara, Turkey. The relationship between the number of words and the scores awarded by two raters to the written products in three online EPEs administered in 2020 was examined. The means, standard deviations, percentages, ranges, and minimum and maximum scores were computed, as well as correlations between the scores and the texts containing 0-50, 51-100, 101-150, 151-200, 201-250, 251-300 words, and so on. The results showed that the raters did not award a full score to texts of fewer than 100 words, while texts of around 200 words were awarded the highest scores. The highest word count earning the highest scores was about 225; beyond that, scores were either stable or lower. A positive low-to-moderate correlation was found between the number of words and the scores awarded to the texts, so the idea of 'the longer, the better' did not apply here. The results also showed that texts of between roughly 101 and 225 words were sufficient to fulfil the writing task and to fully display writing skill and language ability in the specific case of this exam.
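The analysis above is essentially a band-wise descriptive summary plus a correlation between word count and score. A minimal sketch of that kind of computation is shown below, assuming hypothetical word counts and averaged rater scores (not the study's data) and using Pearson correlation as a stand-in for whichever coefficient the authors used.

```python
import numpy as np
from scipy import stats

# Hypothetical essays: word counts and averaged rater scores (illustrative only).
word_counts = np.array([45, 80, 120, 150, 190, 210, 225, 260, 310])
scores      = np.array([30, 45, 60, 70, 85, 90, 95, 88, 86])

# Overall association between essay length and score.
r, p = stats.pearsonr(word_counts, scores)
print(f"Pearson r = {r:.2f} (p = {p:.3f})")

# Descriptive statistics per 50-word band, mirroring the 0-50, 51-100, ... grouping.
bands = (word_counts - 1) // 50          # 45 -> band 0, 120 -> band 2, ...
for band in np.unique(bands):
    band_scores = scores[bands == band]
    print(f"{band * 50 + 1}-{(band + 1) * 50} words: "
          f"mean={band_scores.mean():.1f}, max={band_scores.max()}")
```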
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=English%20proficiency%20exam" title="English proficiency exam">English proficiency exam</a>, <a href="https://publications.waset.org/abstracts/search?q=number%20of%20words" title=" number of words"> number of words</a>, <a href="https://publications.waset.org/abstracts/search?q=scoring" title=" scoring"> scoring</a>, <a href="https://publications.waset.org/abstracts/search?q=writing%20task" title=" writing task"> writing task</a> </p> <a href="https://publications.waset.org/abstracts/136178/determining-the-number-of-words-required-to-fulfil-the-writing-task-in-an-english-proficiency-exam-with-the-raters-scores" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/136178.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">175</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3085</span> Authentic Visual Resources for the Foreign Language Classroom</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=O.%20Yeret">O. Yeret</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Visual resources are all around us, especially in today's media-driven world, which gravitates, more and more, towards the visual. As a result, authentic resources, such as television advertisements, become testaments – authentic cultural materials – that reflect the landscape of certain groups and communities during a specific point in time. Engaging language students with popular advertisements can provide a great opportunity for developing cultural awareness, a component that is sometimes overlooked in the foreign language classroom. This paper will showcase practical examples of using Israeli Television Ads in various Modern Hebrew language courses. Several approaches for combining the study of language and culture, through the use of advertisements, will be included; for example, targeted assignments based on students' proficiency levels, such as: asking to recognize vocabulary words and answer basic information questions, as opposed to commenting on the significance of an ad and analyzing its particular cultural elements. The use of visual resources in the language classroom does not only enable students to learn more about the culture of the target language, but also to combine their language skills. Most often, interacting with an ad requires close listening and some reading (through captions or other data). As students analyze the ad, they employ their writing and speaking skills by answering questions in text or audio form. Hence, these interactions are able to elicit complex language use across the four domains: listening, speaking, writing, and reading. This paper will include examples of practical assignments that were developed for several Modern Hebrew language courses, together with the specific advertisements and questions related to them. Conclusions from the process and recent feedback notes received from students regarding the use of visual resources will be mentioned as well. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=authentic%20materials" title="authentic materials">authentic materials</a>, <a href="https://publications.waset.org/abstracts/search?q=cultural%20awareness" title=" cultural awareness"> cultural awareness</a>, <a href="https://publications.waset.org/abstracts/search?q=second%20language%20acquisition" title=" second language acquisition"> second language acquisition</a>, <a href="https://publications.waset.org/abstracts/search?q=visual%20resources" title=" visual resources"> visual resources</a> </p> <a href="https://publications.waset.org/abstracts/124438/authentic-visual-resources-for-the-foreign-language-classroom" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/124438.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">108</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3084</span> The Importance of Visual Communication in Artificial Intelligence</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Manjitsingh%20Rajput">Manjitsingh Rajput</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Visual communication plays an important role in artificial intelligence (AI) because it enables machines to understand and interpret visual information, similar to how humans do. This abstract explores the importance of visual communication in AI and emphasizes the importance of various applications such as computer vision, object emphasis recognition, image classification and autonomous systems. In going deeper, with deep learning techniques and neural networks that modify visual understanding, In addition to AI programming, the abstract discusses challenges facing visual interfaces for AI, such as data scarcity, domain optimization, and interpretability. Visual communication and other approaches, such as natural language processing and speech recognition, have also been explored. Overall, this abstract highlights the critical role that visual communication plays in advancing AI capabilities and enabling machines to perceive and understand the world around them. The abstract also explores the integration of visual communication with other modalities like natural language processing and speech recognition, emphasizing the critical role of visual communication in AI capabilities. This methodology explores the importance of visual communication in AI development and implementation, highlighting its potential to enhance the effectiveness and accessibility of AI systems. It provides a comprehensive approach to integrating visual elements into AI systems, making them more user-friendly and efficient. In conclusion, Visual communication is crucial in AI systems for object recognition, facial analysis, and augmented reality, but challenges like data quality, interpretability, and ethics must be addressed. Visual communication enhances user experience, decision-making, accessibility, and collaboration. Developers can integrate visual elements for efficient and accessible AI systems. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=visual%20communication%20AI" title="visual communication AI">visual communication AI</a>, <a href="https://publications.waset.org/abstracts/search?q=computer%20vision" title=" computer vision"> computer vision</a>, <a href="https://publications.waset.org/abstracts/search?q=visual%20aid%20in%20communication" title=" visual aid in communication"> visual aid in communication</a>, <a href="https://publications.waset.org/abstracts/search?q=essence%20of%20visual%20communication." title=" essence of visual communication."> essence of visual communication.</a> </p> <a href="https://publications.waset.org/abstracts/174998/the-importance-of-visual-communication-in-artificial-intelligence" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/174998.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">95</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3083</span> A Comparison of Anger State and Trait Anger Among Adolescents with and without Visual Impairment</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Sehmus%20Aslan">Sehmus Aslan</a>, <a href="https://publications.waset.org/abstracts/search?q=Sibel%20Karacaoglu"> Sibel Karacaoglu</a>, <a href="https://publications.waset.org/abstracts/search?q=Cengiz%20Sevgin"> Cengiz Sevgin</a>, <a href="https://publications.waset.org/abstracts/search?q=Ummuhan%20Bas%20Aslan"> Ummuhan Bas Aslan</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Objective: Anger expression style is an important moderator of the effects on the person and person’s environment. Anger and anger expression have become important constructs in identifying individuals at high risk for psychological difficulties. To our knowledge, there is no information about anger and anger expression of adolescents with visual impairment. The aim of this study was to compare anger and anger expression among adolescents with and without visual impairment. Methods: Thirty-eight adolescents with visual impairment (18 female, 20 male) and 44 adolescents without visual impairment (22 female, 24 male), in totally 84 adolescents aged between 12 to 15 years, participated in the study. Anger and anger expression of the participants assessed with The State-Trait Anger Scale (STAS). STAS, a self-report questionnaire, is designed to measure the experience and expression of anger. STAS has four subtitles including continuous anger, anger in, anger out and anger control. Reliability and validity of the STAS have been well established among adolescents. Mann-Whitney U Test was used for statistical analysis. Results: No significant differences were found in the scores of continuous anger and anger out between adolescents with and without visual impairment (p < 0.05). On the other hand, there were differences in scores of anger control and anger in between adolescents with and without visual impairment (p>0.05). The score of anger control in adolescents with visual impairment were higher compared with adolescents without visual impairment. Meanwhile, the adolescents with visual impairment had lower score for anger in compared with adolescents without visual impairment. 
Conclusions: The results suggest that there is no difference in anger level between adolescents with and without visual impairment, while there is a difference in anger expression.
Keywords: adolescent, anger, impaired, visual

3082. Visual Improvement with Low Vision Aids in Children with Stargardt's Disease
Authors: Anum Akhter, Sumaira Altaf
Abstract: Purpose: To study the effect of low vision devices, i.e. telescopes and magnifying glasses, on the distance and near visual acuity of children with Stargardt's disease. Setting: Low vision department, Alshifa Trust Eye Hospital, Rawalpindi, Pakistan. Methods: 52 children with Stargardt's disease, all diagnosed by pediatric ophthalmologists, were included in the study. A comprehensive low vision assessment was performed in the low vision clinic. Visual acuity was measured using the ETDRS chart, and refraction and other supplementary tests were performed. The children were provided with different telescopes and magnifying glasses to improve far and near vision. Results: Of the 52 children, 17 were male and 35 were female. Distance and near visual acuity improved significantly with the low vision aid trial: all children achieved visual acuity better than 6/19 with a telescope of higher magnification, and improvement in near visual acuity was also significant with the magnifying glass trial. Conclusions: Low vision aids are useful for improving visual acuity in children. Children with Stargardt's disease who have problems in education and daily life activities can benefit from low vision aids.
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=Stargardt" title="Stargardt">Stargardt</a>, <a href="https://publications.waset.org/abstracts/search?q=s%20disease" title="s disease">s disease</a>, <a href="https://publications.waset.org/abstracts/search?q=low%20vision%20aids" title=" low vision aids"> low vision aids</a>, <a href="https://publications.waset.org/abstracts/search?q=telescope" title=" telescope"> telescope</a>, <a href="https://publications.waset.org/abstracts/search?q=magnifiers" title=" magnifiers"> magnifiers</a> </p> <a href="https://publications.waset.org/abstracts/24382/visual-improvement-with-low-vision-aids-in-children-with-stargardts-disease" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/24382.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">538</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3081</span> Social-Cognitive Aspects of Interpretation: Didactic Approaches in Language Processing and English as a Second Language Difficulties in Dyslexia</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Schnell%20Zsuzsanna">Schnell Zsuzsanna</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Background: The interpretation of written texts, language processing in the visual domain, in other words, atypical reading abilities, also known as dyslexia, is an ever-growing phenomenon in today’s societies and educational communities. The much-researched problem affects cognitive abilities and, coupled with normal intelligence normally manifests difficulties in the differentiation of sounds and orthography and in the holistic processing of written words. The factors of susceptibility are varied: social, cognitive psychological, and linguistic factors interact with each other. Methods: The research will explain the psycholinguistics of dyslexia on the basis of several empirical experiments and demonstrate how domain-general abilities of inhibition, retrieval from the mental lexicon, priming, phonological processing, and visual modality transfer affect successful language processing and interpretation. Interpretation of visual stimuli is hindered, and the problem seems to be embedded in a sociocultural, psycholinguistic, and cognitive background. This makes the picture even more complex, suggesting that the understanding and resolving of the issues of dyslexia has to be interdisciplinary, aided by several disciplines in the field of humanities and social sciences, and should be researched from an empirical approach, where the practical, educational corollaries can be analyzed on an applied basis. Aim and applicability: The lecture sheds light on the applied, cognitive aspects of interpretation, social cognitive traits of language processing, the mental underpinnings of cognitive interpretation strategies in different languages (namely, Hungarian and English), offering solutions with a few applied techniques for success in foreign language learning that can be useful advice for the developers of testing methodologies and measures across ESL teaching and testing platforms. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=dyslexia" title="dyslexia">dyslexia</a>, <a href="https://publications.waset.org/abstracts/search?q=social%20cognition" title=" social cognition"> social cognition</a>, <a href="https://publications.waset.org/abstracts/search?q=transparency" title=" transparency"> transparency</a>, <a href="https://publications.waset.org/abstracts/search?q=modalities" title=" modalities"> modalities</a> </p> <a href="https://publications.waset.org/abstracts/165654/social-cognitive-aspects-of-interpretation-didactic-approaches-in-language-processing-and-english-as-a-second-language-difficulties-in-dyslexia" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/165654.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">84</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3080</span> Pudhaiyal: A Maze-Based Treasure Hunt Game for Tamil Words</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Aarthy%20Anandan">Aarthy Anandan</a>, <a href="https://publications.waset.org/abstracts/search?q=Anitha%20Narasimhan"> Anitha Narasimhan</a>, <a href="https://publications.waset.org/abstracts/search?q=Madhan%20Karky"> Madhan Karky</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Word-based games are popular in helping people to improve their vocabulary skills. Games like ‘word search’ and crosswords provide a smart way of increasing vocabulary skills. Word search games are fun to play, but also educational which actually helps to learn a language. Finding the words from word search puzzle helps the player to remember words in an easier way, and it also helps to learn the spellings of words. In this paper, we present a tile distribution algorithm for a Maze-Based Treasure Hunt Game 'Pudhaiyal’ for Tamil words, which describes how words can be distributed horizontally, vertically or diagonally in a 10 x 10 grid. Along with the tile distribution algorithm, we also present an algorithm for the scoring model of the game. The proposed game has been tested with 20,000 Tamil words. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=Pudhaiyal" title="Pudhaiyal">Pudhaiyal</a>, <a href="https://publications.waset.org/abstracts/search?q=Tamil%20word%20game" title=" Tamil word game"> Tamil word game</a>, <a href="https://publications.waset.org/abstracts/search?q=word%20search" title=" word search"> word search</a>, <a href="https://publications.waset.org/abstracts/search?q=scoring" title=" scoring"> scoring</a>, <a href="https://publications.waset.org/abstracts/search?q=maze" title=" maze"> maze</a>, <a href="https://publications.waset.org/abstracts/search?q=algorithm" title=" algorithm"> algorithm</a> </p> <a href="https://publications.waset.org/abstracts/81334/pudhaiyal-a-maze-based-treasure-hunt-game-for-tamil-words" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/81334.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">440</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3079</span> Aspects of Semiotics in Contemporary Design: A Case Study on Dice Brand</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Laila%20Zahran%20Mohammed%20Alsibani">Laila Zahran Mohammed Alsibani</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The aim of the research is to understand the aspects of semiotics in contemporary designs by redesigning an Omani donut brand with localized cultural identity. To do so, visual identity samples of Dice brand of donuts in Oman has been selected to be a case study. This study conducted based on semiotic theory by using mixed method research tools which are: documentation analysis, interview and survey. The literature review concentrates on key areas of semiotics in visual elements used in the brand designs. Also, it spotlights on the categories of semiotics in visual design. In addition, this research explores the visual cues in brand identity. The objectives of the research are to investigate the aspects of semiotics in providing meaning to visual cues and to identify visual cues for each visual element. It is hoped that this study will have the contribution to a better understanding of the different ways of using semiotics in contemporary designs. Moreover, this research can be a review of further studies in understanding and explaining current and future design trends. Future research can also focus on how brand-related signs are perceived by consumers. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=brands" title="brands">brands</a>, <a href="https://publications.waset.org/abstracts/search?q=semiotics" title=" semiotics"> semiotics</a>, <a href="https://publications.waset.org/abstracts/search?q=visual%20arts" title=" visual arts"> visual arts</a>, <a href="https://publications.waset.org/abstracts/search?q=visual%20communication" title=" visual communication"> visual communication</a> </p> <a href="https://publications.waset.org/abstracts/158274/aspects-of-semiotics-in-contemporary-design-a-case-study-on-dice-brand" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/158274.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">159</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3078</span> Constellating Images: Bilderatlases as a Tool to Develop Criticality towards Visual Culture</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Quirijn%20Menken">Quirijn Menken</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Menken, Q. Author  Constellating Images Abstract—We live in a predominantly visual era. Vastly expanded quantities of imagery influence us on a daily basis, in contrast to earlier days where the textual prevailed. The increasing producing and reproducing of images continuously compete for our attention. As such, how we perceive images and in what way images are framed or mediate our beliefs, has become of even greater importance than ever before. Especially in art education a critical awareness and approach of images as part of visual culture is of utmost importance. The Bilderatlas operates as a mediation, and offers new Ways of Seeing and knowing. It is mainly known as result of the ground-breaking work of the cultural theorist Aby Warburg, who intended to present an art history without words. His Mnemosyne Bilderatlas shows how the arrangement of images - and the interstices between them, offers new perspectives and ways of seeing. The Atlas as a medium to critically address Visual Culture is also practiced by the German artist Gerhard Richter, and it is in written form used in the Passagen Werk of Walter Benjamin. In order to examine the use of the Bilderatlas as a tool in art education, several experiments with art students have been conducted. These experiments have lead to an exploration of different Pedagogies, which help to offer new perspectives and trajectories of learning. To use the Bilderatlas as a tool to develop criticality towards Visual Culture, I developed and tested a new pedagogy; a Pedagogy of Difference and Repetition, based on the philosophy of Gilles Deleuze. Furthermore, in offering a new pedagogy - based on the rhizomatic work of Gilles Deleuze – the Bilderatlas as a tool to develop criticality has found a firm basis. 
Keywords: Art Education, Bilderatlas, Pedagogy, Aby Warburg

3077. Development of Visual Element Design Guidelines for Consumer Products Based on User Characteristics
Authors: Taezoon Park, Wonil Hwang
Abstract: This study aims to build design guidelines for the effective visual displays used in consumer products, taking into account the user characteristics of gender and age. Although a number of basic experiments have identified the limits of human visual perception, the findings remain fragmented and are often presented in an unfriendly form. This study compiled design cases along with tables aggregated from experimental results on visual perception: brightness/contrast, useful field of view, and color sensitivity. Visual design elements commonly used in consumer products were selected, and appropriate guidelines were developed based on the experimental results. Since the provided data, together with the case examples, suggests a feasible design space, it will save product designers time in finding appropriate design alternatives.
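Guidelines of this kind are easiest to apply when the perceptual thresholds are encoded as programmatic checks a designer can run against candidate designs. As a hedged illustration (not taken from the study), the sketch below uses the WCAG relative-luminance contrast formula as a stand-in for a brightness/contrast guideline, with the threshold treated as a parameter that could, for example, be tightened for older user groups.

```python
def relative_luminance(rgb):
    """WCAG relative luminance of an sRGB colour given as 0-255 integers."""
    def channel(c):
        c /= 255.0
        return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4
    r, g, b = (channel(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(fg, bg):
    """Contrast ratio between two colours (1:1 worst, 21:1 best)."""
    lighter, darker = sorted((relative_luminance(fg), relative_luminance(bg)),
                             reverse=True)
    return (lighter + 0.05) / (darker + 0.05)

def passes_guideline(fg, bg, threshold=4.5):
    """Hypothetical check: require at least `threshold`:1 contrast."""
    return contrast_ratio(fg, bg) >= threshold

print(round(contrast_ratio((90, 90, 90), (255, 255, 255)), 1))          # ~6.9
print(passes_guideline((90, 90, 90), (255, 255, 255), threshold=7.0))   # False
```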
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=design%20guideline" title="design guideline">design guideline</a>, <a href="https://publications.waset.org/abstracts/search?q=consumer%20product" title=" consumer product"> consumer product</a>, <a href="https://publications.waset.org/abstracts/search?q=visual%20design%20element" title=" visual design element"> visual design element</a>, <a href="https://publications.waset.org/abstracts/search?q=visual%20perception" title=" visual perception"> visual perception</a>, <a href="https://publications.waset.org/abstracts/search?q=emotional%20design" title=" emotional design"> emotional design</a> </p> <a href="https://publications.waset.org/abstracts/55080/development-of-visual-element-design-guidelines-for-consumer-products-based-on-user-characteristics" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/55080.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">372</span> </span> </div> </div> <ul class="pagination"> <li class="page-item disabled"><span class="page-link">&lsaquo;</span></li> <li class="page-item active"><span class="page-link">1</span></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=Bag%20of%20Visual%20Words%20%28BOVW%29&amp;page=2">2</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=Bag%20of%20Visual%20Words%20%28BOVW%29&amp;page=3">3</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=Bag%20of%20Visual%20Words%20%28BOVW%29&amp;page=4">4</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=Bag%20of%20Visual%20Words%20%28BOVW%29&amp;page=5">5</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=Bag%20of%20Visual%20Words%20%28BOVW%29&amp;page=6">6</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=Bag%20of%20Visual%20Words%20%28BOVW%29&amp;page=7">7</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=Bag%20of%20Visual%20Words%20%28BOVW%29&amp;page=8">8</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=Bag%20of%20Visual%20Words%20%28BOVW%29&amp;page=9">9</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=Bag%20of%20Visual%20Words%20%28BOVW%29&amp;page=10">10</a></li> <li class="page-item disabled"><span class="page-link">...</span></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=Bag%20of%20Visual%20Words%20%28BOVW%29&amp;page=103">103</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=Bag%20of%20Visual%20Words%20%28BOVW%29&amp;page=104">104</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=Bag%20of%20Visual%20Words%20%28BOVW%29&amp;page=2" rel="next">&rsaquo;</a></li> </ul> </div> </main> <footer> <div id="infolinks" class="pt-3 pb-2"> <div class="container"> <div style="background-color:#f5f5f5;" class="p-3"> <div class="row"> <div class="col-md-2"> <ul 
class="list-unstyled"> About <li><a href="https://waset.org/page/support">About Us</a></li> <li><a href="https://waset.org/page/support#legal-information">Legal</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/WASET-16th-foundational-anniversary.pdf">WASET celebrates its 16th foundational anniversary</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Account <li><a href="https://waset.org/profile">My Account</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Explore <li><a href="https://waset.org/disciplines">Disciplines</a></li> <li><a href="https://waset.org/conferences">Conferences</a></li> <li><a href="https://waset.org/conference-programs">Conference Program</a></li> <li><a href="https://waset.org/committees">Committees</a></li> <li><a href="https://publications.waset.org">Publications</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Research <li><a href="https://publications.waset.org/abstracts">Abstracts</a></li> <li><a href="https://publications.waset.org">Periodicals</a></li> <li><a href="https://publications.waset.org/archive">Archive</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Open Science <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Science-Philosophy.pdf">Open Science Philosophy</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Science-Award.pdf">Open Science Award</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Society-Open-Science-and-Open-Innovation.pdf">Open Innovation</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Postdoctoral-Fellowship-Award.pdf">Postdoctoral Fellowship Award</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Scholarly-Research-Review.pdf">Scholarly Research Review</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Support <li><a href="https://waset.org/page/support">Support</a></li> <li><a href="https://waset.org/profile/messages/create">Contact Us</a></li> <li><a href="https://waset.org/profile/messages/create">Report Abuse</a></li> </ul> </div> </div> </div> </div> </div> <div class="container text-center"> <hr style="margin-top:0;margin-bottom:.3rem;"> <a href="https://creativecommons.org/licenses/by/4.0/" target="_blank" class="text-muted small">Creative Commons Attribution 4.0 International License</a> <div id="copy" class="mt-2">&copy; 2024 World Academy of Science, Engineering and Technology</div> </div> </footer> <a href="javascript:" id="return-to-top"><i class="fas fa-arrow-up"></i></a> <div class="modal" id="modal-template"> <div class="modal-dialog"> <div class="modal-content"> <div class="row m-0 mt-1"> <div class="col-md-12"> <button type="button" class="close" data-dismiss="modal" aria-label="Close"><span aria-hidden="true">&times;</span></button> </div> </div> <div class="modal-body"></div> </div> </div> </div> <script src="https://cdn.waset.org/static/plugins/jquery-3.3.1.min.js"></script> <script src="https://cdn.waset.org/static/plugins/bootstrap-4.2.1/js/bootstrap.bundle.min.js"></script> <script src="https://cdn.waset.org/static/js/site.js?v=150220211556"></script> <script> jQuery(document).ready(function() { /*jQuery.get("https://publications.waset.org/xhr/user-menu", function (response) { 
jQuery('#mainNavMenu').append(response); });*/ jQuery.get({ url: "https://publications.waset.org/xhr/user-menu", cache: false }).then(function(response){ jQuery('#mainNavMenu').append(response); }); }); </script> </body> </html>

Pages: 1 2 3 4 5 6 7 8 9 10