href="https://publications.waset.org/abstracts">Abstracts</a> <a class="dropdown-item" href="https://publications.waset.org">Periodicals</a> <a class="dropdown-item" href="https://publications.waset.org/archive">Archive</a> </div> </li> <li class="nav-item"> <a class="nav-link" href="https://waset.org/page/support" title="Support">Support</a> </li> </ul> </div> </div> </nav> </div> </header> <main> <div class="container mt-4"> <div class="row"> <div class="col-md-9 mx-auto"> <form method="get" action="https://publications.waset.org/abstracts/search"> <div id="custom-search-input"> <div class="input-group"> <i class="fas fa-search"></i> <input type="text" class="search-query" name="q" placeholder="Author, Title, Abstract, Keywords" value="visual memory"> <input type="submit" class="btn_search" value="Search"> </div> </div> </form> </div> </div> <div class="row mt-3"> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Commenced</strong> in January 2007</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Frequency:</strong> Monthly</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Edition:</strong> International</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Paper Count:</strong> 2972</div> </div> </div> </div> <h1 class="mt-3 mb-3 text-center" style="font-size:1.6rem;">Search results for: visual memory</h1> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2972</span> The Involvement of Visual and Verbal Representations Within a Quantitative and Qualitative Visual Change Detection Paradigm</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Laura%20Jenkins">Laura Jenkins</a>, <a href="https://publications.waset.org/abstracts/search?q=Tim%20Eschle"> Tim Eschle</a>, <a href="https://publications.waset.org/abstracts/search?q=Joanne%20Ciafone"> Joanne Ciafone</a>, <a href="https://publications.waset.org/abstracts/search?q=Colin%20Hamilton"> Colin Hamilton</a> </p> <p class="card-text"><strong>Abstract:</strong></p> An original working memory model suggested the separation of visual and verbal systems in working memory architecture, in which only visual working memory components were used during visual working memory tasks. It was later suggested that the visuo spatial sketch pad was the only memory component at use during visual working memory tasks, and components such as the phonological loop were not considered. In more recent years, a contrasting approach has been developed with the use of an executive resource to incorporate both visual and verbal representations in visual working memory paradigms. This was supported using research demonstrating the use of verbal representations and an executive resource in a visual matrix patterns task. The aim of the current research is to investigate the working memory architecture during both a quantitative and a qualitative visual working memory task. A dual task method will be used. Three secondary tasks will be used which are designed to hit specific components within the working memory architecture – Dynamic Visual Noise (visual components), Visual Attention (spatial components) and Verbal Attention (verbal components). 
A comparison of the visual working memory tasks will be made to discover if verbal representations are at use, as the previous literature suggested. This direct comparison has not been made so far in the literature. Considerations will be made as to whether a domain specific approach should be employed when discussing visual working memory tasks, or whether a more domain general approach could be used instead. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=semantic%20organisation" title="semantic organisation">semantic organisation</a>, <a href="https://publications.waset.org/abstracts/search?q=visual%20memory" title=" visual memory"> visual memory</a>, <a href="https://publications.waset.org/abstracts/search?q=change%20detection" title=" change detection"> change detection</a> </p> <a href="https://publications.waset.org/abstracts/22696/the-involvement-of-visual-and-verbal-representations-within-a-quantitative-and-qualitative-visual-change-detection-paradigm" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/22696.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">595</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2971</span> Effects of the Visual and Auditory Stimuli with Emotional Content on Eyewitness Testimony</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=%C4%B0rem%20Bulut">İrem Bulut</a>, <a href="https://publications.waset.org/abstracts/search?q=Mustafa%20Z.%20%20S%C3%B6y%C3%BCk"> Mustafa Z. Söyük</a>, <a href="https://publications.waset.org/abstracts/search?q=Ertu%C4%9Frul%20Yal%C3%A7%C4%B1n"> Ertuğrul Yalçın</a>, <a href="https://publications.waset.org/abstracts/search?q=Simge%20%C5%9Ei%C5%9Fman-Bal"> Simge Şişman-Bal</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Eyewitness testimony is one of the most frequently used methods in criminal cases for the determination of crime and perpetrator. In the literature, the number of studies about the reliability of eyewitness testimony is increasing. The study aims to reveal the factors that affect the short-term and long-term visual memory performance of the participants in the event of an accident. In this context, the effect of the emotional content of the accident and the sounds during the accident on visual memory performance was investigated with eye-tracking. According to the results, the presence of visual and auditory stimuli with emotional content during the accident decreases the participants' both short-term and long-term recall performance. Moreover, the data obtained from the eye monitoring device showed that the participants had difficulty in answering even the questions they focused on at the time of the accident. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=eye%20tracking" title="eye tracking">eye tracking</a>, <a href="https://publications.waset.org/abstracts/search?q=eyewitness%20testimony" title=" eyewitness testimony"> eyewitness testimony</a>, <a href="https://publications.waset.org/abstracts/search?q=long-term%20recall" title=" long-term recall"> long-term recall</a>, <a href="https://publications.waset.org/abstracts/search?q=short-term%20recall" title=" short-term recall"> short-term recall</a>, <a href="https://publications.waset.org/abstracts/search?q=visual%20memory" title=" visual memory"> visual memory</a> </p> <a href="https://publications.waset.org/abstracts/115650/effects-of-the-visual-and-auditory-stimuli-with-emotional-content-on-eyewitness-testimony" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/115650.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">162</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2970</span> Visual Working Memory, Reading Abilities, and Vocabulary in Mexican Deaf Signers</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=A.%20Mondaca">A. Mondaca</a>, <a href="https://publications.waset.org/abstracts/search?q=E.%20Mendoza"> E. Mendoza</a>, <a href="https://publications.waset.org/abstracts/search?q=D.%20Jackson-Maldonado"> D. Jackson-Maldonado</a>, <a href="https://publications.waset.org/abstracts/search?q=A.%20Garc%C3%ADa-Obreg%C3%B3n"> A. García-Obregón</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Deaf signers usually show lower scores in Auditory Working Memory (AWM) tasks and higher scores in Visual Working Memory (VWM) tasks than their hearing pairs. Further, Working Memory has been correlated with reading abilities and vocabulary in Deaf and Hearing individuals. The aim of the present study is to compare the performance of Mexican Deaf signers and hearing adults in VWM, reading and Vocabulary tasks and observe if the latter are correlated to the former. 15 Mexican Deaf signers were assessed using the Corsi block test for VWM, four different subtests of PROLEC (Batería de Evaluación de los Procesos Lectores) for reading abilities, and the LexTale in its Spanish version for vocabulary. T-tests show significant differences between groups for VWM and Vocabulary but not for all the PROLEC subtests. A significant Pearson correlation was found between VWM and Vocabulary but not between VWM and reading abilities. This work is part of a larger research study and results are not yet conclusive. A discussion about the use of PROLEC as a tool to explore reading abilities in a Deaf population is included. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=deaf%20signers" title="deaf signers">deaf signers</a>, <a href="https://publications.waset.org/abstracts/search?q=visual%20working%20memory" title=" visual working memory"> visual working memory</a>, <a href="https://publications.waset.org/abstracts/search?q=reading" title=" reading"> reading</a>, <a href="https://publications.waset.org/abstracts/search?q=Mexican%20sign%20language" title=" Mexican sign language"> Mexican sign language</a> </p> <a href="https://publications.waset.org/abstracts/147842/visual-working-memory-reading-abilities-and-vocabulary-in-mexican-deaf-signers" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/147842.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">168</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2969</span> Association of Sensory Processing and Cognitive Deficits in Children with Autism Spectrum Disorders – Pioneer Study in Saudi Arabia</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Rana%20Zeina">Rana Zeina</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Objective: The association between Sensory problems and cognitive abilities has been studied in individuals with Autism Spectrum Disorders (ASDs). In this study, we used a neuropsychological test to evaluate memory and attention in ASDs children with sensory problems compared to the ASDs children without sensory problems. Methods: Four visual memory tests of Cambridge Neuropsychological Test Automated Battery (CANTAB) including Big/Little Circle (BLC), Simple Reaction Time (SRT), Intra/Extra Dimensional Set Shift (IED), Spatial Recognition Memory (SRM), were administered to 14 ASDs children with sensory problems compared to 13 ASDs without sensory problems aged 3 to 12 with IQ of above 70. Results: ASDs Individuals with sensory problems performed worse than the ASDs group without sensory problems on comprehension, learning, reversal and simple reaction time tasks, and no significant difference between the two groups was recorded in terms of the visual memory and visual comprehension tasks. Conclusion: The findings of this study suggest that ASDs children with sensory problems are facing deficits in learning, comprehension, reversal, and speed of response to stimuli. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=visual%20memory" title="visual memory">visual memory</a>, <a href="https://publications.waset.org/abstracts/search?q=attention" title=" attention"> attention</a>, <a href="https://publications.waset.org/abstracts/search?q=autism%20spectrum%20disorders" title=" autism spectrum disorders"> autism spectrum disorders</a>, <a href="https://publications.waset.org/abstracts/search?q=CANTAB%20eclipse" title=" CANTAB eclipse"> CANTAB eclipse</a> </p> <a href="https://publications.waset.org/abstracts/6386/association-of-sensory-processing-and-cognitive-deficits-in-children-with-autism-spectrum-disorders-pioneer-study-in-saudi-arabia" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/6386.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">450</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2968</span> Memory Based Reinforcement Learning with Transformers for Long Horizon Timescales and Continuous Action Spaces</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Shweta%20Singh">Shweta Singh</a>, <a href="https://publications.waset.org/abstracts/search?q=Sudaman%20Katti"> Sudaman Katti</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The most well-known sequence models make use of complex recurrent neural networks in an encoder-decoder configuration. The model used in this research makes use of a transformer, which is based purely on a self-attention mechanism, without relying on recurrence at all. More specifically, encoders and decoders which make use of self-attention and operate based on a memory, are used. In this research work, results for various 3D visual and non-visual reinforcement learning tasks designed in Unity software were obtained. Convolutional neural networks, more specifically, nature CNN architecture, are used for input processing in visual tasks, and comparison with standard long short-term memory (LSTM) architecture is performed for both visual tasks based on CNNs and non-visual tasks based on coordinate inputs. This research work combines the transformer architecture with the proximal policy optimization technique used popularly in reinforcement learning for stability and better policy updates while training, especially for continuous action spaces, which are used in this research work. Certain tasks in this paper are long horizon tasks that carry on for a longer duration and require extensive use of memory-based functionalities like storage of experiences and choosing appropriate actions based on recall. The transformer, which makes use of memory and self-attention mechanism in an encoder-decoder configuration proved to have better performance when compared to LSTM in terms of exploration and rewards achieved. Such memory based architectures can be used extensively in the field of cognitive robotics and reinforcement learning. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=convolutional%20neural%20networks" title="convolutional neural networks">convolutional neural networks</a>, <a href="https://publications.waset.org/abstracts/search?q=reinforcement%20learning" title=" reinforcement learning"> reinforcement learning</a>, <a href="https://publications.waset.org/abstracts/search?q=self-attention" title=" self-attention"> self-attention</a>, <a href="https://publications.waset.org/abstracts/search?q=transformers" title=" transformers"> transformers</a>, <a href="https://publications.waset.org/abstracts/search?q=unity" title=" unity"> unity</a> </p> <a href="https://publications.waset.org/abstracts/163301/memory-based-reinforcement-learning-with-transformers-for-long-horizon-timescales-and-continuous-action-spaces" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/163301.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">136</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2967</span> Selective Effect of Occipital Alpha Transcranial Alternating Current Stimulation in Perception and Working Memory</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Andreina%20Giustiniani">Andreina Giustiniani</a>, <a href="https://publications.waset.org/abstracts/search?q=Massimiliano%20Oliveri"> Massimiliano Oliveri</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Rhythmic activity in different frequencies could subserve distinct functional roles during visual perception and visual mental imagery. In particular, alpha band activity is thought to play a role in active inhibition of both task-irrelevant regions and processing of non-relevant information. In the present blind placebo-controlled study we applied alpha transcranial alternating current stimulation (tACS) in the occipital cortex both during a basic visual perception and a visual working memory task. To understand if the role of alpha is more related to a general inhibition of distractors or to an inhibition of task-irrelevant regions, we added a non visual distraction to both the tasks.Sixteen adult volunteers performed both a simple perception and a working memory task during 10 Hz tACS. The electrodes were placed over the left and right occipital cortex, the current intensity was 1 mA peak-to-baseline. Sham stimulation was chosen as control condition and in order to elicit the skin sensation similar to the real stimulation, electrical stimulation was applied for short periods (30 s) at the beginning of the session and then turned off. The tasks were split in two sets, in one set distracters were included and in the other set, there were no distracters. Motor interference was added by changing the answer key after subjects completed the first set of trials.The results show that alpha tACS improves working memory only when no motor distracters are added, suggesting a role of alpha tACS in inhibiting non-relevant regions rather than in a general inhibition of distractors. Additionally, we found that alpha tACS does not affect accuracy and hit rates during the visual perception task. 
These results suggest that alpha activity in the occipital cortex plays a different role in perception and working memory and it could optimize performance in tasks in which attention is internally directed, as in this working memory paradigm, but only when there is not motor distraction. Moreover, alpha tACS improves working memory performance by means of inhibition of task-irrelevant regions while it does not affect perception. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=alpha%20activity" title="alpha activity">alpha activity</a>, <a href="https://publications.waset.org/abstracts/search?q=interference" title=" interference"> interference</a>, <a href="https://publications.waset.org/abstracts/search?q=perception" title=" perception"> perception</a>, <a href="https://publications.waset.org/abstracts/search?q=working%20memory" title=" working memory"> working memory</a> </p> <a href="https://publications.waset.org/abstracts/76939/selective-effect-of-occipital-alpha-transcranial-alternating-current-stimulation-in-perception-and-working-memory" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/76939.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">256</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2966</span> Effects of Aging on Auditory and Visual Recall Abilities</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Rashmi%20D.%20G.">Rashmi D. G.</a>, <a href="https://publications.waset.org/abstracts/search?q=Aishwarya%20G."> Aishwarya G.</a>, <a href="https://publications.waset.org/abstracts/search?q=Niharika%20M.%20K."> Niharika M. K.</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Purpose: Free recall tasks target cognitive and linguistic processes like episodic memory, lexical access and retrieval. Consequently, the free recall paradigm is suitable for assessing memory deterioration caused by aging; this also depends on linguistic factors, including the use of first and second languages and their relative ability. Hence, the present study aimed to determine if aging has an effect on visual and auditory recall abilities. Method: Twenty young adults (mean age: 25.4±0.99) and older adults (mean age: 63.3±3.51) participated in the study. Participants performed a free recall task under two conditions – related and unrelated and two modalities - visual and auditory where they were instructed to recall as many items as possible with no specific order and time limit. Results: Free recall performance was calculated as the mean number of correctly recalled items. Although younger participants recalled a higher number of items, the performance across conditions and modality was variable. Conclusion: In summary, the findings of the present study revealed an age-related decline in the efficiency of episodic memory, which is crucial to remember recent events. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=recall" title="recall">recall</a>, <a href="https://publications.waset.org/abstracts/search?q=episodic%20memory" title=" episodic memory"> episodic memory</a>, <a href="https://publications.waset.org/abstracts/search?q=aging" title=" aging"> aging</a>, <a href="https://publications.waset.org/abstracts/search?q=modality" title=" modality"> modality</a> </p> <a href="https://publications.waset.org/abstracts/163572/effects-of-aging-on-auditory-and-visual-recall-abilities" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/163572.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">95</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2965</span> To Estimate the Association between Visual Stress and Visual Perceptual Skills</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Vijay%20Reena%20Durai">Vijay Reena Durai</a>, <a href="https://publications.waset.org/abstracts/search?q=Krithica%20Srinivasan"> Krithica Srinivasan</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Introduction: The two fundamental skills involved in the growth and wellbeing of any child can be categorized into visual motor and perceptual skills. Visual stress is a disorder which is characterized by visual discomfort, blurred vision, misspelling words, skipping lines, letters bunching together. There is a need to understand the deficits in perceptual skills among children with visual stress. Aim: To estimate the association between visual stress and visual perceptual skills Objective: To compare visual perceptual skills of children with and without visual stress Methodology: Children between 8 to 15 years of age participated in this cross-sectional study. All children with monocular visual acuity better than or equal to 6/6 were included. Visual perceptual skills were measured using test for visual perceptual skills (TVPS) tool. Reading speed was measured with the chosen colored overlay using Wilkins reading chart and pattern glare score was estimated using a 3cpd gratings. Visual stress was defined as change in reading speed of greater than or equal to 10% and a pattern glare score of greater than or equal to 4. Results: 252 children participated in this study and the male: female ratio of 3:2. Majority of the children preferred Magenta (28%) and Yellow (25%) colored overlay for reading. There was a significant difference between the two groups (MD=1.24±0.6) (p<0.04, 95% CI 0.01-2.43) only in the sequential memory skills. The prevalence of visual stress in this group was found to be 31% (n=78). Binary logistic regression showed that odds ratio of having poor visual perceptual skills was OR: 2.85 (95% CI 1.08-7.49) among children with visual stress. Conclusion: Children with visual stress are found to have three times poorer visual perceptual skills than children without visual stress. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=visual%20stress" title="visual stress">visual stress</a>, <a href="https://publications.waset.org/abstracts/search?q=visual%20perceptual%20skills" title=" visual perceptual skills"> visual perceptual skills</a>, <a href="https://publications.waset.org/abstracts/search?q=colored%20overlay" title=" colored overlay"> colored overlay</a>, <a href="https://publications.waset.org/abstracts/search?q=pattern%20glare" title=" pattern glare"> pattern glare</a> </p> <a href="https://publications.waset.org/abstracts/41580/to-estimate-the-association-between-visual-stress-and-visual-perceptual-skills" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/41580.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">388</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2964</span> Memory Types in Hemodialysis Patients: A Study Based on Hemodialysis Duration, Zahedan, South East of Iran</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=B.%20Sabayan">B. Sabayan</a>, <a href="https://publications.waset.org/abstracts/search?q=A.%20Alidadi"> A. Alidadi</a>, <a href="https://publications.waset.org/abstracts/search?q=S.%20Ebrahimi"> S. Ebrahimi</a>, <a href="https://publications.waset.org/abstracts/search?q=N.%20M.%20Bakhshani"> N. M. Bakhshani </a> </p> <p class="card-text"><strong>Abstract:</strong></p> Neuropsychological problems are more common in hemodialysis (HD) patients than in healthy individuals. The aim of this study was to investigate the effect of long term HD on memory types of HD patients. To assess the different type of memory, we used memory parts of the Persian Papers and Pencil Cognitive assessment package (PCAP) and Addenbrooke's Cognitive Examination (ACE-R). Our study included 80 HD patients of whom 39 had less than six months of HD and 41 patients and another group which had a history of HD more than six months. The population had a mean age of 51.60 years old and 27.5% of them were female. The scores of patients who have been hemodialyzed for a long time (median time of HD was up to 4 years) had lower score in anterograde, explicit, visual, recall and recognition memory (5.44±1.07, 9.49±3.472, 22.805±6.6913, 5.59±10.435, 11.02±3.190 score) than the HD patients who underwent HD for a shorter term, where the median time was 3 to 5 months (P<0.01). The regression result shows that, by increasing the HD duration, all memory types are reduced (R2=0.600, P<0.01). The present study demonstrated that HD patients who were under HD for a long time had significantly lower scores in the different types of memory. However, additional researches are needed in this area. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=hemodialysis%20patients" title="hemodialysis patients">hemodialysis patients</a>, <a href="https://publications.waset.org/abstracts/search?q=duration%20of%20hemodialysis" title=" duration of hemodialysis"> duration of hemodialysis</a>, <a href="https://publications.waset.org/abstracts/search?q=memory%20types" title=" memory types"> memory types</a>, <a href="https://publications.waset.org/abstracts/search?q=Zahedan" title=" Zahedan"> Zahedan</a> </p> <a href="https://publications.waset.org/abstracts/83159/memory-types-in-hemodialysis-patients-a-study-based-on-hemodialysis-duration-zahedan-south-east-of-iran" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/83159.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">178</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2963</span> Attention and Memory in the Music Learning Process in Individuals with Visual Impairments</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Lana%20Burmistrova">Lana Burmistrova</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Introduction: The influence of visual impairments on several cognitive processes used in the music learning process is an increasingly important area in special education and cognitive musicology. Many children have several visual impairments due to the refractive errors and irreversible inhibitors. However, based on the compensatory neuroplasticity and functional reorganization, congenitally blind (CB) and early blind (EB) individuals use several areas of the occipital lobe to perceive and process auditory and tactile information. CB individuals have greater memory capacity, memory reliability, and less false memory mechanisms are used while executing several tasks, they have better working memory (WM) and short-term memory (STM). Blind individuals use several strategies while executing tactile and working memory n-back tasks: verbalization strategy (mental recall), tactile strategy (tactile recall) and combined strategies. Methods and design: The aim of the pilot study was to substantiate similar tendencies while executing attention, memory and combined auditory tasks in blind and sighted individuals constructed for this study, and to investigate attention, memory and combined mechanisms used in the music learning process. For this study eight (n=8) blind and eight (n=8) sighted individuals aged 13-20 were chosen. All respondents had more than five years music performance and music learning experience. In the attention task, all respondents had to identify pitch changes in tonal and randomized melodic pairs. The memory task was based on the mismatch negativity (MMN) proportion theory: 80 percent standard (not changed) and 20 percent deviant (changed) stimuli (sequences). Every sequence was named (na-na, ra-ra, za-za) and several items (pencil, spoon, tealight) were assigned for each sequence. Respondents had to recall the sequences, to associate them with the item and to detect possible changes. 
While executing the combined task, all respondents had to focus attention on the pitch changes and had to detect and describe these during the recall. Results and conclusion: The results support specific features in CB and EB, and similarities between late blind (LB) and sighted individuals. While executing attention and memory tasks, it was possible to observe the tendency in CB and EB by using more precise execution tactics and usage of more advanced periodic memory, while focusing on auditory and tactile stimuli. While executing memory and combined tasks, CB and EB individuals used passive working memory to recall standard sequences, active working memory to recall deviant sequences and combined strategies. Based on the observation results, assessment of blind respondents and recording specifics, following attention and memory correlations were identified: reflective attention and STM, reflective attention and periodic memory, auditory attention and WM, tactile attention and WM, auditory tactile attention and STM. The results and the summary of findings highlight the attention and memory features used in the music learning process in the context of blindness, and the tendency of the several attention and memory types correlated based on the task, strategy and individual features. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=attention" title="attention">attention</a>, <a href="https://publications.waset.org/abstracts/search?q=blindness" title=" blindness"> blindness</a>, <a href="https://publications.waset.org/abstracts/search?q=memory" title=" memory"> memory</a>, <a href="https://publications.waset.org/abstracts/search?q=music%20learning" title=" music learning"> music learning</a>, <a href="https://publications.waset.org/abstracts/search?q=strategy" title=" strategy"> strategy</a> </p> <a href="https://publications.waset.org/abstracts/95997/attention-and-memory-in-the-music-learning-process-in-individuals-with-visual-impairments" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/95997.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">184</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2962</span> Employing Visual Culture to Enhance Initial Adult Maltese Language Acquisition</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Jacqueline%20%C5%BBammit">Jacqueline Żammit</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Recent research indicates that the utilization of right-brain strategies holds significant implications for the acquisition of language skills. Nevertheless, the utilization of visual culture as a means to stimulate these strategies and amplify language retention among adults engaging in second language (L2) learning remains a relatively unexplored area. This investigation delves into the impact of visual culture on activating right-brain processes during the initial stages of language acquisition, particularly in the context of teaching Maltese as a second language (ML2) to adult learners. 
By employing a qualitative research approach, this study convenes a focus group comprising twenty-seven educators to delve into a range of visual culture techniques integrated within language instruction. The collected data is subjected to thematic analysis using NVivo software. The findings underscore a variety of impactful visual culture techniques, encompassing activities such as drawing, sketching, interactive matching games, orthographic mapping, memory palace strategies, wordless picture books, picture-centered learning methodologies, infographics, Face Memory Game, Spot the Difference, Word Search Puzzles, the Hidden Object Game, educational videos, the Shadow Matching technique, Find the Differences exercises, and color-coded methodologies. These identified techniques hold potential for application within ML2 classes for adult learners. Consequently, this study not only provides insights into optimizing language learning through specific visual culture strategies but also furnishes practical recommendations for enhancing language competencies and skills. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=visual%20culture" title="visual culture">visual culture</a>, <a href="https://publications.waset.org/abstracts/search?q=right-brain%20strategies" title=" right-brain strategies"> right-brain strategies</a>, <a href="https://publications.waset.org/abstracts/search?q=second%20language%20acquisition" title=" second language acquisition"> second language acquisition</a>, <a href="https://publications.waset.org/abstracts/search?q=maltese%20as%20a%20second%20language" title=" maltese as a second language"> maltese as a second language</a>, <a href="https://publications.waset.org/abstracts/search?q=visual%20aids" title=" visual aids"> visual aids</a>, <a href="https://publications.waset.org/abstracts/search?q=language-based%20activities" title=" language-based activities"> language-based activities</a> </p> <a href="https://publications.waset.org/abstracts/171163/employing-visual-culture-to-enhance-initial-adult-maltese-language-acquisition" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/171163.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">61</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2961</span> The Contemporary Visual Spectacle: Critical Visual Literacy</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Lai-Fen%20Yang">Lai-Fen Yang</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In this increasingly visual world, how can we best decipher and understand the many ways that our everyday lives are organized around looking practices and the many images we encounter each day? Indeed, how we interact with and interpret visual images is a basic component of human life. Today, however, we are living in one of the most artificial visual and image-saturated cultures in human history, which makes understanding the complex construction and multiple social functions of visual imagery more important than ever before. This paper addresses themes regarding our experience of a visually pervasive, mediated culture, here termed the visual spectacle. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=visual%20culture" title="visual culture">visual culture</a>, <a href="https://publications.waset.org/abstracts/search?q=contemporary" title=" contemporary"> contemporary</a>, <a href="https://publications.waset.org/abstracts/search?q=images" title=" images"> images</a>, <a href="https://publications.waset.org/abstracts/search?q=literacy" title=" literacy"> literacy</a> </p> <a href="https://publications.waset.org/abstracts/9045/the-contemporary-visual-spectacle-critical-visual-literacy" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/9045.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">513</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2960</span> Development of Visual Working Memory Precision: A Cross-Sectional Study of Simultaneously Delayed Responses Paradigm</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Yao%20Fu">Yao Fu</a>, <a href="https://publications.waset.org/abstracts/search?q=Xingli%20Zhang"> Xingli Zhang</a>, <a href="https://publications.waset.org/abstracts/search?q=Jiannong%20Shi"> Jiannong Shi</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Visual working memory (VWM) capacity is the ability to maintain and manipulate short-term information which is not currently available. It is well known for its significance to form the basis of numerous cognitive abilities and its limitation in holding information. VWM span, the most popular measurable indicator, is found to reach the adult level (3-4 items) around 12-13 years’ old, while less is known about the precision development of the VWM capacity. By using simultaneously delayed responses paradigm, the present study investigates the development of VWM precision among 6-18-year-old children and young adults, besides its possible relationships with fluid intelligence and span. Results showed that precision and span both increased with age, and precision reached the maximum in 16-17 age-range. Moreover, when remembering 3 simultaneously presented items, the probability of remembering target item correlated with fluid intelligence and the probability of wrap errors (misbinding target and non-target items) correlated with age. When remembering more items, children had worse performance than adults due to their wrap errors. Compared to span, VWM precision was effective predictor of intelligence even after controlling for age. These results suggest that unlike VWM span, precision developed in a slow, yet longer fashion. Moreover, decreasing probability of wrap errors might be the main reason for the development of precision. Last, precision correlated more closely with intelligence than span in childhood and adolescence, which might be caused by the probability of remembering target item. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=fluid%20intelligence" title="fluid intelligence">fluid intelligence</a>, <a href="https://publications.waset.org/abstracts/search?q=precision" title=" precision"> precision</a>, <a href="https://publications.waset.org/abstracts/search?q=visual%20working%20memory" title=" visual working memory"> visual working memory</a>, <a href="https://publications.waset.org/abstracts/search?q=wrap%20errors" title=" wrap errors"> wrap errors</a> </p> <a href="https://publications.waset.org/abstracts/72654/development-of-visual-working-memory-precision-a-cross-sectional-study-of-simultaneously-delayed-responses-paradigm" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/72654.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">276</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2959</span> Real-Time Episodic Memory Construction for Optimal Action Selection in Cognitive Robotics</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Deon%20de%20Jager">Deon de Jager</a>, <a href="https://publications.waset.org/abstracts/search?q=Yahya%20Zweiri"> Yahya Zweiri</a>, <a href="https://publications.waset.org/abstracts/search?q=Dimitrios%20Makris"> Dimitrios Makris</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The three most important components in the cognitive architecture for cognitive robotics is memory representation, memory recall, and action-selection performed by the executive. In this paper, action selection, performed by the executive, is defined as a memory quantification and optimization process. The methodology describes the real-time construction of episodic memory through semantic memory optimization. The optimization is performed by set-based particle swarm optimization, using an adaptive entropy memory quantification approach for fitness evaluation. The performance of the approach is experimentally evaluated by simulation, where a UAV is tasked with the collection and delivery of a medical package. The experiments show that the UAV dynamically uses the episodic memory to autonomously control its velocity, while successfully completing its mission. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=cognitive%20robotics" title="cognitive robotics">cognitive robotics</a>, <a href="https://publications.waset.org/abstracts/search?q=semantic%20memory" title=" semantic memory"> semantic memory</a>, <a href="https://publications.waset.org/abstracts/search?q=episodic%20memory" title=" episodic memory"> episodic memory</a>, <a href="https://publications.waset.org/abstracts/search?q=maximum%20entropy%20principle" title=" maximum entropy principle"> maximum entropy principle</a>, <a href="https://publications.waset.org/abstracts/search?q=particle%20swarm%20optimization" title=" particle swarm optimization"> particle swarm optimization</a> </p> <a href="https://publications.waset.org/abstracts/114710/real-time-episodic-memory-construction-for-optimal-action-selection-in-cognitive-robotics" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/114710.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">156</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2958</span> Correlation between Speech Emotion Recognition Deep Learning Models and Noises</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Leah%20Lee">Leah Lee</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This paper examines the correlation between deep learning models and emotions with noises to see whether or not noises mask emotions. The deep learning models used are plain convolutional neural networks (CNN), auto-encoder, long short-term memory (LSTM), and Visual Geometry Group-16 (VGG-16). Emotion datasets used are Ryerson Audio-Visual Database of Emotional Speech and Song (RAVDESS), Crowd-sourced Emotional Multimodal Actors Dataset (CREMA-D), Toronto Emotional Speech Set (TESS), and Surrey Audio-Visual Expressed Emotion (SAVEE). To make it four times bigger, audio set files, stretch, and pitch augmentations are utilized. From the augmented datasets, five different features are extracted for inputs of the models. There are eight different emotions to be classified. Noise variations are white noise, dog barking, and cough sounds. The variation in the signal-to-noise ratio (SNR) is 0, 20, and 40. In summation, per a deep learning model, nine different sets with noise and SNR variations and just augmented audio files without any noises will be used in the experiment. To compare the results of the deep learning models, the accuracy and receiver operating characteristic (ROC) are checked. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=auto-encoder" title="auto-encoder">auto-encoder</a>, <a href="https://publications.waset.org/abstracts/search?q=convolutional%20neural%20networks" title=" convolutional neural networks"> convolutional neural networks</a>, <a href="https://publications.waset.org/abstracts/search?q=long%20short-term%20memory" title=" long short-term memory"> long short-term memory</a>, <a href="https://publications.waset.org/abstracts/search?q=speech%20emotion%20recognition" title=" speech emotion recognition"> speech emotion recognition</a>, <a href="https://publications.waset.org/abstracts/search?q=visual%20geometry%20group-16" title=" visual geometry group-16"> visual geometry group-16</a> </p> <a href="https://publications.waset.org/abstracts/170547/correlation-between-speech-emotion-recognition-deep-learning-models-and-noises" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/170547.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">75</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2957</span> Applications of Visual Ethnography in Public Anthropology</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Subramaniam%20Panneerselvam">Subramaniam Panneerselvam</a>, <a href="https://publications.waset.org/abstracts/search?q=Gunanithi%20Perumal"> Gunanithi Perumal</a>, <a href="https://publications.waset.org/abstracts/search?q=KP%20Subin"> KP Subin</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The Visual Ethnography is used to document the culture of a community through a visual means. It could be either photography or audio-visual documentation. The visual ethnographic techniques are widely used in visual anthropology. The visual anthropologists use the camera to capture the cultural image of the studied community. There is a scope for subjectivity while the culture is documented by an external person. But the upcoming of the public anthropology provides an opportunity for the participants to document their own culture. There is a need to equip the participants with the skill of doing visual ethnography. The mobile phone technology provides visual documentation facility to everyone to capture the moments instantly. The visual ethnography facilitates the multiple-interpretation for the audiences. This study explores the effectiveness of visual ethnography among the tribal youth through public anthropology perspective. The case study was conducted to equip the tribal youth of Nilgiris in visual ethnography and the outcome of the experiment shared in this paper. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=visual%20ethnography" title="visual ethnography">visual ethnography</a>, <a href="https://publications.waset.org/abstracts/search?q=visual%20anthropology" title=" visual anthropology"> visual anthropology</a>, <a href="https://publications.waset.org/abstracts/search?q=public%20anthropology" title=" public anthropology"> public anthropology</a>, <a href="https://publications.waset.org/abstracts/search?q=multiple-interpretation" title=" multiple-interpretation"> multiple-interpretation</a>, <a href="https://publications.waset.org/abstracts/search?q=case%20study" title=" case study"> case study</a> </p> <a href="https://publications.waset.org/abstracts/127577/applications-of-visual-ethnography-in-public-anthropology" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/127577.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">183</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2956</span> Retrieval-Induced Forgetting Effects in Retrospective and Prospective Memory in Normal Aging: An Experimental Study</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Merve%20Akca">Merve Akca</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Retrieval-induced forgetting (RIF) refers to the phenomenon that selective retrieval of some information impairs memory for related, but not previously retrieved information. Despite age differences in retrieval-induced forgetting regarding retrospective memory being documented, this research aimed to highlight age differences in RIF of the prospective memory tasks for the first time. By using retrieval-practice paradigm, this study comparatively examined RIF effects in retrospective memory and event-based prospective memory in young and old adults. In this experimental study, a mixed factorial design with age group (Young, Old) as a between-subject variable, and memory type (Prospective, Retrospective) and item type (Practiced, Non-practiced) as within-subject variables was employed. Retrieval-induced forgetting was observed in the retrospective but not in the prospective memory task. Therefore, the results indicated that selective retrieval of past events led to suppression of other related past events in both age groups but not the suppression of memory for future intentions. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=prospective%20memory" title="prospective memory">prospective memory</a>, <a href="https://publications.waset.org/abstracts/search?q=retrieval-induced%20forgetting" title=" retrieval-induced forgetting"> retrieval-induced forgetting</a>, <a href="https://publications.waset.org/abstracts/search?q=retrieval%20inhibition" title=" retrieval inhibition"> retrieval inhibition</a>, <a href="https://publications.waset.org/abstracts/search?q=retrospective%20memory" title=" retrospective memory"> retrospective memory</a> </p> <a href="https://publications.waset.org/abstracts/57915/retrieval-induced-forgetting-effects-in-retrospective-and-prospective-memory-in-normal-aging-an-experimental-study" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/57915.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">316</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2955</span> Combined Use of FMRI and Voxel-Based Morphometry in Assessment of Memory Impairment in Alzheimer's Disease Patients</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=A.%20V.%20Sokolov">A. V. Sokolov</a>, <a href="https://publications.waset.org/abstracts/search?q=S.%20V.%20Vorobyev"> S. V. Vorobyev</a>, <a href="https://publications.waset.org/abstracts/search?q=A.%20Yu.%20Efimtcev"> A. Yu. Efimtcev</a>, <a href="https://publications.waset.org/abstracts/search?q=V.%20Yu.%20Lobzin"> V. Yu. Lobzin</a>, <a href="https://publications.waset.org/abstracts/search?q=I.%20A.%20Lupanov"> I. A. Lupanov</a>, <a href="https://publications.waset.org/abstracts/search?q=O.%20A.%20Cherdakov"> O. A. Cherdakov</a>, <a href="https://publications.waset.org/abstracts/search?q=V.%20A.%20Fokin"> V. A. Fokin</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Alzheimer’s disease (AD) is the most common form of dementia. Different brain regions are involved to the pathological process of AD. The purpose of this study was to evaluate brain activation by visual memory task in patients with Alzheimer's disease and determine correlation between memory impairment and atrophy of memory specific brain regions of frontal and medial temporal lobes. To investigate the organization of memory and localize cortical areas activated by visual memory task we used functional magnetic resonance imaging and to evaluate brain atrophy of patients with Alzheimer's disease we used voxel-based morphometry. FMRI was performed on 1.5 T MR-scanner Siemens Magnetom Symphony with BOLD (Blood Oxygenation Level Dependent) technique, based on distinctions of magnetic properties of hemoglobin. For test stimuli we used series of 12 not related images for "Baseline" and 12 images with 6 presented before for "Active". Stimuli were presented 3 times with reduction of repeated images to 4 and 2. Patients with Alzheimer's disease showed less activation in hippocampal formation (HF) region and parahippocampal gyrus then healthy persons of control group (p<0.05). The study also showed reduced activation in posterior cingulate cortex (p<0.001). 
Voxel-based morphometry showed significant atrophy of grey matter in Alzheimer’s disease patients, especially of both temporal lobes (fusiform and parahippocampal gyri) and frontal lobes (posterior cingulate and superior frontal gyri). The study showed a correlation between memory impairment and atrophy of memory-specific brain regions of the frontal and medial temporal lobes. Thus, reduced activation in the hippocampal formation, parahippocampal gyri, and posterior cingulate gyrus in patients with Alzheimer's disease correlates with significant atrophy of these regions, detected by voxel-based morphometry, and with deterioration of specific cognitive functions. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=Alzheimer%E2%80%99s%20disease" title="Alzheimer’s disease">Alzheimer’s disease</a>, <a href="https://publications.waset.org/abstracts/search?q=functional%20MRI" title=" functional MRI"> functional MRI</a>, <a href="https://publications.waset.org/abstracts/search?q=voxel-based%20morphometry" title=" voxel-based morphometry"> voxel-based morphometry</a> </p> <a href="https://publications.waset.org/abstracts/18475/combined-use-of-fmri-and-voxel-based-morphometry-in-assessment-of-memory-impairment-in-alzheimers-disease-patients" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/18475.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">320</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2954</span> The Analogy of Visual Arts and Visual Literacy</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Lindelwa%20Pepu">Lindelwa Pepu</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Visual Arts and Visual Literacy are defined with distinction from one another. Visual Arts are known for art forms such as drawing, painting, and photography, just to name a few. At the same time, Visual Literacy is known for learning through images. The Visual Literacy phenomenon may be attributed to the use of images, which was first established for creating memories and enjoyment. As time evolved, images became a central and essential means of making contact between people. Gradually, images became a means of interpreting and understanding words through visuals, that is, Visual Arts. The purpose of this study is to present the analogy of the two terms Visual Arts and Visual Literacy, which are defined and compared through the work of early practicing visual artists as well as relevant researchers to reveal how they interrelate with one another. This is a qualitative study that uses an interpretive approach as it seeks to understand and explain the interest of the study. The results reveal correspondence between the two terms in the writings of various authors of early and recent years. This study underlines the significance of the two terms and the role they play in relation to other fields of study. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=visual%20arts" title="visual arts">visual arts</a>, <a href="https://publications.waset.org/abstracts/search?q=visual%20literacy" title=" visual literacy"> visual literacy</a>, <a href="https://publications.waset.org/abstracts/search?q=pictures" title=" pictures"> pictures</a>, <a href="https://publications.waset.org/abstracts/search?q=images" title=" images"> images</a> </p> <a href="https://publications.waset.org/abstracts/165940/the-analogy-of-visual-arts-and-visual-literacy" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/165940.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">166</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2953</span> The Characterisation of TLC NAND Flash Memory, Leading to a Definable Endurance/Retention Trade-Off</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Sorcha%20Bennett">Sorcha Bennett</a>, <a href="https://publications.waset.org/abstracts/search?q=Joe%20Sullivan"> Joe Sullivan</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Triple-Level Cell (TLC) NAND Flash memory at, and below, 20nm (nanometer) is still largely unexplored by researchers, and with the ever more commonplace existence of Flash in consumer and enterprise applications there is a need for such gaps in knowledge to be filled. At the time of writing, there was little published data or literature on TLC, and more specifically reliability testing, with a further emphasis on both endurance and retention. This paper will give an introduction to NAND Flash memory, followed by an overview of the relevant current research on the reliability of Flash memory, along with the planned future work which will provide results to help characterise the reliability of TLC memory. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=endurance" title="endurance">endurance</a>, <a href="https://publications.waset.org/abstracts/search?q=patterns" title=" patterns"> patterns</a>, <a href="https://publications.waset.org/abstracts/search?q=raw%20flash" title=" raw flash"> raw flash</a>, <a href="https://publications.waset.org/abstracts/search?q=reliability" title=" reliability"> reliability</a>, <a href="https://publications.waset.org/abstracts/search?q=retention" title=" retention"> retention</a>, <a href="https://publications.waset.org/abstracts/search?q=TLC%20NAND%20flash%20memory" title=" TLC NAND flash memory"> TLC NAND flash memory</a>, <a href="https://publications.waset.org/abstracts/search?q=trade-off" title=" trade-off"> trade-off</a> </p> <a href="https://publications.waset.org/abstracts/45350/the-characterisation-of-tlc-nand-flash-memory-leading-to-a-definable-enduranceretention-trade-off" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/45350.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">359</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2952</span> An Analysis of the Temporal Aspects of Visual Attention Processing Using Rapid Series Visual Processing (RSVP) Data</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Shreya%20Borthakur">Shreya Borthakur</a>, <a href="https://publications.waset.org/abstracts/search?q=Aastha%20Vartak"> Aastha Vartak</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This Electroencephalogram (EEG) project on Rapid Visual Serial Processing (RSVP) paradigm explores the temporal dynamics of visual attention processing in response to rapidly presented visual stimuli. The study builds upon previous research that used real-world images in RSVP tasks to understand the emergence of object representations in the human brain. The objectives of the research include investigating the differences in accuracy and reaction times between 5 Hz and 20 Hz presentation rates, as well as examining the prominent brain waves, particularly alpha and beta waves, associated with the attention task. The pre-processing and data analysis involves filtering EEG data, creating epochs for target stimuli, and conducting statistical tests using MATLAB, EEGLAB, Chronux toolboxes, and R. The results support the hypotheses, revealing higher accuracy at a slower presentation rate, faster reaction times for less complex targets, and the involvement of alpha and beta waves in attention and cognitive processing. This research sheds light on how short-term memory and cognitive control affect visual processing and could have practical implications in fields like education. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=RSVP" title="RSVP">RSVP</a>, <a href="https://publications.waset.org/abstracts/search?q=attention" title=" attention"> attention</a>, <a href="https://publications.waset.org/abstracts/search?q=visual%20processing" title=" visual processing"> visual processing</a>, <a href="https://publications.waset.org/abstracts/search?q=attentional%20blink" title=" attentional blink"> attentional blink</a>, <a href="https://publications.waset.org/abstracts/search?q=EEG" title=" EEG"> EEG</a> </p> <a href="https://publications.waset.org/abstracts/169655/an-analysis-of-the-temporal-aspects-of-visual-attention-processing-using-rapid-series-visual-processing-rsvp-data" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/169655.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">69</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2951</span> The Effect of Visual Access to Greenspace and Urban Space on a False Memory Learning Task</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Bryony%20Pound">Bryony Pound</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This study investigated how views of green or urban space affect learning performance. It provides evidence of the value of visual access to greenspace in work and learning environments, and builds on the extensive research into the cognitive and learning-related benefits of access to green and natural spaces, particularly in learning environments. It demonstrates that benefits of visual access to natural spaces whilst learning can produce statistically significant faster responses than those facing urban views after only 5 minutes. The primary hypothesis of this research was that a greenspace view would improve short-term learning. Participants were randomly assigned to either a view of parkland or of urban buildings from the same room. They completed a psychological test of two stages. The first stage consisted of a presentation of words from eight different categories (four manmade and four natural). Following this a 2.5 minute break was given; participants were not prompted to look out of the window, but all were observed doing so. The second stage of the test involved a word recognition/false memory test of three types. Type 1 was presented words from each category; Type 2 was non-presented words from those same categories; and Type 3 was non-presented words from different categories. Participants were asked to respond with whether they thought they had seen the words before or not. Accuracy of responses and reaction times were recorded. The key finding was that reaction times for Type 2 words (highest difficulty) were significantly different between urban and green view conditions. Those with an urban view had slower reaction times for these words, so a view of greenspace resulted in better information retrieval for word and false memory recognition. Importantly, this difference was found after only 5 minutes of exposure to either view, during winter, and with a sample size of only 26. Greenspace views improve performance in a learning task. 
This provides a case for better visual access to greenspace in work and learning environments. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=benefits" title="benefits">benefits</a>, <a href="https://publications.waset.org/abstracts/search?q=greenspace" title=" greenspace"> greenspace</a>, <a href="https://publications.waset.org/abstracts/search?q=learning" title=" learning"> learning</a>, <a href="https://publications.waset.org/abstracts/search?q=restoration" title=" restoration"> restoration</a> </p> <a href="https://publications.waset.org/abstracts/85294/the-effect-of-visual-access-to-greenspace-and-urban-space-on-a-false-memory-learning-task" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/85294.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">127</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2950</span> Audio-Visual Recognition Based on Effective Model and Distillation</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Heng%20Yang">Heng Yang</a>, <a href="https://publications.waset.org/abstracts/search?q=Tao%20Luo"> Tao Luo</a>, <a href="https://publications.waset.org/abstracts/search?q=Yakun%20Zhang"> Yakun Zhang</a>, <a href="https://publications.waset.org/abstracts/search?q=Kai%20Wang"> Kai Wang</a>, <a href="https://publications.waset.org/abstracts/search?q=Wei%20Qin"> Wei Qin</a>, <a href="https://publications.waset.org/abstracts/search?q=Liang%20Xie"> Liang Xie</a>, <a href="https://publications.waset.org/abstracts/search?q=Ye%20Yan"> Ye Yan</a>, <a href="https://publications.waset.org/abstracts/search?q=Erwei%20Yin"> Erwei Yin</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In recent years, audio-visual recognition has shown great potential in strong-noise environments. Existing audio-visual recognition approaches have explored ResNet-based models and feature fusion. However, on the one hand, ResNet occupies a large amount of memory resources, restricting its application in engineering. On the other hand, feature merging also introduces interference in high-noise environments. To solve these problems, we proposed an effective framework with bidirectional distillation. First, in consideration of its good feature-extraction performance, we chose the lightweight model EfficientNet as our spatial feature extractor. Second, self-distillation was applied to learn more information from the raw data. Finally, we proposed bidirectional distillation for decision-level fusion. In more detail, our experimental results are based on a multi-modal dataset from 24 volunteers. Eventually, the lipreading accuracy of our framework was increased by 2.3% compared with existing systems, and our framework improved audio-visual fusion in a high-noise environment compared with an audio-only recognition system. 
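<p class="card-text">As a rough illustration of the decision-level bidirectional distillation described above, the following Python/PyTorch sketch combines a supervised loss on each branch with a symmetric, temperature-softened KL term in which the audio and visual branches distil knowledge from each other. The temperature, weighting, and toy tensors are assumptions for illustration only, not the authors' implementation.</p> <pre><code>
# Sketch of decision-level bidirectional distillation between an audio branch
# and a visual branch; parameters and weighting are illustrative assumptions.
import torch
import torch.nn.functional as F

def bidirectional_distillation_loss(audio_logits, visual_logits, labels,
                                    temperature=2.0, alpha=0.5):
    """Cross-entropy on each branch plus symmetric KL between softened outputs."""
    t = temperature
    ce = F.cross_entropy(audio_logits, labels) + F.cross_entropy(visual_logits, labels)
    log_p_audio = F.log_softmax(audio_logits / t, dim=1)
    log_p_visual = F.log_softmax(visual_logits / t, dim=1)
    # Each branch learns from the other branch's softened predictions,
    # which are detached so that they act as the "teacher" signal.
    kl_av = F.kl_div(log_p_audio, F.softmax(visual_logits.detach() / t, dim=1),
                     reduction="batchmean") * t * t
    kl_va = F.kl_div(log_p_visual, F.softmax(audio_logits.detach() / t, dim=1),
                     reduction="batchmean") * t * t
    return ce + alpha * (kl_av + kl_va)

# Toy usage: random logits for a batch of 8 samples and 10 classes.
audio = torch.randn(8, 10)
visual = torch.randn(8, 10)
labels = torch.randint(0, 10, (8,))
loss = bidirectional_distillation_loss(audio, visual, labels)
</code></pre>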
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=lipreading" title="lipreading">lipreading</a>, <a href="https://publications.waset.org/abstracts/search?q=audio-visual" title=" audio-visual"> audio-visual</a>, <a href="https://publications.waset.org/abstracts/search?q=Efficientnet" title=" Efficientnet"> Efficientnet</a>, <a href="https://publications.waset.org/abstracts/search?q=distillation" title=" distillation"> distillation</a> </p> <a href="https://publications.waset.org/abstracts/146625/audio-visual-recognition-based-on-effective-model-and-distillation" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/146625.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">134</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2949</span> Design and Implementation of a Memory Safety Isolation Method Based on the Xen Cloud Environment</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Dengpan%20Wu">Dengpan Wu</a>, <a href="https://publications.waset.org/abstracts/search?q=Dan%20Liu"> Dan Liu</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In view of the present cloud security problem has increasingly become one of the major obstacles hindering the development of the cloud computing, put forward a kind of memory based on Xen cloud environment security isolation technology implementation. And based on Xen virtual machine monitor system, analysis of the model of memory virtualization is implemented, using Xen memory virtualization system mechanism of super calls and grant table, based on the virtual machine manager internal implementation of access control module (ACM) to design the security isolation system memory. Experiments show that, the system can effectively isolate different customer domain OS between illegal access to memory data. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=cloud%20security" title="cloud security">cloud security</a>, <a href="https://publications.waset.org/abstracts/search?q=memory%20isolation" title=" memory isolation"> memory isolation</a>, <a href="https://publications.waset.org/abstracts/search?q=xen" title=" xen"> xen</a>, <a href="https://publications.waset.org/abstracts/search?q=virtual%20machine" title=" virtual machine"> virtual machine</a> </p> <a href="https://publications.waset.org/abstracts/22897/design-and-implementation-of-a-memory-safety-isolation-method-based-on-the-xen-cloud-environment" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/22897.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">409</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2948</span> The Differences and Similarities in Neurocognitive Deficits in Mild Traumatic Brain Injury and Depression</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Boris%20Ershov">Boris Ershov</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Depression is the most common mood disorder experienced by patients who have sustained a traumatic brain injury (TBI) and is associated with poorer cognitive functional outcomes. However, in some cases, similar cognitive impairments can also be observed in depression. There is not enough information about the features of the cognitive deficit in patients with TBI in relation to patients with depression. TBI patients without depressive symptoms (TBInD, n25), TBI patients with depressive symptoms (TBID, n31), and 28 patients with bipolar II disorder (BP) were included in the study. There were no significant differences in participants in respect to age, handedness and educational level. The patients clinical status was determined by using Montgomery–Asberg Depression Rating Scale (MADRS). All participants completed a cognitive battery (The Brief Assessment of Cognition in Affective Disorders (BAC-A)). Additionally, the Rey–Osterrieth Complex Figure (ROCF) was used to assess visuospatial construction abilities and visual memory, as well as planning and organizational skills. Compared to BP, TBInD and TBID showed a significant impairments in visuomotor abilities, verbal and visual memory. There were no significant differences between BP and TBID groups in working memory, speed of information processing, problem solving. Interference effect (cognitive inhibition) was significantly greater in TBInD and TBID compared to BP. Memory bias towards mood-related information in BP and TBID was greater in comparison with TBInD. These results suggest that depressive symptoms are associated with impairments some executive functions in combination at decrease of speed of information processing. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=bipolar%20II%20disorder" title="bipolar II disorder">bipolar II disorder</a>, <a href="https://publications.waset.org/abstracts/search?q=depression" title=" depression"> depression</a>, <a href="https://publications.waset.org/abstracts/search?q=neurocognitive%20deficits" title=" neurocognitive deficits"> neurocognitive deficits</a>, <a href="https://publications.waset.org/abstracts/search?q=traumatic%20brain%20injury" title=" traumatic brain injury"> traumatic brain injury</a> </p> <a href="https://publications.waset.org/abstracts/59107/the-differences-and-similarities-in-neurocognitive-deficits-in-mild-traumatic-brain-injury-and-depression" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/59107.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">347</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2947</span> The Relationship between Representational Conflicts, Generalization, and Encoding Requirements in an Instance Memory Network</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Mathew%20Wakefield">Mathew Wakefield</a>, <a href="https://publications.waset.org/abstracts/search?q=Matthew%20Mitchell"> Matthew Mitchell</a>, <a href="https://publications.waset.org/abstracts/search?q=Lisa%20Wise"> Lisa Wise</a>, <a href="https://publications.waset.org/abstracts/search?q=Christopher%20McCarthy"> Christopher McCarthy</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The properties of memory representations in artificial neural networks have cognitive implications. Distributed representations that encode instances as a pattern of activity across layers of nodes afford memory compression and enforce the selection of a single point in instance space. These encoding schemes also appear to distort the representational space, as well as trading off the ability to validate that input information is within the bounds of past experience. In contrast, a localist representation which encodes some meaningful information into individual nodes in a network layer affords less memory compression while retaining the integrity of the representational space. This allows the validity of an input to be determined. The validity (or familiarity) of input along with the capacity of localist representation for multiple instance selections affords a memory sampling approach that dynamically balances the bias-variance trade-off. When the input is familiar, bias may be high by referring only to the most similar instances in memory. When the input is less familiar, variance can be increased by referring to more instances that capture a broader range of features. Using this approach in a localist instance memory network, an experiment demonstrates a relationship between representational conflict, generalization performance, and memorization demand. Relatively small sampling ranges produce the best performance on a classic machine learning dataset of visual objects. Combining memory validity with conflict detection produces a reliable confidence judgement that can separate responses with high and low error rates. 
Confidence can also be used to signal the need for supervisory input. Using this judgement, the need for supervised learning as well as memory encoding can be substantially reduced with only a trivial detriment to classification performance. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=artificial%20neural%20networks" title="artificial neural networks">artificial neural networks</a>, <a href="https://publications.waset.org/abstracts/search?q=representation" title=" representation"> representation</a>, <a href="https://publications.waset.org/abstracts/search?q=memory" title=" memory"> memory</a>, <a href="https://publications.waset.org/abstracts/search?q=conflict%20monitoring" title=" conflict monitoring"> conflict monitoring</a>, <a href="https://publications.waset.org/abstracts/search?q=confidence" title=" confidence"> confidence</a> </p> <a href="https://publications.waset.org/abstracts/141391/the-relationship-between-representational-conflicts-generalization-and-encoding-requirements-in-an-instance-memory-network" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/141391.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">127</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2946</span> Trimma: Trimming Metadata Storage and Latency for Hybrid Memory Systems</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Yiwei%20Li">Yiwei Li</a>, <a href="https://publications.waset.org/abstracts/search?q=Boyu%20Tian"> Boyu Tian</a>, <a href="https://publications.waset.org/abstracts/search?q=Mingyu%20Gao"> Mingyu Gao</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Hybrid main memory systems combine both performance and capacity advantages from heterogeneous memory technologies. With larger capacities, higher associativities, and finer granularities, hybrid memory systems currently exhibit significant metadata storage and lookup overheads for flexibly remapping data blocks between the two memory tiers. To alleviate the inefficiencies of existing designs, we propose Trimma, the combination of a multi-level metadata structure and an efficient metadata cache design. Trimma uses a multilevel metadata table to only track truly necessary address remap entries. The saved memory space is effectively utilized as extra DRAM cache capacity to improve performance. Trimma also uses separate formats to store the entries with non-identity and identity mappings. This improves the overall remap cache hit rate, further boosting the performance. Trimma is transparent to software and compatible with various types of hybrid memory systems. When evaluated on a representative DDR4 + NVM hybrid memory system, Trimma achieves up to 2.4× and on average 58.1% speedup benefits, compared with a state-of-the-art design that only leverages the unallocated fast memory space for caching. Trimma addresses metadata management overheads and targets future scalable large-scale hybrid memory architectures. 
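<p class="card-text">The remap-metadata idea summarised above, namely tracking only the entries that actually remap data while letting identity mappings fall through implicitly, can be sketched with a small Python model. The block size and the dictionary-based structure below are illustrative assumptions and do not reflect Trimma's actual multi-level table or metadata cache design.</p> <pre><code>
# Sketch of a remap table that stores only non-identity block mappings;
# identity mappings need no metadata entry. Parameters are assumptions.
BLOCK_SIZE = 4096  # assumed remapping granularity in bytes

class RemapTable:
    def __init__(self):
        self._remap = {}  # maps a source block to its destination block

    def remap(self, src_block, dst_block):
        if src_block == dst_block:
            self._remap.pop(src_block, None)  # identity needs no entry
        else:
            self._remap[src_block] = dst_block

    def translate(self, addr):
        block, offset = divmod(addr, BLOCK_SIZE)
        return self._remap.get(block, block) * BLOCK_SIZE + offset

    def metadata_entries(self):
        return len(self._remap)

# Toy usage: one hot block migrated to the fast tier, all others identity.
table = RemapTable()
table.remap(7, 1)  # data of block 7 now lives at block 1
assert table.translate(7 * BLOCK_SIZE + 16) == 1 * BLOCK_SIZE + 16
assert table.translate(3 * BLOCK_SIZE + 16) == 3 * BLOCK_SIZE + 16
assert table.metadata_entries() == 1
</code></pre>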
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=memory%20system" title="memory system">memory system</a>, <a href="https://publications.waset.org/abstracts/search?q=data%20cache" title=" data cache"> data cache</a>, <a href="https://publications.waset.org/abstracts/search?q=hybrid%20memory" title=" hybrid memory"> hybrid memory</a>, <a href="https://publications.waset.org/abstracts/search?q=non-volatile%20memory" title=" non-volatile memory"> non-volatile memory</a> </p> <a href="https://publications.waset.org/abstracts/183183/trimma-trimming-metadata-storage-and-latency-for-hybrid-memory-systems" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/183183.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">78</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2945</span> Short-Term and Working Memory Differences Across Age and Gender in Children</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Farzaneh%20Badinloo">Farzaneh Badinloo</a>, <a href="https://publications.waset.org/abstracts/search?q=Niloufar%20Jalali-Moghadam"> Niloufar Jalali-Moghadam</a>, <a href="https://publications.waset.org/abstracts/search?q=Reza%20Kormi-Nouri"> Reza Kormi-Nouri</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The aim of this study was to explore the short-term and working memory performances across age and gender in school aged children. Most of the studies have been interested in looking into memory changes in adult subjects. This study was instead focused on exploring both short-term and working memories of children over time. Totally 410 school child participants belonging to four age groups (approximately 8, 10, 12 and 14 years old) among which were 201 girls and 208 boys were employed in the study. digits forward and backward tests of the Wechsler children intelligence scale-revised were conducted respectively as short-term and working memory measures. According to results, there was found a general increment in both short-term and working memory scores across age (p ˂ .05) by which whereas short-term memory performance was shown to increase up to 12 years old, working memory scores showed no significant increase after 10 years old of age. No difference was observed in terms of gender (p ˃ .05). In conclusion, this study suggested that both short-term and working memories improve across age in children where 12 and 10 years of old are likely the crucial age periods in terms of short-term and working memories development. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=age" title="age">age</a>, <a href="https://publications.waset.org/abstracts/search?q=gender" title=" gender"> gender</a>, <a href="https://publications.waset.org/abstracts/search?q=short-term%20memory" title=" short-term memory"> short-term memory</a>, <a href="https://publications.waset.org/abstracts/search?q=working%20memory" title=" working memory"> working memory</a> </p> <a href="https://publications.waset.org/abstracts/30471/short-term-and-working-memory-differences-across-age-and-gender-in-children" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/30471.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">478</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2944</span> Visual Identity Components of Tourist Destination</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Petra%20Barisic">Petra Barisic</a>, <a href="https://publications.waset.org/abstracts/search?q=Zrinka%20Blazevic"> Zrinka Blazevic</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In the world of modern communications, visual identity has predominant influence on the overall success of tourist destinations, but despite of these, the problem of designing thriving tourist destination visual identity and their components are hardly addressed. This study highlights the importance of building and managing the visual identity of tourist destination, and based on the empirical study of well-known Mediterranean destination of Croatia analyses three main components of tourist destination visual identity; name, slogan, and logo. Moreover, the paper shows how respondents perceive each component of Croatia’s visual identity. According to study, logo is the most important, followed by the name and slogan. Research also reveals that Croatian economy lags behind developed countries in understanding the importance of visual identity, and its influence on marketing goal achievements. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=components%20of%20visual%20identity" title="components of visual identity">components of visual identity</a>, <a href="https://publications.waset.org/abstracts/search?q=Croatia" title=" Croatia"> Croatia</a>, <a href="https://publications.waset.org/abstracts/search?q=tourist%20destination" title=" tourist destination"> tourist destination</a>, <a href="https://publications.waset.org/abstracts/search?q=visual%20identity" title=" visual identity "> visual identity </a> </p> <a href="https://publications.waset.org/abstracts/6602/visual-identity-components-of-tourist-destination" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/6602.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">1050</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2943</span> Neuropsychological Deficits in Drug-Resistant Epilepsy</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Timea%20Harmath-T%C3%A1nczos">Timea Harmath-Tánczos</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Drug-resistant epilepsy (DRE) is defined as the persistence of seizures despite at least two syndrome-adapted antiseizure drugs (ASD) used at efficacious daily doses. About a third of patients with epilepsy suffer from drug resistance. Cognitive assessment has a crucial role in the diagnosis and clinical management of epilepsy. Previous studies have addressed the clinical targets and indications for measuring neuropsychological functions; best to our knowledge, no studies have examined it in a Hungarian therapy-resistant population. To fill this gap, we investigated the Hungarian diagnostic protocol between 18 and 65 years of age. This study aimed to describe and analyze neuropsychological functions in patients with drug-resistant epilepsy and identify factors associated with neuropsychology deficits. We perform a prospective case-control study comparing neuropsychological performances in 50 adult patients and 50 healthy individuals between March 2023 and July 2023. Neuropsychological functions were examined in both patients and controls using a full set of specific tests (general performance level, motor functions, attention, executive facts., verbal and visual memory, language, and visual-spatial functions). Potential risk factors for neuropsychological deficit were assessed in the patient group using a multivariate analysis. The two groups did not differ in age, sex, dominant hand and level of education. Compared with the control group, patients with drug-resistant epilepsy showed worse performance on motor functions and visuospatial memory, sustained attention, inhibition and verbal memory. Neuropsychological deficits could therefore be systematically detected in patients with drug-resistant epilepsy in order to provide neuropsychological therapy and improve quality of life. The analysis of the classical and complex indices of the special neuropsychological tasks presented in the presentation can help in the investigation of normal and disrupted memory and executive functions in the DRE. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=drug-resistant%20epilepsy" title="drug-resistant epilepsy">drug-resistant epilepsy</a>, <a href="https://publications.waset.org/abstracts/search?q=Hungarian%20diagnostic%20protocol" title=" Hungarian diagnostic protocol"> Hungarian diagnostic protocol</a>, <a href="https://publications.waset.org/abstracts/search?q=memory" title=" memory"> memory</a>, <a href="https://publications.waset.org/abstracts/search?q=executive%20functions" title=" executive functions"> executive functions</a>, <a href="https://publications.waset.org/abstracts/search?q=cognitive%20neuropsychology" title=" cognitive neuropsychology"> cognitive neuropsychology</a> </p> <a href="https://publications.waset.org/abstracts/168392/neuropsychological-deficits-in-drug-resistant-epilepsy" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/168392.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">76</span> </span> </div> </div> <ul class="pagination"> <li class="page-item disabled"><span class="page-link">‹</span></li> <li class="page-item active"><span class="page-link">1</span></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=visual%20memory&page=2">2</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=visual%20memory&page=3">3</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=visual%20memory&page=4">4</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=visual%20memory&page=5">5</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=visual%20memory&page=6">6</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=visual%20memory&page=7">7</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=visual%20memory&page=8">8</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=visual%20memory&page=9">9</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=visual%20memory&page=10">10</a></li> <li class="page-item disabled"><span class="page-link">...</span></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=visual%20memory&page=99">99</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=visual%20memory&page=100">100</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=visual%20memory&page=2" rel="next">›</a></li> </ul> </div> </main> <footer> <div id="infolinks" class="pt-3 pb-2"> <div class="container"> <div style="background-color:#f5f5f5;" class="p-3"> <div class="row"> <div class="col-md-2"> <ul class="list-unstyled"> About <li><a href="https://waset.org/page/support">About Us</a></li> <li><a href="https://waset.org/page/support#legal-information">Legal</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/WASET-16th-foundational-anniversary.pdf">WASET celebrates its 16th foundational 
anniversary</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Account <li><a href="https://waset.org/profile">My Account</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Explore <li><a href="https://waset.org/disciplines">Disciplines</a></li> <li><a href="https://waset.org/conferences">Conferences</a></li> <li><a href="https://waset.org/conference-programs">Conference Program</a></li> <li><a href="https://waset.org/committees">Committees</a></li> <li><a href="https://publications.waset.org">Publications</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Research <li><a href="https://publications.waset.org/abstracts">Abstracts</a></li> <li><a href="https://publications.waset.org">Periodicals</a></li> <li><a href="https://publications.waset.org/archive">Archive</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Open Science <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Science-Philosophy.pdf">Open Science Philosophy</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Science-Award.pdf">Open Science Award</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Society-Open-Science-and-Open-Innovation.pdf">Open Innovation</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Postdoctoral-Fellowship-Award.pdf">Postdoctoral Fellowship Award</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Scholarly-Research-Review.pdf">Scholarly Research Review</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Support <li><a href="https://waset.org/page/support">Support</a></li> <li><a href="https://waset.org/profile/messages/create">Contact Us</a></li> <li><a href="https://waset.org/profile/messages/create">Report Abuse</a></li> </ul> </div> </div> </div> </div> </div> <div class="container text-center"> <hr style="margin-top:0;margin-bottom:.3rem;"> <a href="https://creativecommons.org/licenses/by/4.0/" target="_blank" class="text-muted small">Creative Commons Attribution 4.0 International License</a> <div id="copy" class="mt-2">© 2024 World Academy of Science, Engineering and Technology</div> </div> </footer> <a href="javascript:" id="return-to-top"><i class="fas fa-arrow-up"></i></a> <div class="modal" id="modal-template"> <div class="modal-dialog"> <div class="modal-content"> <div class="row m-0 mt-1"> <div class="col-md-12"> <button type="button" class="close" data-dismiss="modal" aria-label="Close"><span aria-hidden="true">×</span></button> </div> </div> <div class="modal-body"></div> </div> </div> </div> <script src="https://cdn.waset.org/static/plugins/jquery-3.3.1.min.js"></script> <script src="https://cdn.waset.org/static/plugins/bootstrap-4.2.1/js/bootstrap.bundle.min.js"></script> <script src="https://cdn.waset.org/static/js/site.js?v=150220211556"></script> <script> jQuery(document).ready(function() { /*jQuery.get("https://publications.waset.org/xhr/user-menu", function (response) { jQuery('#mainNavMenu').append(response); });*/ jQuery.get({ url: "https://publications.waset.org/xhr/user-menu", cache: false }).then(function(response){ jQuery('#mainNavMenu').append(response); }); }); </script> </body> </html>