<!DOCTYPE html> <html lang="en" dir="ltr"> <head> <title>Search results for: visual information processing</title> <meta name="description" content="Search results for: visual information processing"> <meta name="keywords" content="visual information processing"> <meta name="viewport" content="width=device-width, initial-scale=1, minimum-scale=1, maximum-scale=1, user-scalable=no"> <meta charset="utf-8"> <link href="https://cdn.waset.org/favicon.ico" type="image/x-icon" rel="shortcut icon"> <link href="https://cdn.waset.org/static/plugins/bootstrap-4.2.1/css/bootstrap.min.css" rel="stylesheet"> <link href="https://cdn.waset.org/static/plugins/fontawesome/css/all.min.css" rel="stylesheet"> <link href="https://cdn.waset.org/static/css/site.css?v=150220211555" rel="stylesheet"> </head> <body> <header> <div class="container"> <nav class="navbar navbar-expand-lg navbar-light"> <a class="navbar-brand" href="https://waset.org"> <img 
src="https://cdn.waset.org/static/images/wasetc.png" alt="Open Science Research Excellence" title="Open Science Research Excellence" /> </a> <button class="d-block d-lg-none navbar-toggler ml-auto" type="button" data-toggle="collapse" data-target="#navbarMenu" aria-controls="navbarMenu" aria-expanded="false" aria-label="Toggle navigation"> <span class="navbar-toggler-icon"></span> </button> <div class="w-100"> <div class="d-none d-lg-flex flex-row-reverse"> <form method="get" action="https://waset.org/search" class="form-inline my-2 my-lg-0"> <input class="form-control mr-sm-2" type="search" placeholder="Search Conferences" value="visual information processing" name="q" aria-label="Search"> <button class="btn btn-light my-2 my-sm-0" type="submit"><i class="fas fa-search"></i></button> </form> </div> <div class="collapse navbar-collapse mt-1" id="navbarMenu"> <ul class="navbar-nav ml-auto align-items-center" id="mainNavMenu"> <li class="nav-item"> <a class="nav-link" href="https://waset.org/conferences" title="Conferences in 2024/2025/2026">Conferences</a> </li> <li class="nav-item"> <a class="nav-link" href="https://waset.org/disciplines" title="Disciplines">Disciplines</a> </li> <li class="nav-item"> <a class="nav-link" href="https://waset.org/committees" rel="nofollow">Committees</a> </li> <li class="nav-item dropdown"> <a class="nav-link dropdown-toggle" href="#" id="navbarDropdownPublications" role="button" data-toggle="dropdown" aria-haspopup="true" aria-expanded="false"> Publications </a> <div class="dropdown-menu" aria-labelledby="navbarDropdownPublications"> <a class="dropdown-item" href="https://publications.waset.org/abstracts">Abstracts</a> <a class="dropdown-item" href="https://publications.waset.org">Periodicals</a> <a class="dropdown-item" href="https://publications.waset.org/archive">Archive</a> </div> </li> <li class="nav-item"> <a class="nav-link" href="https://waset.org/page/support" title="Support">Support</a> </li> </ul> </div> </div> </nav> 
</div> </header> <main> <div class="container mt-4"> <div class="row"> <div class="col-md-9 mx-auto"> <form method="get" action="https://publications.waset.org/abstracts/search"> <div id="custom-search-input"> <div class="input-group"> <i class="fas fa-search"></i> <input type="text" class="search-query" name="q" placeholder="Author, Title, Abstract, Keywords" value="visual information processing"> <input type="submit" class="btn_search" value="Search"> </div> </div> </form> </div> </div> <div class="row mt-3"> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Commenced</strong> in January 2007</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Frequency:</strong> Monthly</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Edition:</strong> International</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Paper Count:</strong> 15016</div> </div> </div> </div> <h1 class="mt-3 mb-3 text-center" style="font-size:1.6rem;">Search results for: visual information processing</h1> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">15016</span> An Analysis of the Temporal Aspects of Visual Attention Processing Using Rapid Series Visual Processing (RSVP) Data</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Shreya%20Borthakur">Shreya Borthakur</a>, <a href="https://publications.waset.org/abstracts/search?q=Aastha%20Vartak"> Aastha Vartak</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This electroencephalogram (EEG) study of the Rapid Serial Visual Presentation (RSVP) paradigm explores the temporal dynamics of visual attention processing in response to rapidly presented visual stimuli. 
The study builds upon previous research that used real-world images in RSVP tasks to understand the emergence of object representations in the human brain. The objectives of the research include investigating the differences in accuracy and reaction times between 5 Hz and 20 Hz presentation rates, as well as examining the prominent brain waves, particularly alpha and beta waves, associated with the attention task. Pre-processing and data analysis involve filtering the EEG data, creating epochs around target stimuli, and conducting statistical tests in MATLAB (with the EEGLAB and Chronux toolboxes) and R. The results support the hypotheses, revealing higher accuracy at a slower presentation rate, faster reaction times for less complex targets, and the involvement of alpha and beta waves in attention and cognitive processing. This research sheds light on how short-term memory and cognitive control affect visual processing and could have practical implications in fields like education. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=RSVP" title="RSVP">RSVP</a>, <a href="https://publications.waset.org/abstracts/search?q=attention" title=" attention"> attention</a>, <a href="https://publications.waset.org/abstracts/search?q=visual%20processing" title=" visual processing"> visual processing</a>, <a href="https://publications.waset.org/abstracts/search?q=attentional%20blink" title=" attentional blink"> attentional blink</a>, <a href="https://publications.waset.org/abstracts/search?q=EEG" title=" EEG"> EEG</a> </p> <a href="https://publications.waset.org/abstracts/169655/an-analysis-of-the-temporal-aspects-of-visual-attention-processing-using-rapid-series-visual-processing-rsvp-data" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/169655.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span 
class="badge badge-light">69</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">15015</span> Information Processing and Visual Attention: An Eye Tracking Study on Nutrition Labels</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Rosa%20Hendijani">Rosa Hendijani</a>, <a href="https://publications.waset.org/abstracts/search?q=Amir%20Ghadimi%20Herfeh"> Amir Ghadimi Herfeh</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Nutrition labels are a diet-related health policy tool. They help individuals improve food-choice decisions and reduce their intake of calories and unhealthy food elements, like cholesterol. However, many individuals do not pay attention to nutrition labels or fail to understand them appropriately. According to the literature, thinking and cognitive styles can have significant effects on attention to nutrition labels. To the authors' knowledge, however, the effect of global/local processing on attention to nutrition labels has not been previously studied. Global/local processing encourages individuals to attend to the whole or to specific parts of an object and can have a significant impact on people's visual attention. In this study, this effect was examined in an experimental design using the eye-tracking technique. The research hypothesis was that individuals with local processing would pay more attention to nutrition labels, including nutrition tables and traffic lights. An experiment was designed with two conditions: global and local information processing. Forty participants were randomly assigned to either the global or the local condition, and their processing style was manipulated accordingly. Results supported the hypothesis for nutrition tables but not for traffic lights. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=eye-tracking" title="eye-tracking">eye-tracking</a>, <a href="https://publications.waset.org/abstracts/search?q=nutrition%20labelling" title=" nutrition labelling"> nutrition labelling</a>, <a href="https://publications.waset.org/abstracts/search?q=global%2Flocal%20information%20processing" title=" global/local information processing"> global/local information processing</a>, <a href="https://publications.waset.org/abstracts/search?q=individual%20differences" title=" individual differences"> individual differences</a> </p> <a href="https://publications.waset.org/abstracts/132051/information-processing-and-visual-attention-an-eye-tracking-study-on-nutrition-labels" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/132051.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">159</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">15014</span> The Importance of Visual Communication in Artificial Intelligence</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Manjitsingh%20Rajput">Manjitsingh Rajput</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Visual communication plays an important role in artificial intelligence (AI) because it enables machines to understand and interpret visual information, similar to how humans do. This abstract explores the importance of visual communication in AI, with emphasis on applications such as computer vision, object recognition, image classification, and autonomous systems. 
Going deeper, it considers the deep learning techniques and neural networks that underpin visual understanding, and it discusses challenges facing visual interfaces for AI, such as data scarcity, domain adaptation, and interpretability. The integration of visual communication with other modalities, such as natural language processing and speech recognition, is also explored. Overall, this abstract highlights the critical role that visual communication plays in advancing AI capabilities and enabling machines to perceive and understand the world around them. The methodology examines the importance of visual communication in AI development and implementation, highlighting its potential to enhance the effectiveness and accessibility of AI systems, and it outlines a comprehensive approach to integrating visual elements into AI systems to make them more user-friendly and efficient. In conclusion, visual communication is crucial in AI systems for object recognition, facial analysis, and augmented reality, but challenges such as data quality, interpretability, and ethics must be addressed. Visual communication enhances user experience, decision-making, accessibility, and collaboration, and developers can integrate visual elements to build efficient and accessible AI systems. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=visual%20communication%20AI" title="visual communication AI">visual communication AI</a>, <a href="https://publications.waset.org/abstracts/search?q=computer%20vision" title=" computer vision"> computer vision</a>, <a href="https://publications.waset.org/abstracts/search?q=visual%20aid%20in%20communication" title=" visual aid in communication"> visual aid in communication</a>, <a href="https://publications.waset.org/abstracts/search?q=essence%20of%20visual%20communication." title=" essence of visual communication."> essence of visual communication.</a> </p> <a href="https://publications.waset.org/abstracts/174998/the-importance-of-visual-communication-in-artificial-intelligence" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/174998.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">95</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">15013</span> Voice Signal Processing and Coding in MATLAB Generating a Plasma Signal in a Tesla Coil for a Security System</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Juan%20Jimenez">Juan Jimenez</a>, <a href="https://publications.waset.org/abstracts/search?q=Erika%20Yambay"> Erika Yambay</a>, <a href="https://publications.waset.org/abstracts/search?q=Dayana%20Pilco"> Dayana Pilco</a>, <a href="https://publications.waset.org/abstracts/search?q=Brayan%20Parra"> Brayan Parra</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This paper presents an investigation of voice signal processing and coding using MATLAB, with the objective of generating a plasma signal on a Tesla coil 
within a security system. The approach focuses on using advanced voice signal processing techniques to encode and modulate the audio signal, which is then amplified and applied to a Tesla coil. The result is the creation of a striking visual effect of voice-controlled plasma with specific applications in security systems. The article explores the technical aspects of voice signal processing, the generation of the plasma signal, and its relationship to security. The implications and creative potential of this technology are discussed, highlighting its relevance at the forefront of research in signal processing and visual effect generation in the field of security systems. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=voice%20signal%20processing" title="voice signal processing">voice signal processing</a>, <a href="https://publications.waset.org/abstracts/search?q=voice%20signal%20coding" title=" voice signal coding"> voice signal coding</a>, <a href="https://publications.waset.org/abstracts/search?q=MATLAB" title=" MATLAB"> MATLAB</a>, <a href="https://publications.waset.org/abstracts/search?q=plasma%20signal" title=" plasma signal"> plasma signal</a>, <a href="https://publications.waset.org/abstracts/search?q=Tesla%20coil" title=" Tesla coil"> Tesla coil</a>, <a href="https://publications.waset.org/abstracts/search?q=security%20system" title=" security system"> security system</a>, <a href="https://publications.waset.org/abstracts/search?q=visual%20effects" title=" visual effects"> visual effects</a>, <a href="https://publications.waset.org/abstracts/search?q=audiovisual%20interaction" title=" audiovisual interaction"> audiovisual interaction</a> </p> <a href="https://publications.waset.org/abstracts/170828/voice-signal-processing-and-coding-in-matlab-generating-a-plasma-signal-in-a-tesla-coil-for-a-security-system" class="btn btn-primary btn-sm">Procedia</a> <a 
href="https://publications.waset.org/abstracts/170828.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">92</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">15012</span> Visual Analytics of Higher Order Information for Trajectory Datasets</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Ye%20Wang">Ye Wang</a>, <a href="https://publications.waset.org/abstracts/search?q=Ickjai%20Lee"> Ickjai Lee</a> </p> <p class="card-text"><strong>Abstract:</strong></p> With the widespread adoption of mobile sensing, there is a strong need to handle the trails of moving objects, i.e., trajectories. This paper proposes three visual analytic approaches for higher-order information of trajectory datasets based on the higher-order Voronoi diagram data structure. The proposed approaches reveal geometrical, topological, and directional information. Experimental results demonstrate the applicability and usefulness of the three proposed approaches. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=visual%20analytics" title="visual analytics">visual analytics</a>, <a href="https://publications.waset.org/abstracts/search?q=higher%20order%20information" title=" higher order information"> higher order information</a>, <a href="https://publications.waset.org/abstracts/search?q=trajectory%20datasets" title=" trajectory datasets"> trajectory datasets</a>, <a href="https://publications.waset.org/abstracts/search?q=spatio-temporal%20data" title=" spatio-temporal data"> spatio-temporal data</a> </p> <a href="https://publications.waset.org/abstracts/2630/visual-analytics-of-higher-order-information-for-trajectory-datasets" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/2630.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">402</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">15011</span> An Ultrasonic Signal Processing System for Tomographic Imaging of Reinforced Concrete Structures</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Edwin%20Forero-Garcia">Edwin Forero-Garcia</a>, <a href="https://publications.waset.org/abstracts/search?q=Jaime%20Vitola"> Jaime Vitola</a>, <a href="https://publications.waset.org/abstracts/search?q=Brayan%20Cardenas"> Brayan Cardenas</a>, <a href="https://publications.waset.org/abstracts/search?q=Johan%20Casagua"> Johan Casagua</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This research article presents the integration of electronic and computer systems, which developed an ultrasonic signal processing system that performs the capture, adaptation, and analog-digital conversion 
to later carry out processing and visualization. Signal capture and adaptation were performed by an analog electronic system designed and implemented in stages: (1) impedance coupling; (2) analog filtering; (3) signal amplification. After signal conditioning, the ultrasonic information was digitized by a microcontroller for subsequent processing. Digital processing of the signals was carried out in MATLAB to generate A-scan, B-scan, and D-scan ultrasonic images. Advanced processing was then performed using the synthetic aperture focusing technique (SAFT) to improve the resolution of the B-scan images. The resulting ultrasonic images were displayed in a user interface developed in .NET with Visual Studio. To validate the system, ultrasonic signals were acquired and used for non-invasive inspection of reinforced concrete structures, making it possible to identify the pathologies existing in them. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=acquisition" title="acquisition">acquisition</a>, <a href="https://publications.waset.org/abstracts/search?q=signal%20processing" title=" signal processing"> signal processing</a>, <a href="https://publications.waset.org/abstracts/search?q=ultrasound" title=" ultrasound"> ultrasound</a>, <a href="https://publications.waset.org/abstracts/search?q=SAFT" title=" SAFT"> SAFT</a>, <a href="https://publications.waset.org/abstracts/search?q=HMI" title=" HMI"> HMI</a> </p> <a href="https://publications.waset.org/abstracts/162674/an-ultrasonic-signal-processing-system-for-tomographic-imaging-of-reinforced-concrete-structures" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/162674.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">107</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">15010</span> The Contemporary Visual Spectacle: Critical Visual Literacy</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Lai-Fen%20Yang">Lai-Fen Yang</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In this increasingly visual world, how can we best decipher and understand the many ways that our everyday lives are organized around looking practices and the many images we encounter each day? Indeed, how we interact with and interpret visual images is a basic component of human life. Today, however, we are living in one of the most artificial visual and image-saturated cultures in human history, which makes understanding the complex construction and multiple social functions of visual imagery more important than ever before. 
This paper addresses themes regarding our experience of a visually pervasive, mediated culture, here termed the visual spectacle. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=visual%20culture" title="visual culture">visual culture</a>, <a href="https://publications.waset.org/abstracts/search?q=contemporary" title=" contemporary"> contemporary</a>, <a href="https://publications.waset.org/abstracts/search?q=images" title=" images"> images</a>, <a href="https://publications.waset.org/abstracts/search?q=literacy" title=" literacy"> literacy</a> </p> <a href="https://publications.waset.org/abstracts/9045/the-contemporary-visual-spectacle-critical-visual-literacy" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/9045.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">513</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">15009</span> IoT Based Information Processing and Computing</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Mannan%20Ahmad%20Rasheed">Mannan Ahmad Rasheed</a>, <a href="https://publications.waset.org/abstracts/search?q=Sawera%20Kanwal"> Sawera Kanwal</a>, <a href="https://publications.waset.org/abstracts/search?q=Mansoor%20Ahmad%20Rasheed"> Mansoor Ahmad Rasheed</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The Internet of Things (IoT) has revolutionized the way we collect and process information, making it possible to gather data from a wide range of connected devices and sensors. This has led to the development of IoT-based information processing and computing systems that are capable of handling large amounts of data in real time. 
This paper provides a comprehensive overview of the current state of IoT-based information processing and computing, as well as the key challenges and gaps that need to be addressed. This paper discusses the potential benefits of IoT-based information processing and computing, such as improved efficiency, enhanced decision-making, and cost savings. Despite the numerous benefits of IoT-based information processing and computing, several challenges need to be addressed to realize the full potential of these systems. These challenges include security and privacy concerns, interoperability issues, scalability and reliability of IoT devices, and the need for standardization and regulation of IoT technologies. Moreover, this paper identifies several gaps in the current research related to IoT-based information processing and computing. One major gap is the lack of a comprehensive framework for designing and implementing IoT-based information processing and computing systems. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=IoT" title="IoT">IoT</a>, <a href="https://publications.waset.org/abstracts/search?q=computing" title=" computing"> computing</a>, <a href="https://publications.waset.org/abstracts/search?q=information%20processing" title=" information processing"> information processing</a>, <a href="https://publications.waset.org/abstracts/search?q=Iot%20computing" title=" Iot computing"> Iot computing</a> </p> <a href="https://publications.waset.org/abstracts/165683/iot-based-information-processing-and-computing" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/165683.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">187</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span 
class="badge badge-info">15008</span> Binocular Heterogeneity in Saccadic Suppression</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Evgeny%20Kozubenko">Evgeny Kozubenko</a>, <a href="https://publications.waset.org/abstracts/search?q=Dmitry%20Shaposhnikov"> Dmitry Shaposhnikov</a>, <a href="https://publications.waset.org/abstracts/search?q=Mikhail%20Petrushan"> Mikhail Petrushan</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This work focuses on the binocular characteristics of the phenomenon of perisaccadic suppression in humans perceiving visual objects. This phenomenon manifests as a decrease in the subject's ability to perceive visual information during saccades, which play an important role in purpose-driven behavior and visual perception. It was shown that the impairment of visual perception in the post-saccadic time window is stronger (p < 0.05) in the ipsilateral eye (the eye toward which the saccade occurs). In addition, the observed heterogeneity of post-saccadic suppression between the contralateral and ipsilateral eyes may relate to depth perception. Taking the studied phenomenon into account is important when developing ergonomic control panels for modern operator systems. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=eye%20movement" title="eye movement">eye movement</a>, <a href="https://publications.waset.org/abstracts/search?q=natural%20vision" title=" natural vision"> natural vision</a>, <a href="https://publications.waset.org/abstracts/search?q=saccadic%20suppression" title=" saccadic suppression"> saccadic suppression</a>, <a href="https://publications.waset.org/abstracts/search?q=visual%20perception" title=" visual perception"> visual perception</a> </p> <a href="https://publications.waset.org/abstracts/137677/binocular-heterogeneity-in-saccadic-suppression" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/137677.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">156</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">15007</span> Local Image Features Emerging from Brain Inspired Multi-Layer Neural Network</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Hui%20Wei">Hui Wei</a>, <a href="https://publications.waset.org/abstracts/search?q=Zheng%20Dong"> Zheng Dong</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Object recognition has long been a challenging task in computer vision. Yet the human brain, with the ability to rapidly and accurately recognize visual stimuli, manages this task effortlessly. In the past decades, advances in neuroscience have revealed some neural mechanisms underlying visual processing. In this paper, we present a novel model inspired by the visual pathway in primate brains. This multi-layer neural network model imitates the hierarchical convergent processing mechanism in the visual pathway. 
We show that local image features generated by this model exhibit robust discrimination and even better generalization ability compared with some existing image descriptors. We also demonstrate the application of this model in an object recognition task on image data sets. The result provides strong support for the potential of this model. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=biological%20model" title="biological model">biological model</a>, <a href="https://publications.waset.org/abstracts/search?q=feature%20extraction" title=" feature extraction"> feature extraction</a>, <a href="https://publications.waset.org/abstracts/search?q=multi-layer%20neural%20network" title=" multi-layer neural network"> multi-layer neural network</a>, <a href="https://publications.waset.org/abstracts/search?q=object%20recognition" title=" object recognition"> object recognition</a> </p> <a href="https://publications.waset.org/abstracts/25221/local-image-features-emerging-from-brain-inspired-multi-layer-neural-network" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/25221.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">542</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">15006</span> Applications of Visual Ethnography in Public Anthropology</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Subramaniam%20Panneerselvam">Subramaniam Panneerselvam</a>, <a href="https://publications.waset.org/abstracts/search?q=Gunanithi%20Perumal"> Gunanithi Perumal</a>, <a href="https://publications.waset.org/abstracts/search?q=KP%20Subin"> KP Subin</a> </p> <p 
class="card-text"><strong>Abstract:</strong></p> Visual ethnography documents the culture of a community through visual means, either photography or audio-visual recording. Visual ethnographic techniques are widely used in visual anthropology, where anthropologists use the camera to capture the cultural image of the studied community. There is scope for subjectivity when the culture is documented by an outsider, but the emergence of public anthropology gives participants the opportunity to document their own culture, which creates a need to equip them with visual ethnography skills. Mobile phone technology now gives everyone the means to capture moments instantly, and visual ethnography allows multiple interpretations by its audiences. This study explores the effectiveness of visual ethnography among tribal youth from a public anthropology perspective. A case study was conducted to train the tribal youth of the Nilgiris in visual ethnography, and the outcome of the experiment is shared in this paper.
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=visual%20ethnography" title="visual ethnography">visual ethnography</a>, <a href="https://publications.waset.org/abstracts/search?q=visual%20anthropology" title=" visual anthropology"> visual anthropology</a>, <a href="https://publications.waset.org/abstracts/search?q=public%20anthropology" title=" public anthropology"> public anthropology</a>, <a href="https://publications.waset.org/abstracts/search?q=multiple-interpretation" title=" multiple-interpretation"> multiple-interpretation</a>, <a href="https://publications.waset.org/abstracts/search?q=case%20study" title=" case study"> case study</a> </p> <a href="https://publications.waset.org/abstracts/127577/applications-of-visual-ethnography-in-public-anthropology" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/127577.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">183</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">15005</span> The Analogy of Visual Arts and Visual Literacy</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Lindelwa%20Pepu">Lindelwa Pepu</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Visual Arts and Visual Literacy are defined as distinct from one another. Visual Arts are known for art forms such as drawing, painting, and photography, to name a few, while Visual Literacy is known for learning through images. The visual literacy phenomenon may be traced to the use of images, which was first established for creating memories and enjoyment.
Over time, images became a central and essential means of contact between people. Gradually, images became a means of interpreting and understanding words through visuals, that is, Visual Arts. The purpose of this study is to present the analogy between the two terms, Visual Arts and Visual Literacy, which are defined and compared through early practicing visual artists as well as relevant researchers to reveal how they interrelate. This is a qualitative study that uses an interpretive approach to understand and explain the subject of the study. The results reveal a correspondence between the two terms in the work of writers of early and recent years. The study highlights the significance of the two terms and the role they play in relation to other fields of study. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=visual%20arts" title="visual arts">visual arts</a>, <a href="https://publications.waset.org/abstracts/search?q=visual%20literacy" title=" visual literacy"> visual literacy</a>, <a href="https://publications.waset.org/abstracts/search?q=pictures" title=" pictures"> pictures</a>, <a href="https://publications.waset.org/abstracts/search?q=images" title=" images"> images</a> </p> <a href="https://publications.waset.org/abstracts/165940/the-analogy-of-visual-arts-and-visual-literacy" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/165940.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">166</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">15004</span> A Comparative Study of Global Power Grids and Global Fossil Energy Pipelines Using GIS Technology</h5> <div class="card-body"> <p
class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Wenhao%20Wang">Wenhao Wang</a>, <a href="https://publications.waset.org/abstracts/search?q=Xinzhi%20Xu"> Xinzhi Xu</a>, <a href="https://publications.waset.org/abstracts/search?q=Limin%20Feng"> Limin Feng</a>, <a href="https://publications.waset.org/abstracts/search?q=Wei%20Cong"> Wei Cong</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This paper comprehensively investigates the current development status of global power grids and fossil energy pipelines (oil and natural gas) and proposes a standard visual platform for global power and fossil energy based on Geographic Information System (GIS) technology. On this platform, a series of systematic visual models is built from global spatial data and systematic energy and power parameters. Using the platform, the current Global Power Grids Map and Global Fossil Energy Pipelines Map are plotted, covering more than 140 countries and regions across the world. Using multi-scale data fusion and modeling methods, a basic database for the world&rsquo;s fossil energy pipelines and power grids information system is established, providing important data support for global fossil energy and electricity research. Finally, through a systematic comparative study of global fossil energy pipelines and power grids, the general status of global fossil energy and electricity development is reviewed, and the energy transition in key areas is evaluated and analyzed. Through the comparison of fossil energy and clean energy, directions for further research on clean development and energy transition are identified.
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=energy%20transition" title="energy transition">energy transition</a>, <a href="https://publications.waset.org/abstracts/search?q=geographic%20information%20system" title=" geographic information system"> geographic information system</a>, <a href="https://publications.waset.org/abstracts/search?q=fossil%20energy" title=" fossil energy"> fossil energy</a>, <a href="https://publications.waset.org/abstracts/search?q=power%20systems" title=" power systems"> power systems</a> </p> <a href="https://publications.waset.org/abstracts/120933/a-comparative-study-of-global-power-grids-and-global-fossil-energy-pipelines-using-gis-technology" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/120933.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">150</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">15003</span> Instructional Consequences of the Transiency of Spoken Words </h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Slava%20Kalyuga">Slava Kalyuga</a>, <a href="https://publications.waset.org/abstracts/search?q=Sujanya%20Sombatteera"> Sujanya Sombatteera </a> </p> <p class="card-text"><strong>Abstract:</strong></p> In multimedia learning, written text is often transformed into spoken (narrated) text. This transient information may overwhelm limited processing capacity of working memory and inhibit learning instead of improving it. The paper reviews recent empirical studies in modality and verbal redundancy effects within a cognitive load framework and outlines conditions under which negative effects of transiency may occur. 
According to the modality effect, textual information accompanying pictures should be presented in an auditory rather than visual form in order to engage both available channels of working memory (auditory and visual) instead of only one. However, some studies failed to replicate the modality effect and found differences opposite to those expected. Likewise, according to the multimedia redundancy effect, the same information should not be presented simultaneously in different modalities, to avoid the unnecessary cognitive load imposed by integrating redundant sources of information. However, a few studies failed to replicate the multimedia redundancy effect as well. The transiency of information is used to explain these conflicting results. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=cognitive%20load" title="cognitive load">cognitive load</a>, <a href="https://publications.waset.org/abstracts/search?q=transient%20information" title=" transient information"> transient information</a>, <a href="https://publications.waset.org/abstracts/search?q=modality%20effect" title=" modality effect"> modality effect</a>, <a href="https://publications.waset.org/abstracts/search?q=verbal%20redundancy%20effect" title=" verbal redundancy effect"> verbal redundancy effect</a> </p> <a href="https://publications.waset.org/abstracts/14122/instructional-consequences-of-the-transiency-of-spoken-words" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/14122.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">380</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">15002</span> Digital Watermarking Based on Visual Cryptography and Histogram</h5> <div class="card-body"> <p
class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=R.%20Rama%20Kishore">R. Rama Kishore</a>, <a href="https://publications.waset.org/abstracts/search?q=Sunesh"> Sunesh</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Robust and secure watermarking algorithms and their optimization have become the need of the hour. A watermarking algorithm is presented to achieve copyright protection for the owner based on visual cryptography, the histogram shape property, and entropy. Both the host image and the watermark are preprocessed: the host image with a Butterworth filter, and the watermark with visual cryptography. Applying visual cryptography to the watermark generates two shares; one share is used for embedding the watermark, and the other for resolving any dispute with the aid of a trusted authority. Using the histogram shape makes the process more robust against geometric and signal processing attacks. The combination of visual cryptography, the Butterworth filter, histogram, and entropy makes the algorithm more robust and imperceptible and ensures copyright protection for the owner.
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=digital%20watermarking" title="digital watermarking">digital watermarking</a>, <a href="https://publications.waset.org/abstracts/search?q=visual%20cryptography" title=" visual cryptography"> visual cryptography</a>, <a href="https://publications.waset.org/abstracts/search?q=histogram" title=" histogram"> histogram</a>, <a href="https://publications.waset.org/abstracts/search?q=butter%20worth%20filter" title=" Butterworth filter"> Butterworth filter</a> </p> <a href="https://publications.waset.org/abstracts/48320/digital-watermarking-based-on-visual-cryptography-and-histogram" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/48320.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">357</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">15001</span> A Visual Inspection System for Automotive Sheet Metal Chassis Parts Produced with Cold-Forming Method</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=I%CC%87mren%20%C3%96zt%C3%BCrk%20Y%C4%B1lmaz">İmren Öztürk Yılmaz</a>, <a href="https://publications.waset.org/abstracts/search?q=Abdullah%20Yasin%20Bilici"> Abdullah Yasin Bilici</a>, <a href="https://publications.waset.org/abstracts/search?q=Yasin%20Atalay%20Candemir"> Yasin Atalay Candemir</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The system consists of four main elements: a motion system, an image acquisition system, image processing software, and a control interface. Parts coming off the production line enter the image processing system on the conveyor belt at the end of the line.
A 3D scan of the produced part is performed with the laser scanning system integrated at the system entry. The 3D scan determines the position and angle at which the part enters the system; from these data, the designed software calculates parameters such as the part origin and conveyor speed and informs the robot of the position where it will pick up the part. The robot then takes the part from the belt conveyor and presents it to high-resolution cameras for quality control. Measurements are carried out with a maximum error of 20 microns, as determined by experiments. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=quality%20control" title="quality control">quality control</a>, <a href="https://publications.waset.org/abstracts/search?q=industry%204.0" title=" industry 4.0"> industry 4.0</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20processing" title=" image processing"> image processing</a>, <a href="https://publications.waset.org/abstracts/search?q=automated%20fault%20detection" title=" automated fault detection"> automated fault detection</a>, <a href="https://publications.waset.org/abstracts/search?q=digital%20visual%20inspection" title=" digital visual inspection"> digital visual inspection</a> </p> <a href="https://publications.waset.org/abstracts/161231/a-visual-inspection-system-for-automotive-sheet-metal-chasis-parts-produced-with-cold-forming-method" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/161231.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">113</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">15000</span>
Artificial Generation of Visual Evoked Potential to Enhance Visual Ability</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=A.%20Vani">A. Vani</a>, <a href="https://publications.waset.org/abstracts/search?q=M.%20N.%20Mamatha"> M. N. Mamatha </a> </p> <p class="card-text"><strong>Abstract:</strong></p> Visual signal processing in human beings occurs in the occipital lobe of the brain. The signals generated in the brain are universal across human beings and are called Visual Evoked Potentials (VEPs). Generally, visually impaired people lose sight because of severe damage to the eyes&rsquo; natural photosensors alone, while the occipital lobe remains functional. In this paper, a technique for artificially generating VEPs is proposed to enhance the visual ability of the subject. The system uses electrical photoreceptors to capture an image and processes the image to detect and recognize the subject or object. This signal is further processed and can be transmitted wirelessly to a BioMEMS implanted into the occipital lobe of the patient&rsquo;s brain. The proposed BioMEMS consists of an array of electrodes that generate neuron potentials similar to the VEPs of sighted people.
Thus, the neurons receive visual data from the BioMEMS, which helps generate partial vision for the visually challenged patient. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=BioMEMS" title="BioMEMS">BioMEMS</a>, <a href="https://publications.waset.org/abstracts/search?q=neuro-prosthetic" title=" neuro-prosthetic"> neuro-prosthetic</a>, <a href="https://publications.waset.org/abstracts/search?q=openvibe" title=" openvibe"> openvibe</a>, <a href="https://publications.waset.org/abstracts/search?q=visual%20evoked%20potential" title=" visual evoked potential"> visual evoked potential</a> </p> <a href="https://publications.waset.org/abstracts/51396/artificial-generation-of-visual-evoked-potential-to-enhance-visual-ability" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/51396.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">315</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">14999</span> Game Space Program: Therapy for Children with Autism Spectrum Disorder</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Khodijah%20Salimah">Khodijah Salimah</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The Game Space Program covers the design and development of a game for the therapy of autistic children who have problems with sensory processing and integration. The program is the basis for using game space to extend therapy into many areas and support an autistic child&rsquo;s ability to think through visual perception. These problems can be treated by combining sensory experience and integration with visual experience, so the child learns how to think and how to learn with visual perception.
This perception can be supported through visual thinking derived from the sensory elements of the game space, as the virtual healthcare facilities are adjusted to the sensory needs of children with autism. This paper analyzes the potential of virtual visual thinking for treating autism with the game space program. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=autism" title="autism">autism</a>, <a href="https://publications.waset.org/abstracts/search?q=game%20space%20program" title=" game space program"> game space program</a>, <a href="https://publications.waset.org/abstracts/search?q=sensory" title=" sensory"> sensory</a>, <a href="https://publications.waset.org/abstracts/search?q=virtual%20healthcare%20facilities" title=" virtual healthcare facilities"> virtual healthcare facilities</a>, <a href="https://publications.waset.org/abstracts/search?q=visual%20perception" title=" visual perception"> visual perception</a> </p> <a href="https://publications.waset.org/abstracts/55198/game-space-program-therapy-for-children-with-autism-spectrum-disorder" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/55198.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">314</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">14998</span> The Impact of Artificial Intelligence on Food Nutrition</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Antonyous%20Fawzy%20Boshra%20Girgis">Antonyous Fawzy Boshra Girgis</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Nutrition labels are diet-related health policies.
They help individuals improve food-choice decisions and reduce intake of calories and unhealthy food elements, like cholesterol. However, many individuals do not pay attention to nutrition labels or fail to appropriately understand them. According to the literature, thinking and cognitive styles can have significant effects on attention to nutrition labels. According to the author's knowledge, the effect of global/local processing on attention to nutrition labels has not been previously studied. Global/local processing encourages individuals to attend to the whole/specific parts of an object and can have a significant impact on people's visual attention. In this study, this effect was examined with an experimental design using the eye-tracking technique. The research hypothesis was that individuals with local processing would pay more attention to nutrition labels, including nutrition tables and traffic lights. An experiment was designed with two conditions: global and local information processing. Forty participants were randomly assigned to either global or local conditions, and their processing style was manipulated accordingly. Results supported the hypothesis for nutrition tables but not for traffic lights. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=nutrition" title="nutrition">nutrition</a>, <a href="https://publications.waset.org/abstracts/search?q=public%20health" title=" public health"> public health</a>, <a href="https://publications.waset.org/abstracts/search?q=SA%20Harvest" title=" SA Harvest"> SA Harvest</a>, <a href="https://publications.waset.org/abstracts/search?q=foodeye-tracking" title=" foodeye-tracking"> foodeye-tracking</a>, <a href="https://publications.waset.org/abstracts/search?q=nutrition%20labelling" title=" nutrition labelling"> nutrition labelling</a>, <a href="https://publications.waset.org/abstracts/search?q=global%2Flocal%20information%20processing" title=" global/local information processing"> global/local information processing</a>, <a href="https://publications.waset.org/abstracts/search?q=individual%20differencesmobile%20computing" title=" individual differencesmobile computing"> individual differencesmobile computing</a>, <a href="https://publications.waset.org/abstracts/search?q=cloud%20computing" title=" cloud computing"> cloud computing</a>, <a href="https://publications.waset.org/abstracts/search?q=nutrition%20label%20use" title=" nutrition label use"> nutrition label use</a>, <a href="https://publications.waset.org/abstracts/search?q=nutrition%20management" title=" nutrition management"> nutrition management</a>, <a href="https://publications.waset.org/abstracts/search?q=barcode%20scanning" title=" barcode scanning"> barcode scanning</a> </p> <a href="https://publications.waset.org/abstracts/188882/the-impact-of-artificial-intelligence-on-food-nutrition" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/188882.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">40</span> </span> </div> </div> <div class="card 
paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">14997</span> Visual Thing Recognition with Binary Scale-Invariant Feature Transform and Support Vector Machine Classifiers Using Color Information</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Wei-Jong%20Yang">Wei-Jong Yang</a>, <a href="https://publications.waset.org/abstracts/search?q=Wei-Hau%20Du"> Wei-Hau Du</a>, <a href="https://publications.waset.org/abstracts/search?q=Pau-Choo%20Chang"> Pau-Choo Chang</a>, <a href="https://publications.waset.org/abstracts/search?q=Jar-Ferr%20Yang"> Jar-Ferr Yang</a>, <a href="https://publications.waset.org/abstracts/search?q=Pi-Hsia%20Hung"> Pi-Hsia Hung</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The demand for smart visual thing recognition in various devices has increased rapidly for daily smart production, living, and learning systems in recent years. This paper proposes a visual thing recognition system that combines the binary scale-invariant feature transform (SIFT), a bag-of-words model (BoW), and a support vector machine (SVM) using color information. Traditional SIFT features and SVM classifiers use only gray-level information, yet color remains an important cue for visual thing recognition. With color-based SIFT features and an SVM, unreliable matching pairs can be discarded, increasing the robustness of matching tasks. The experimental results show that the proposed object recognition system with the color-assisted SIFT and SVM classifier achieves a higher recognition rate than the traditional gray-level SIFT and SVM classification in various situations.
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=color%20moments" title="color moments">color moments</a>, <a href="https://publications.waset.org/abstracts/search?q=visual%20thing%20recognition%20system" title=" visual thing recognition system"> visual thing recognition system</a>, <a href="https://publications.waset.org/abstracts/search?q=SIFT" title=" SIFT"> SIFT</a>, <a href="https://publications.waset.org/abstracts/search?q=color%20SIFT" title=" color SIFT"> color SIFT</a> </p> <a href="https://publications.waset.org/abstracts/62857/visual-thing-recognition-with-binary-scale-invariant-feature-transform-and-support-vector-machine-classifiers-using-color-information" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/62857.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">467</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">14996</span> A Comparison of Anger State and Trait Anger Among Adolescents with and without Visual Impairment</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Sehmus%20Aslan">Sehmus Aslan</a>, <a href="https://publications.waset.org/abstracts/search?q=Sibel%20Karacaoglu"> Sibel Karacaoglu</a>, <a href="https://publications.waset.org/abstracts/search?q=Cengiz%20Sevgin"> Cengiz Sevgin</a>, <a href="https://publications.waset.org/abstracts/search?q=Ummuhan%20Bas%20Aslan"> Ummuhan Bas Aslan</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Objective: Anger expression style is an important moderator of anger’s effects on the person and the person’s environment.
Anger and anger expression have become important constructs in identifying individuals at high risk for psychological difficulties. To our knowledge, there is no information about the anger and anger expression of adolescents with visual impairment. The aim of this study was to compare anger and anger expression among adolescents with and without visual impairment. Methods: Thirty-eight adolescents with visual impairment (18 female, 20 male) and 46 adolescents without visual impairment (22 female, 24 male), 84 adolescents in total aged 12 to 15 years, participated in the study. The anger and anger expression of the participants were assessed with the State-Trait Anger Scale (STAS). The STAS, a self-report questionnaire, is designed to measure the experience and expression of anger. The STAS has four subscales: continuous (trait) anger, anger-in, anger-out, and anger control. The reliability and validity of the STAS are well established among adolescents. The Mann-Whitney U test was used for statistical analysis. Results: No significant differences were found in the continuous anger and anger-out scores between adolescents with and without visual impairment (p > 0.05). On the other hand, there were differences in the anger control and anger-in scores between adolescents with and without visual impairment (p < 0.05). The anger control score of adolescents with visual impairment was higher than that of adolescents without visual impairment, while the adolescents with visual impairment had a lower anger-in score. Conclusions: These results suggest that there is no difference in anger level between adolescents with and without visual impairment, but there is a difference in anger expression.
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=adolescent" title="adolescent">adolescent</a>, <a href="https://publications.waset.org/abstracts/search?q=anger" title=" anger"> anger</a>, <a href="https://publications.waset.org/abstracts/search?q=impaired" title=" impaired"> impaired</a>, <a href="https://publications.waset.org/abstracts/search?q=visual" title=" visual"> visual</a> </p> <a href="https://publications.waset.org/abstracts/62109/a-comparison-of-anger-state-and-trait-anger-among-adolescents-with-and-without-visual-impairment" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/62109.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">413</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">14995</span> Enhanced Visual Sharing Method for Medical Image Security</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Kalaivani%20Pachiappan">Kalaivani Pachiappan</a>, <a href="https://publications.waset.org/abstracts/search?q=Sabari%20Annaji"> Sabari Annaji</a>, <a href="https://publications.waset.org/abstracts/search?q=Nithya%20Jayakumar"> Nithya Jayakumar</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In recent years, information security has emerged as one of the foremost challenges in many fields. Security is a particularly serious issue in medical information systems, which handle reports such as patients’ diagnoses and medical images. These sensitive data require confidentiality during transmission. Image sharing is a secure and fault-tolerant method for protecting digital images that can use cryptographic techniques to reduce information loss.
In this paper, a visual sharing method is proposed that embeds the patient’s details into a medical image. The medical image can then be divided into numerous shares held by different users, and the original patient details and medical image can be recovered by gathering the shares. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=information%20security" title="information security">information security</a>, <a href="https://publications.waset.org/abstracts/search?q=medical%20images" title=" medical images"> medical images</a>, <a href="https://publications.waset.org/abstracts/search?q=cryptography" title=" cryptography"> cryptography</a>, <a href="https://publications.waset.org/abstracts/search?q=visual%20sharing" title=" visual sharing"> visual sharing</a> </p> <a href="https://publications.waset.org/abstracts/2990/enhanced-visual-sharing-method-for-medical-image-security" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/2990.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">414</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">14994</span> The Impact of Online Learning on Visual Learners</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Ani%20Demetrashvili">Ani Demetrashvili</a> </p> <p class="card-text"><strong>Abstract:</strong></p> As online learning continues to reshape the landscape of education, questions arise regarding its efficacy for diverse learning styles, particularly for visual learners.
This abstract delves into the impact of online learning on visual learners, exploring how digital mediums influence their educational experience and how educational platforms can be optimized to cater to their needs. Visual learners comprise a significant portion of the student population, characterized by their preference for visual aids such as diagrams, charts, and videos to comprehend and retain information. Traditional classroom settings often struggle to accommodate these learners adequately, relying heavily on auditory and written forms of instruction. The advent of online learning presents both opportunities and challenges in addressing the needs of visual learners. Online learning platforms offer a plethora of multimedia resources, including interactive simulations, virtual labs, and video lectures, which align closely with the preferences of visual learners. These platforms have the potential to enhance engagement, comprehension, and retention by presenting information in visually stimulating formats. However, the effectiveness of online learning for visual learners hinges on various factors, including the design of learning materials, user interface, and instructional strategies. Research into the impact of online learning on visual learners encompasses a multidisciplinary approach, drawing from fields such as cognitive psychology, education, and human-computer interaction. Studies employ qualitative and quantitative methods to assess visual learners' preferences, cognitive processes, and learning outcomes in online environments. Surveys, interviews, and observational studies provide insights into learners' preferences for specific types of multimedia content and interactive features. Cognitive tasks, such as memory recall and concept mapping, shed light on the cognitive mechanisms underlying learning in digital settings. Eye-tracking studies offer valuable data on attentional patterns and information processing during online learning activities. 
The findings from research on the impact of online learning on visual learners have significant implications for educational practice and technology design. Educators and instructional designers can use insights from this research to create more engaging and effective learning materials for visual learners. Strategies such as incorporating visual cues, providing interactive activities, and scaffolding complex concepts with multimedia resources can enhance the learning experience for visual learners in online environments. Moreover, online learning platforms can leverage the findings to improve their user interface and features, making them more accessible and inclusive for visual learners. Customization options, adaptive learning algorithms, and personalized recommendations based on learners' preferences and performance can enhance the usability and effectiveness of online platforms for visual learners. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=online%20learning" title="online learning">online learning</a>, <a href="https://publications.waset.org/abstracts/search?q=visual%20learners" title=" visual learners"> visual learners</a>, <a href="https://publications.waset.org/abstracts/search?q=digital%20education" title=" digital education"> digital education</a>, <a href="https://publications.waset.org/abstracts/search?q=technology%20in%20learning" title=" technology in learning"> technology in learning</a> </p> <a href="https://publications.waset.org/abstracts/187032/the-impact-of-online-learning-on-visual-learners" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/187032.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">38</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span 
class="badge badge-info">14993</span> Visual Identity Components of Tourist Destination</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Petra%20Barisic">Petra Barisic</a>, <a href="https://publications.waset.org/abstracts/search?q=Zrinka%20Blazevic"> Zrinka Blazevic</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In the world of modern communications, visual identity has a predominant influence on the overall success of tourist destinations; despite this, the problem of designing a thriving tourist destination visual identity and its components is hardly addressed. This study highlights the importance of building and managing the visual identity of a tourist destination and, based on an empirical study of Croatia, a well-known Mediterranean destination, analyses three main components of tourist destination visual identity: name, slogan, and logo. Moreover, the paper shows how respondents perceive each component of Croatia’s visual identity. According to the study, the logo is the most important, followed by the name and the slogan. The research also reveals that the Croatian economy lags behind developed countries in understanding the importance of visual identity and its influence on the achievement of marketing goals. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=components%20of%20visual%20identity" title="components of visual identity">components of visual identity</a>, <a href="https://publications.waset.org/abstracts/search?q=Croatia" title=" Croatia"> Croatia</a>, <a href="https://publications.waset.org/abstracts/search?q=tourist%20destination" title=" tourist destination"> tourist destination</a>, <a href="https://publications.waset.org/abstracts/search?q=visual%20identity" title=" visual identity "> visual identity </a> </p> <a href="https://publications.waset.org/abstracts/6602/visual-identity-components-of-tourist-destination" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/6602.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">1050</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">14992</span> Visual and Verbal Imagination in a Bilingual Context</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Erzsebet%20Gulyas">Erzsebet Gulyas</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Our inner world, our imagination, and our way of thinking are invisible and inaudible to others, but they influence our behavior. To investigate the relationship between thinking and language use, we created a test in Hungarian using ideas from the literature. The test prompts participants to make decisions based on visual images derived from the written information presented. 
The test result correlates (r=0.5) with self-assessed vividness of visual imagery, with the visual and verbal components of internal representations measured by self-report questionnaires, and with responses to language-use questions in the background questionnaire. In total, 56 university students completed the tests, and SPSS was used to analyze the data. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=imagination" title="imagination">imagination</a>, <a href="https://publications.waset.org/abstracts/search?q=internal%20representations" title=" internal representations"> internal representations</a>, <a href="https://publications.waset.org/abstracts/search?q=verbalization" title=" verbalization"> verbalization</a>, <a href="https://publications.waset.org/abstracts/search?q=visualization" title=" visualization"> visualization</a> </p> <a href="https://publications.waset.org/abstracts/182122/visual-and-verbal-imagination-in-a-bilingual-context" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/182122.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">54</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">14991</span> To Estimate the Association between Visual Stress and Visual Perceptual Skills</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Vijay%20Reena%20Durai">Vijay Reena Durai</a>, <a href="https://publications.waset.org/abstracts/search?q=Krithica%20Srinivasan"> Krithica Srinivasan</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Introduction: The two fundamental skills involved in the growth and wellbeing of any child can be 
categorized into visual-motor and perceptual skills. Visual stress is a disorder characterized by visual discomfort, blurred vision, misspelling words, skipping lines, and letters bunching together. There is a need to understand the deficits in perceptual skills among children with visual stress. Aim: To estimate the association between visual stress and visual perceptual skills. Objective: To compare the visual perceptual skills of children with and without visual stress. Methodology: Children between 8 and 15 years of age participated in this cross-sectional study. All children with monocular visual acuity better than or equal to 6/6 were included. Visual perceptual skills were measured using the Test of Visual Perceptual Skills (TVPS). Reading speed was measured with the chosen colored overlay using the Wilkins reading chart, and the pattern glare score was estimated using a 3 cpd grating. Visual stress was defined as a change in reading speed of greater than or equal to 10% and a pattern glare score of greater than or equal to 4. Results: In total, 252 children participated in this study, with a male-to-female ratio of 3:2. The majority of the children preferred a magenta (28%) or yellow (25%) colored overlay for reading. There was a significant difference between the two groups only in sequential memory skills (MD=1.24±0.6, p<0.04, 95% CI 0.01-2.43). The prevalence of visual stress in this group was found to be 31% (n=78). Binary logistic regression showed that the odds ratio for poor visual perceptual skills among children with visual stress was 2.85 (95% CI 1.08-7.49). Conclusion: Children with visual stress were found to have nearly three times the odds of poor visual perceptual skills compared with children without visual stress. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=visual%20stress" title="visual stress">visual stress</a>, <a href="https://publications.waset.org/abstracts/search?q=visual%20perceptual%20skills" title=" visual perceptual skills"> visual perceptual skills</a>, <a href="https://publications.waset.org/abstracts/search?q=colored%20overlay" title=" colored overlay"> colored overlay</a>, <a href="https://publications.waset.org/abstracts/search?q=pattern%20glare" title=" pattern glare"> pattern glare</a> </p> <a href="https://publications.waset.org/abstracts/41580/to-estimate-the-association-between-visual-stress-and-visual-perceptual-skills" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/41580.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">388</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">14990</span> The Processing of Implicit Stereotypes in Everyday Scene Perception</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Magali%20Mari">Magali Mari</a>, <a href="https://publications.waset.org/abstracts/search?q=Fabrice%20Clement"> Fabrice Clement</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The present study investigated the influence of implicit stereotypes on adults’ visual information processing, using an eye-tracking device. Implicit stereotyping is an automatic and implicit process; it happens relatively quickly, outside of awareness. In the presence of a member of a social group, a set of expectations about the characteristics of this social group appears automatically in people’s minds. 
The study aimed to shed light on the cognitive processes involved in stereotyping and to further investigate the use of eye movements to measure implicit stereotypes. With an eye-tracking device, the eye movements of participants were analyzed while they viewed everyday scenes depicting women and men in congruent or incongruent gender-role activities (e.g., a woman ironing or a man ironing). The settings of these scenes had to be analyzed to infer the character’s role. Participants also completed an implicit association test that combined the concept of gender with attributes of occupation (home/work), with reaction times measured to assess participants’ implicit stereotypes about gender. The results showed that implicit stereotypes do influence people’s visual attention: within a fraction of a second, the number of returns differed significantly between stereotypical and counter-stereotypical scenes, meaning that participants interpreted the scene as a whole before identifying the character and predicted whether, in such a situation, the character should be a woman or a man. The study also showed that eye movements could be used as a fast and reliable supplement to traditional implicit association tests for measuring implicit stereotypes. Altogether, this research provides further understanding of implicit stereotype processing as well as a natural method for studying implicit stereotypes. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=eye-tracking" title="eye-tracking">eye-tracking</a>, <a href="https://publications.waset.org/abstracts/search?q=implicit%20stereotypes" title=" implicit stereotypes"> implicit stereotypes</a>, <a href="https://publications.waset.org/abstracts/search?q=social%20cognition" title=" social cognition"> social cognition</a>, <a href="https://publications.waset.org/abstracts/search?q=visual%20attention" title=" visual attention"> visual attention</a> </p> <a href="https://publications.waset.org/abstracts/116438/the-processing-of-implicit-stereotypes-in-everyday-scene-perception" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/116438.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">159</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">14989</span> The Differences and Similarities in Neurocognitive Deficits in Mild Traumatic Brain Injury and Depression</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Boris%20Ershov">Boris Ershov</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Depression is the most common mood disorder experienced by patients who have sustained a traumatic brain injury (TBI) and is associated with poorer cognitive functional outcomes. However, in some cases, similar cognitive impairments can also be observed in depression. There is not enough information about the features of the cognitive deficit in patients with TBI in relation to patients with depression. 
TBI patients without depressive symptoms (TBInD, n=25), TBI patients with depressive symptoms (TBID, n=31), and 28 patients with bipolar II disorder (BP) were included in the study. There were no significant differences between participants with respect to age, handedness, and educational level. The patients’ clinical status was determined using the Montgomery–Asberg Depression Rating Scale (MADRS). All participants completed a cognitive battery (the Brief Assessment of Cognition in Affective Disorders (BAC-A)). Additionally, the Rey–Osterrieth Complex Figure (ROCF) was used to assess visuospatial construction abilities and visual memory, as well as planning and organizational skills. Compared to BP, TBInD and TBID showed significant impairments in visuomotor abilities and in verbal and visual memory. There were no significant differences between the BP and TBID groups in working memory, speed of information processing, or problem solving. The interference effect (cognitive inhibition) was significantly greater in TBInD and TBID compared to BP. Memory bias towards mood-related information was greater in BP and TBID than in TBInD. These results suggest that depressive symptoms are associated with impairments in some executive functions, combined with a decreased speed of information processing. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=bipolar%20II%20disorder" title="bipolar II disorder">bipolar II disorder</a>, <a href="https://publications.waset.org/abstracts/search?q=depression" title=" depression"> depression</a>, <a href="https://publications.waset.org/abstracts/search?q=neurocognitive%20deficits" title=" neurocognitive deficits"> neurocognitive deficits</a>, <a href="https://publications.waset.org/abstracts/search?q=traumatic%20brain%20injury" title=" traumatic brain injury"> traumatic brain injury</a> </p> <a href="https://publications.waset.org/abstracts/59107/the-differences-and-similarities-in-neurocognitive-deficits-in-mild-traumatic-brain-injury-and-depression" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/59107.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">347</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">14988</span> Correlation Analysis between Sensory Processing Sensitivity (SPS), Meares-Irlen Syndrome (MIS) and Dyslexia</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Kaaryn%20M.%20Cater">Kaaryn M. Cater</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Students with sensory processing sensitivity (SPS), Meares-Irlen Syndrome (MIS) and dyslexia can become overwhelmed and struggle to thrive in traditional tertiary learning environments. An estimated 50% of tertiary students who disclose learning related issues are dyslexic. This study explores the relationship between SPS, MIS and dyslexia. 
Baseline measures will be analysed to establish any correlation between these three minority methods of information processing. SPS is an innate sensitivity trait found in 15-20% of the population and has been identified in over 100 species of animals. Humans with SPS are referred to as Highly Sensitive People (HSP), and the measure of HSP is a 27-item self-report scale known as the Highly Sensitive Person Scale (HSPS). A 2016 study conducted by the author established baseline data for HSP students in a tertiary institution in New Zealand. The results of the study showed that all participating HSP students believed the knowledge of SPS to be life-changing and useful in managing life and study; in addition, they believed that all tutors and incoming students should be given information on SPS. MIS is a visual processing and perception disorder that is found in approximately 10% of the population and has a variety of symptoms including visual fatigue, headaches, and nausea. One way to ease some of these symptoms is through the use of colored lenses or overlays. Dyslexia is a complex phonologically based information-processing variation present in approximately 10% of the population. An estimated 50% of dyslexics are thought to have MIS. The study exploring possible correlations between these minority forms of information processing is due to begin in February 2017. An invitation will be extended to all first-year students enrolled in degree programmes across all faculties and schools within the institution. An estimated 900 students will be eligible to participate in the study. Participants will be asked to complete a battery of online questionnaires including the Highly Sensitive Person Scale, the International Dyslexia Association adult self-assessment, and the adapted Irlen indicator. All three scales have been used extensively in the literature and have been validated among many populations. 
All participants whose scores on any (or some) of the three questionnaires suggest a minority method of information processing will receive an invitation to meet with a learning advisor and will be given access to counselling services if they choose. Meeting with a learning advisor is not mandatory, and some participants may choose not to receive help. Data will be collected using the Question Pro platform, and baseline data will be analysed using correlation and regression analysis to identify relationships and predictors among SPS, MIS, and dyslexia. This study forms part of a larger three-year longitudinal study, and participants will be required to complete questionnaires at annual intervals in subsequent years of the study until completion of (or withdrawal from) their degree. At these data collection points, participants will be questioned on any additional support received relating to their minority method(s) of information processing. Data from this study will be available by April 2017. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=dyslexia" title="dyslexia">dyslexia</a>, <a href="https://publications.waset.org/abstracts/search?q=highly%20sensitive%20person%20%28HSP%29" title=" highly sensitive person (HSP)"> highly sensitive person (HSP)</a>, <a href="https://publications.waset.org/abstracts/search?q=Meares-Irlen%20Syndrome%20%28MIS%29" title=" Meares-Irlen Syndrome (MIS)"> Meares-Irlen Syndrome (MIS)</a>, <a href="https://publications.waset.org/abstracts/search?q=minority%20forms%20of%20information%20processing" title=" minority forms of information processing"> minority forms of information processing</a>, <a href="https://publications.waset.org/abstracts/search?q=sensory%20processing%20sensitivity%20%28SPS%29" title=" sensory processing sensitivity (SPS)"> sensory processing sensitivity (SPS)</a> </p> <a href="https://publications.waset.org/abstracts/60208/correlation-analysis-between-sensory-processing-sensitivity-sps-meares-irlen-syndrome-mis-and-dyslexia" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/60208.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">245</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">14987</span> Social-Cognitive Aspects of Interpretation: Didactic Approaches in Language Processing and English as a Second Language Difficulties in Dyslexia</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Schnell%20Zsuzsanna">Schnell Zsuzsanna</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Background: The interpretation of written texts, language processing in the visual domain, in other words, atypical reading 
abilities, also known as dyslexia, is an ever-growing phenomenon in today’s societies and educational communities. The much-researched problem affects cognitive abilities and, though coupled with normal intelligence, typically manifests as difficulties in differentiating sounds and orthography and in the holistic processing of written words. The factors of susceptibility are varied: social, cognitive-psychological, and linguistic factors interact with each other. Methods: The research will explain the psycholinguistics of dyslexia on the basis of several empirical experiments and demonstrate how the domain-general abilities of inhibition, retrieval from the mental lexicon, priming, phonological processing, and visual modality transfer affect successful language processing and interpretation. Interpretation of visual stimuli is hindered, and the problem seems to be embedded in a sociocultural, psycholinguistic, and cognitive background. This makes the picture even more complex, suggesting that understanding and resolving the issues of dyslexia has to be interdisciplinary, aided by several disciplines in the humanities and social sciences, and should be researched from an empirical approach, where the practical, educational corollaries can be analyzed on an applied basis. Aim and applicability: The lecture sheds light on the applied, cognitive aspects of interpretation, the social-cognitive traits of language processing, and the mental underpinnings of cognitive interpretation strategies in different languages (namely, Hungarian and English), offering solutions and a few applied techniques for success in foreign language learning that can serve as useful advice for the developers of testing methodologies and measures across ESL teaching and testing platforms. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=dyslexia" title="dyslexia">dyslexia</a>, <a href="https://publications.waset.org/abstracts/search?q=social%20cognition" title=" social cognition"> social cognition</a>, <a href="https://publications.waset.org/abstracts/search?q=transparency" title=" transparency"> transparency</a>, <a href="https://publications.waset.org/abstracts/search?q=modalities" title=" modalities"> modalities</a> </p> <a href="https://publications.waset.org/abstracts/165654/social-cognitive-aspects-of-interpretation-didactic-approaches-in-language-processing-and-english-as-a-second-language-difficulties-in-dyslexia" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/165654.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">84</span> </span> </div> </div> <ul class="pagination"> <li class="page-item disabled"><span class="page-link">&lsaquo;</span></li> <li class="page-item active"><span class="page-link">1</span></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=visual%20information%20processing&amp;page=2">2</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=visual%20information%20processing&amp;page=3">3</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=visual%20information%20processing&amp;page=4">4</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=visual%20information%20processing&amp;page=5">5</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=visual%20information%20processing&amp;page=6">6</a></li> <li class="page-item"><a class="page-link" 
href="https://publications.waset.org/abstracts/search?q=visual%20information%20processing&amp;page=7">7</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=visual%20information%20processing&amp;page=8">8</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=visual%20information%20processing&amp;page=9">9</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=visual%20information%20processing&amp;page=10">10</a></li> <li class="page-item disabled"><span class="page-link">...</span></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=visual%20information%20processing&amp;page=500">500</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=visual%20information%20processing&amp;page=501">501</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=visual%20information%20processing&amp;page=2" rel="next">&rsaquo;</a></li> </ul> </div> </main> <footer> <div id="infolinks" class="pt-3 pb-2"> <div class="container"> <div style="background-color:#f5f5f5;" class="p-3"> <div class="row"> <div class="col-md-2"> <ul class="list-unstyled"> About <li><a href="https://waset.org/page/support">About Us</a></li> <li><a href="https://waset.org/page/support#legal-information">Legal</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/WASET-16th-foundational-anniversary.pdf">WASET celebrates its 16th foundational anniversary</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Account <li><a href="https://waset.org/profile">My Account</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Explore <li><a href="https://waset.org/disciplines">Disciplines</a></li> <li><a 
href="https://waset.org/conferences">Conferences</a></li> <li><a href="https://waset.org/conference-programs">Conference Program</a></li> <li><a href="https://waset.org/committees">Committees</a></li> <li><a href="https://publications.waset.org">Publications</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Research <li><a href="https://publications.waset.org/abstracts">Abstracts</a></li> <li><a href="https://publications.waset.org">Periodicals</a></li> <li><a href="https://publications.waset.org/archive">Archive</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Open Science <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Science-Philosophy.pdf">Open Science Philosophy</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Science-Award.pdf">Open Science Award</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Society-Open-Science-and-Open-Innovation.pdf">Open Innovation</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Postdoctoral-Fellowship-Award.pdf">Postdoctoral Fellowship Award</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Scholarly-Research-Review.pdf">Scholarly Research Review</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Support <li><a href="https://waset.org/page/support">Support</a></li> <li><a href="https://waset.org/profile/messages/create">Contact Us</a></li> <li><a href="https://waset.org/profile/messages/create">Report Abuse</a></li> </ul> </div> </div> </div> </div> </div> <div class="container text-center"> <hr style="margin-top:0;margin-bottom:.3rem;"> <a href="https://creativecommons.org/licenses/by/4.0/" target="_blank" class="text-muted small">Creative Commons Attribution 4.0 International License</a> <div id="copy" class="mt-2">&copy; 
2024 World Academy of Science, Engineering and Technology</div> </div> </footer> <a href="javascript:" id="return-to-top"><i class="fas fa-arrow-up"></i></a> <div class="modal" id="modal-template"> <div class="modal-dialog"> <div class="modal-content"> <div class="row m-0 mt-1"> <div class="col-md-12"> <button type="button" class="close" data-dismiss="modal" aria-label="Close"><span aria-hidden="true">&times;</span></button> </div> </div> <div class="modal-body"></div> </div> </div> </div> <script src="https://cdn.waset.org/static/plugins/jquery-3.3.1.min.js"></script> <script src="https://cdn.waset.org/static/plugins/bootstrap-4.2.1/js/bootstrap.bundle.min.js"></script> <script src="https://cdn.waset.org/static/js/site.js?v=150220211556"></script> <script> jQuery(document).ready(function() { /*jQuery.get("https://publications.waset.org/xhr/user-menu", function (response) { jQuery('#mainNavMenu').append(response); });*/ jQuery.get({ url: "https://publications.waset.org/xhr/user-menu", cache: false }).then(function(response){ jQuery('#mainNavMenu').append(response); }); }); </script> </body> </html>
