
Search results for: visual information

class="dropdown-item" href="https://publications.waset.org/abstracts">Abstracts</a> <a class="dropdown-item" href="https://publications.waset.org">Periodicals</a> <a class="dropdown-item" href="https://publications.waset.org/archive">Archive</a> </div> </li> <li class="nav-item"> <a class="nav-link" href="https://waset.org/page/support" title="Support">Support</a> </li> </ul> </div> </div> </nav> </div> </header> <main> <div class="container mt-4"> <div class="row"> <div class="col-md-9 mx-auto"> <form method="get" action="https://publications.waset.org/abstracts/search"> <div id="custom-search-input"> <div class="input-group"> <i class="fas fa-search"></i> <input type="text" class="search-query" name="q" placeholder="Author, Title, Abstract, Keywords" value="visual information"> <input type="submit" class="btn_search" value="Search"> </div> </div> </form> </div> </div> <div class="row mt-3"> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Commenced</strong> in January 2007</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Frequency:</strong> Monthly</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Edition:</strong> International</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Paper Count:</strong> 12276</div> </div> </div> </div> <h1 class="mt-3 mb-3 text-center" style="font-size:1.6rem;">Search results for: visual information</h1> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">12276</span> Visual Analytics of Higher Order Information for Trajectory Datasets</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Ye%20Wang">Ye Wang</a>, <a href="https://publications.waset.org/abstracts/search?q=Ickjai%20Lee"> Ickjai Lee</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Due to the widespread of mobile sensing, there is a strong need to handle trails of moving objects, trajectories. This paper proposes three visual analytic approaches for higher order information of trajectory data sets based on the higher order Voronoi diagram data structure. Proposed approaches reveal geometrical information, topological, and directional information. Experimental results demonstrate the applicability and usefulness of proposed three approaches. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=visual%20analytics" title="visual analytics">visual analytics</a>, <a href="https://publications.waset.org/abstracts/search?q=higher%20order%20information" title=" higher order information"> higher order information</a>, <a href="https://publications.waset.org/abstracts/search?q=trajectory%20datasets" title=" trajectory datasets"> trajectory datasets</a>, <a href="https://publications.waset.org/abstracts/search?q=spatio-temporal%20data" title=" spatio-temporal data"> spatio-temporal data</a> </p> <a href="https://publications.waset.org/abstracts/2630/visual-analytics-of-higher-order-information-for-trajectory-datasets" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/2630.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">402</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">12275</span> The Contemporary Visual Spectacle: Critical Visual Literacy</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Lai-Fen%20Yang">Lai-Fen Yang</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In this increasingly visual world, how can we best decipher and understand the many ways that our everyday lives are organized around looking practices and the many images we encounter each day? Indeed, how we interact with and interpret visual images is a basic component of human life. Today, however, we are living in one of the most artificial visual and image-saturated cultures in human history, which makes understanding the complex construction and multiple social functions of visual imagery more important than ever before. Themes regarding our experience of a visually pervasive mediated culture, here, termed visual spectacle. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=visual%20culture" title="visual culture">visual culture</a>, <a href="https://publications.waset.org/abstracts/search?q=contemporary" title=" contemporary"> contemporary</a>, <a href="https://publications.waset.org/abstracts/search?q=images" title=" images"> images</a>, <a href="https://publications.waset.org/abstracts/search?q=literacy" title=" literacy"> literacy</a> </p> <a href="https://publications.waset.org/abstracts/9045/the-contemporary-visual-spectacle-critical-visual-literacy" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/9045.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">513</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">12274</span> Binocular Heterogeneity in Saccadic Suppression</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Evgeny%20Kozubenko">Evgeny Kozubenko</a>, <a href="https://publications.waset.org/abstracts/search?q=Dmitry%20Shaposhnikov"> Dmitry Shaposhnikov</a>, <a href="https://publications.waset.org/abstracts/search?q=Mikhail%20Petrushan"> Mikhail Petrushan</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This work is focused on the study of the binocular characteristics of the phenomenon of perisaccadic suppression in humans when perceiving visual objects. This phenomenon manifests in a decrease in the subject's ability to perceive visual information during saccades, which play an important role in purpose-driven behavior and visual perception. It was shown that the impairment of perception of visual information in the post-saccadic time window is stronger (p < 0.05) in the ipsilateral eye (the eye towards which the saccade occurs). In addition, the observed heterogeneity of post-saccadic suppression in the contralateral and ipsilateral eyes may relate to depth perception. Taking the studied phenomenon into account is important when developing ergonomic control panels in modern operator systems. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=eye%20movement" title="eye movement">eye movement</a>, <a href="https://publications.waset.org/abstracts/search?q=natural%20vision" title=" natural vision"> natural vision</a>, <a href="https://publications.waset.org/abstracts/search?q=saccadic%20suppression" title=" saccadic suppression"> saccadic suppression</a>, <a href="https://publications.waset.org/abstracts/search?q=visual%20perception" title=" visual perception"> visual perception</a> </p> <a href="https://publications.waset.org/abstracts/137677/binocular-heterogeneity-in-saccadic-suppression" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/137677.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">156</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">12273</span> Applications of Visual Ethnography in Public Anthropology</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Subramaniam%20Panneerselvam">Subramaniam Panneerselvam</a>, <a href="https://publications.waset.org/abstracts/search?q=Gunanithi%20Perumal"> Gunanithi Perumal</a>, <a href="https://publications.waset.org/abstracts/search?q=KP%20Subin"> KP Subin</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The Visual Ethnography is used to document the culture of a community through a visual means. It could be either photography or audio-visual documentation. The visual ethnographic techniques are widely used in visual anthropology. The visual anthropologists use the camera to capture the cultural image of the studied community. There is a scope for subjectivity while the culture is documented by an external person. But the upcoming of the public anthropology provides an opportunity for the participants to document their own culture. There is a need to equip the participants with the skill of doing visual ethnography. The mobile phone technology provides visual documentation facility to everyone to capture the moments instantly. The visual ethnography facilitates the multiple-interpretation for the audiences. This study explores the effectiveness of visual ethnography among the tribal youth through public anthropology perspective. The case study was conducted to equip the tribal youth of Nilgiris in visual ethnography and the outcome of the experiment shared in this paper. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=visual%20ethnography" title="visual ethnography">visual ethnography</a>, <a href="https://publications.waset.org/abstracts/search?q=visual%20anthropology" title=" visual anthropology"> visual anthropology</a>, <a href="https://publications.waset.org/abstracts/search?q=public%20anthropology" title=" public anthropology"> public anthropology</a>, <a href="https://publications.waset.org/abstracts/search?q=multiple-interpretation" title=" multiple-interpretation"> multiple-interpretation</a>, <a href="https://publications.waset.org/abstracts/search?q=case%20study" title=" case study"> case study</a> </p> <a href="https://publications.waset.org/abstracts/127577/applications-of-visual-ethnography-in-public-anthropology" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/127577.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">183</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">12272</span> The Analogy of Visual Arts and Visual Literacy</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Lindelwa%20Pepu">Lindelwa Pepu</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Visual Arts and Visual Literacy are defined with distinction from one another. Visual Arts are known for art forms such as drawing, painting, and photography, just to name a few. At the same time, Visual Literacy is known for learning through images. The Visual Literacy phenomenon may be attributed to the use of images was first established for creating memories and enjoyment. As time evolved, images became the center and essential means of making contact between people. Gradually, images became a means for interpreting and understanding words through visuals, that being Visual Arts. The purpose of this study is to present the analogy of the two terms Visual Arts and Visual Literacy, which are defined and compared through early practicing visual artists as well as relevant researchers to reveal how they interrelate with one another. This is a qualitative study that uses an interpretive approach as it seeks to understand and explain the interest of the study. The results reveal correspondence of the analogy between the two terms through various writers of early and recent years. This study recommends the significance of the two terms and the role they play in relation to other fields of study. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=visual%20arts" title="visual arts">visual arts</a>, <a href="https://publications.waset.org/abstracts/search?q=visual%20literacy" title=" visual literacy"> visual literacy</a>, <a href="https://publications.waset.org/abstracts/search?q=pictures" title=" pictures"> pictures</a>, <a href="https://publications.waset.org/abstracts/search?q=images" title=" images"> images</a> </p> <a href="https://publications.waset.org/abstracts/165940/the-analogy-of-visual-arts-and-visual-literacy" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/165940.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">166</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">12271</span> Visual Thing Recognition with Binary Scale-Invariant Feature Transform and Support Vector Machine Classifiers Using Color Information</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Wei-Jong%20Yang">Wei-Jong Yang</a>, <a href="https://publications.waset.org/abstracts/search?q=Wei-Hau%20Du"> Wei-Hau Du</a>, <a href="https://publications.waset.org/abstracts/search?q=Pau-Choo%20Chang"> Pau-Choo Chang</a>, <a href="https://publications.waset.org/abstracts/search?q=Jar-Ferr%20Yang"> Jar-Ferr Yang</a>, <a href="https://publications.waset.org/abstracts/search?q=Pi-Hsia%20Hung"> Pi-Hsia Hung</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The demands of smart visual thing recognition in various devices have been increased rapidly for daily smart production, living and learning systems in recent years. This paper proposed a visual thing recognition system, which combines binary scale-invariant feature transform (SIFT), bag of words model (BoW), and support vector machine (SVM) by using color information. Since the traditional SIFT features and SVM classifiers only use the gray information, color information is still an important feature for visual thing recognition. With color-based SIFT features and SVM, we can discard unreliable matching pairs and increase the robustness of matching tasks. The experimental results show that the proposed object recognition system with color-assistant SIFT SVM classifier achieves higher recognition rate than that with the traditional gray SIFT and SVM classification in various situations. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=color%20moments" title="color moments">color moments</a>, <a href="https://publications.waset.org/abstracts/search?q=visual%20thing%20recognition%20system" title=" visual thing recognition system"> visual thing recognition system</a>, <a href="https://publications.waset.org/abstracts/search?q=SIFT" title=" SIFT"> SIFT</a>, <a href="https://publications.waset.org/abstracts/search?q=color%20SIFT" title=" color SIFT"> color SIFT</a> </p> <a href="https://publications.waset.org/abstracts/62857/visual-thing-recognition-with-binary-scale-invariant-feature-transform-and-support-vector-machine-classifiers-using-color-information" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/62857.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">467</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">12270</span> A Comparison of Anger State and Trait Anger Among Adolescents with and without Visual Impairment</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Sehmus%20Aslan">Sehmus Aslan</a>, <a href="https://publications.waset.org/abstracts/search?q=Sibel%20Karacaoglu"> Sibel Karacaoglu</a>, <a href="https://publications.waset.org/abstracts/search?q=Cengiz%20Sevgin"> Cengiz Sevgin</a>, <a href="https://publications.waset.org/abstracts/search?q=Ummuhan%20Bas%20Aslan"> Ummuhan Bas Aslan</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Objective: Anger expression style is an important moderator of the effects on the person and person’s environment. Anger and anger expression have become important constructs in identifying individuals at high risk for psychological difficulties. To our knowledge, there is no information about anger and anger expression of adolescents with visual impairment. The aim of this study was to compare anger and anger expression among adolescents with and without visual impairment. Methods: Thirty-eight adolescents with visual impairment (18 female, 20 male) and 44 adolescents without visual impairment (22 female, 24 male), in totally 84 adolescents aged between 12 to 15 years, participated in the study. Anger and anger expression of the participants assessed with The State-Trait Anger Scale (STAS). STAS, a self-report questionnaire, is designed to measure the experience and expression of anger. STAS has four subtitles including continuous anger, anger in, anger out and anger control. Reliability and validity of the STAS have been well established among adolescents. Mann-Whitney U Test was used for statistical analysis. Results: No significant differences were found in the scores of continuous anger and anger out between adolescents with and without visual impairment (p < 0.05). On the other hand, there were differences in scores of anger control and anger in between adolescents with and without visual impairment (p>0.05). The score of anger control in adolescents with visual impairment were higher compared with adolescents without visual impairment. Meanwhile, the adolescents with visual impairment had lower score for anger in compared with adolescents without visual impairment. 
12269  The Importance of Visual Communication in Artificial Intelligence
Authors: Manjitsingh Rajput
Abstract: Visual communication plays an important role in artificial intelligence (AI) because it enables machines to understand and interpret visual information, similar to how humans do. This abstract explores the importance of visual communication in AI across applications such as computer vision, object recognition, image classification, and autonomous systems, with deep learning techniques and neural networks underpinning visual understanding. It also discusses challenges facing visual interfaces for AI, such as data scarcity, domain optimization, and interpretability, and examines the integration of visual communication with other modalities such as natural language processing and speech recognition. The methodology considers the importance of visual communication in AI development and implementation, highlighting its potential to enhance the effectiveness and accessibility of AI systems, and provides an approach to integrating visual elements into AI systems to make them more user-friendly and efficient. In conclusion, visual communication is crucial in AI systems for object recognition, facial analysis, and augmented reality, but challenges such as data quality, interpretability, and ethics must be addressed. Visual communication enhances user experience, decision-making, accessibility, and collaboration, and developers can integrate visual elements to build efficient and accessible AI systems.
Keywords: visual communication AI, computer vision, visual aid in communication, essence of visual communication
Procedia: https://publications.waset.org/abstracts/174998/the-importance-of-visual-communication-in-artificial-intelligence | PDF: https://publications.waset.org/abstracts/174998.pdf | Downloads: 95

<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=visual%20communication%20AI" title="visual communication AI">visual communication AI</a>, <a href="https://publications.waset.org/abstracts/search?q=computer%20vision" title=" computer vision"> computer vision</a>, <a href="https://publications.waset.org/abstracts/search?q=visual%20aid%20in%20communication" title=" visual aid in communication"> visual aid in communication</a>, <a href="https://publications.waset.org/abstracts/search?q=essence%20of%20visual%20communication." title=" essence of visual communication."> essence of visual communication.</a> </p> <a href="https://publications.waset.org/abstracts/174998/the-importance-of-visual-communication-in-artificial-intelligence" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/174998.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">95</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">12268</span> Enhanced Visual Sharing Method for Medical Image Security</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Kalaivani%20Pachiappan">Kalaivani Pachiappan</a>, <a href="https://publications.waset.org/abstracts/search?q=Sabari%20Annaji"> Sabari Annaji</a>, <a href="https://publications.waset.org/abstracts/search?q=Nithya%20Jayakumar"> Nithya Jayakumar</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In recent years, Information security has emerged as foremost challenges in many fields. Especially in medical information systems security is a major issue, in handling reports such as patients’ diagnosis and medical images. These sensitive data require confidentiality for transmission purposes. Image sharing is a secure and fault-tolerant method for protecting digital images, which can use the cryptography techniques to reduce the information loss. In this paper, visual sharing method is proposed which embeds the patient’s details into a medical image. Then the medical image can be divided into numerous shared images and protected by various users. The original patient details and medical image can be retrieved by gathering the shared images. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=information%20security" title="information security">information security</a>, <a href="https://publications.waset.org/abstracts/search?q=medical%20images" title=" medical images"> medical images</a>, <a href="https://publications.waset.org/abstracts/search?q=cryptography" title=" cryptography"> cryptography</a>, <a href="https://publications.waset.org/abstracts/search?q=visual%20sharing" title=" visual sharing"> visual sharing</a> </p> <a href="https://publications.waset.org/abstracts/2990/enhanced-visual-sharing-method-for-medical-image-security" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/2990.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">414</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">12267</span> Visual Identity Components of Tourist Destination</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Petra%20Barisic">Petra Barisic</a>, <a href="https://publications.waset.org/abstracts/search?q=Zrinka%20Blazevic"> Zrinka Blazevic</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In the world of modern communications, visual identity has predominant influence on the overall success of tourist destinations, but despite of these, the problem of designing thriving tourist destination visual identity and their components are hardly addressed. This study highlights the importance of building and managing the visual identity of tourist destination, and based on the empirical study of well-known Mediterranean destination of Croatia analyses three main components of tourist destination visual identity; name, slogan, and logo. Moreover, the paper shows how respondents perceive each component of Croatia’s visual identity. According to study, logo is the most important, followed by the name and slogan. Research also reveals that Croatian economy lags behind developed countries in understanding the importance of visual identity, and its influence on marketing goal achievements. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=components%20of%20visual%20identity" title="components of visual identity">components of visual identity</a>, <a href="https://publications.waset.org/abstracts/search?q=Croatia" title=" Croatia"> Croatia</a>, <a href="https://publications.waset.org/abstracts/search?q=tourist%20destination" title=" tourist destination"> tourist destination</a>, <a href="https://publications.waset.org/abstracts/search?q=visual%20identity" title=" visual identity "> visual identity </a> </p> <a href="https://publications.waset.org/abstracts/6602/visual-identity-components-of-tourist-destination" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/6602.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">1050</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">12266</span> Visual and Verbal Imagination in a Bilingual Context</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Erzsebet%20Gulyas">Erzsebet Gulyas</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Our inner world, our imagination, and our way of thinking are invisible and inaudible to others, but they influence our behavior. To investigate the relationship between thinking and language use, we created a test in Hungarian using ideas from the literature. The test prompts participants to make decisions based on visual images derived from the written information presented. There is a correlation (r=0.5) between the test result and the self-assessment of the visual imagery vividness and the visual and verbal components of internal representations measured by self-report questionnaires, as well as with responses to language-use inquiries in the background questionnaire. 56 university students completed the tests, and SPSS was used to analyze the data. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=imagination" title="imagination">imagination</a>, <a href="https://publications.waset.org/abstracts/search?q=internal%20representations" title=" internal representations"> internal representations</a>, <a href="https://publications.waset.org/abstracts/search?q=verbalization" title=" verbalization"> verbalization</a>, <a href="https://publications.waset.org/abstracts/search?q=visualization" title=" visualization"> visualization</a> </p> <a href="https://publications.waset.org/abstracts/182122/visual-and-verbal-imagination-in-a-bilingual-context" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/182122.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">54</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">12265</span> To Estimate the Association between Visual Stress and Visual Perceptual Skills</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Vijay%20Reena%20Durai">Vijay Reena Durai</a>, <a href="https://publications.waset.org/abstracts/search?q=Krithica%20Srinivasan"> Krithica Srinivasan</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Introduction: The two fundamental skills involved in the growth and wellbeing of any child can be categorized into visual motor and perceptual skills. Visual stress is a disorder which is characterized by visual discomfort, blurred vision, misspelling words, skipping lines, letters bunching together. There is a need to understand the deficits in perceptual skills among children with visual stress. Aim: To estimate the association between visual stress and visual perceptual skills Objective: To compare visual perceptual skills of children with and without visual stress Methodology: Children between 8 to 15 years of age participated in this cross-sectional study. All children with monocular visual acuity better than or equal to 6/6 were included. Visual perceptual skills were measured using test for visual perceptual skills (TVPS) tool. Reading speed was measured with the chosen colored overlay using Wilkins reading chart and pattern glare score was estimated using a 3cpd gratings. Visual stress was defined as change in reading speed of greater than or equal to 10% and a pattern glare score of greater than or equal to 4. Results: 252 children participated in this study and the male: female ratio of 3:2. Majority of the children preferred Magenta (28%) and Yellow (25%) colored overlay for reading. There was a significant difference between the two groups (MD=1.24±0.6) (p<0.04, 95% CI 0.01-2.43) only in the sequential memory skills. The prevalence of visual stress in this group was found to be 31% (n=78). Binary logistic regression showed that odds ratio of having poor visual perceptual skills was OR: 2.85 (95% CI 1.08-7.49) among children with visual stress. Conclusion: Children with visual stress are found to have three times poorer visual perceptual skills than children without visual stress. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=visual%20stress" title="visual stress">visual stress</a>, <a href="https://publications.waset.org/abstracts/search?q=visual%20perceptual%20skills" title=" visual perceptual skills"> visual perceptual skills</a>, <a href="https://publications.waset.org/abstracts/search?q=colored%20overlay" title=" colored overlay"> colored overlay</a>, <a href="https://publications.waset.org/abstracts/search?q=pattern%20glare" title=" pattern glare"> pattern glare</a> </p> <a href="https://publications.waset.org/abstracts/41580/to-estimate-the-association-between-visual-stress-and-visual-perceptual-skills" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/41580.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">388</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">12264</span> Bag of Words Representation Based on Weighting Useful Visual Words</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Fatma%20Abdedayem">Fatma Abdedayem</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The most effective and efficient methods in image categorization are almost based on bag-of-words (BOW) which presents image by a histogram of occurrence of visual words. In this paper, we propose a novel extension to this method. Firstly, we extract features in multi-scales by applying a color local descriptor named opponent-SIFT. Secondly, in order to represent image we use Spatial Pyramid Representation (SPR) and an extension to the BOW method which based on weighting visual words. Typically, the visual words are weighted during histogram assignment by computing the ratio of their occurrences in the image to the occurrences in the background. Finally, according to classical BOW retrieval framework, only a few words of the vocabulary is useful for image representation. Therefore, we select the useful weighted visual words that respect the threshold value. Experimentally, the algorithm is tested by using different image classes of PASCAL VOC 2007 and is compared against the classical bag-of-visual-words algorithm. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=BOW" title="BOW">BOW</a>, <a href="https://publications.waset.org/abstracts/search?q=useful%20visual%20words" title=" useful visual words"> useful visual words</a>, <a href="https://publications.waset.org/abstracts/search?q=weighted%20visual%20words" title=" weighted visual words"> weighted visual words</a>, <a href="https://publications.waset.org/abstracts/search?q=bag%20of%20visual%20words" title=" bag of visual words"> bag of visual words</a> </p> <a href="https://publications.waset.org/abstracts/14009/bag-of-words-representation-based-on-weighting-useful-visual-words" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/14009.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">436</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">12263</span> Neuron Imaging in Lateral Geniculate Nucleus</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Sandy%20Bao">Sandy Bao</a>, <a href="https://publications.waset.org/abstracts/search?q=Yankang%20Bao"> Yankang Bao</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The understanding of information that is being processed in the brain, especially in the lateral geniculate nucleus (LGN), has been proven challenging for modern neuroscience and for researchers with a focus on how neurons process signals and images. In this paper, we are proposing a method to image process different colors within different layers of LGN, that is, green information in layers 4 & 6 and red & blue in layers 3 & 5 based on the surface dimension of layers. We take into consideration the images in LGN and visual cortex, and that the edge detected information from the visual cortex needs to be considered in order to return back to the layers of LGN, along with the image in LGN to form the new image, which will provide an improved image that is clearer, sharper, and making it easier to identify objects in the image. Matrix Laboratory (MATLAB) simulation is performed, and results show that the clarity of the output image has significant improvement. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=lateral%20geniculate%20nucleus" title="lateral geniculate nucleus">lateral geniculate nucleus</a>, <a href="https://publications.waset.org/abstracts/search?q=matrix%20laboratory" title=" matrix laboratory"> matrix laboratory</a>, <a href="https://publications.waset.org/abstracts/search?q=neuroscience" title=" neuroscience"> neuroscience</a>, <a href="https://publications.waset.org/abstracts/search?q=visual%20cortex" title=" visual cortex"> visual cortex</a> </p> <a href="https://publications.waset.org/abstracts/137931/neuron-imaging-in-lateral-geniculate-nucleus" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/137931.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">279</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">12262</span> The Involvement of Visual and Verbal Representations Within a Quantitative and Qualitative Visual Change Detection Paradigm</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Laura%20Jenkins">Laura Jenkins</a>, <a href="https://publications.waset.org/abstracts/search?q=Tim%20Eschle"> Tim Eschle</a>, <a href="https://publications.waset.org/abstracts/search?q=Joanne%20Ciafone"> Joanne Ciafone</a>, <a href="https://publications.waset.org/abstracts/search?q=Colin%20Hamilton"> Colin Hamilton</a> </p> <p class="card-text"><strong>Abstract:</strong></p> An original working memory model suggested the separation of visual and verbal systems in working memory architecture, in which only visual working memory components were used during visual working memory tasks. It was later suggested that the visuo spatial sketch pad was the only memory component at use during visual working memory tasks, and components such as the phonological loop were not considered. In more recent years, a contrasting approach has been developed with the use of an executive resource to incorporate both visual and verbal representations in visual working memory paradigms. This was supported using research demonstrating the use of verbal representations and an executive resource in a visual matrix patterns task. The aim of the current research is to investigate the working memory architecture during both a quantitative and a qualitative visual working memory task. A dual task method will be used. Three secondary tasks will be used which are designed to hit specific components within the working memory architecture – Dynamic Visual Noise (visual components), Visual Attention (spatial components) and Verbal Attention (verbal components). A comparison of the visual working memory tasks will be made to discover if verbal representations are at use, as the previous literature suggested. This direct comparison has not been made so far in the literature. Considerations will be made as to whether a domain specific approach should be employed when discussing visual working memory tasks, or whether a more domain general approach could be used instead. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=semantic%20organisation" title="semantic organisation">semantic organisation</a>, <a href="https://publications.waset.org/abstracts/search?q=visual%20memory" title=" visual memory"> visual memory</a>, <a href="https://publications.waset.org/abstracts/search?q=change%20detection" title=" change detection"> change detection</a> </p> <a href="https://publications.waset.org/abstracts/22696/the-involvement-of-visual-and-verbal-representations-within-a-quantitative-and-qualitative-visual-change-detection-paradigm" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/22696.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">595</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">12261</span> Deprivation of Visual Information Affects Differently the Gait Cycle in Children with Different Level of Motor Competence</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Miriam%20Palomo-Nieto">Miriam Palomo-Nieto</a>, <a href="https://publications.waset.org/abstracts/search?q=Adrian%20Agricola"> Adrian Agricola</a>, <a href="https://publications.waset.org/abstracts/search?q=Rudolf%20Psotta"> Rudolf Psotta</a>, <a href="https://publications.waset.org/abstracts/search?q=Reza%20Abdollahipour"> Reza Abdollahipour</a>, <a href="https://publications.waset.org/abstracts/search?q=Ludvik%20Valtr"> Ludvik Valtr</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The importance of vision and the visual control of movement have been labeled in the literature related to motor control and many studies have demonstrated that children with low motor competence may rely more heavily on vision to perform movements than their typically developing peers. The aim of the study was to highlight the effects of different visual conditions on motor performance during walking in children with different levels of motor coordination. Participants (n = 32, mean age = 8.5 years sd. ± 0.5) were divided into two groups: typical development (TD) and low motor coordination (LMC) based on the scores of the Movement Assessment Battery for Children (MABC-2). They were asked to walk along a 10 meters walkway where the Optojump-Next instrument was installed in a portable laboratory (15 x 3 m), which allows that all participants had the same visual information. They walked in self-selected speed under four visual conditions: full vision (FV), limited vision 100 ms (LV-100), limited vision 150 ms (LV-150) and non-vision (NV). For visual occlusion participants were equipped with Plato Goggles that shut for 100 and 150 ms, respectively, within each 2 sec. Data were analyzed in a two-way mixed-effect ANOVA including 2 (TD vs. LMC) x 4 (FV, LV-100, LV-150 & NV) with repeated-measures on the last factor (p ≤.05). Results indicated that TD children walked faster and with longer normalized steps length and strides than LMC children. For TD children the percentage of the single support and swing time were higher than for low motor competence children. However, the percentage of load response and pre swing was higher in the low motor competence children rather than the TD children. 
12260  Communication Design in Newspapers: A Comparative Study of Graphic Resources in Portuguese and Spanish Publications
Authors: Fátima Gonçalves, Joaquim Brigas, Jorge Gonçalves
Abstract: As a way of managing the increasing volume and complexity of information circulating today, graphical representations are increasingly used to add meaning to the information presented in communication media, through efficient communication design. Visual culture itself, driven by technological evolution, has been redefining the forms of communication, so that contemporary visual communication has a major impact on society. This article presents the results and comparative analysis of four publications in the Iberian press, focusing on the formal aspects of the newspapers and the space they dedicate to the various communication elements. Two Portuguese newspapers and two Spanish newspapers were selected for this purpose. The findings indicate that the newspapers show a similarity in their use of graphic solutions, which corroborates a visual trend in communication design. The results also reveal that the Spanish newspapers are more meticulous about graphic consistency. This study is intended to contribute to improving knowledge of the Iberian generalist press.
Keywords: communication design, graphic resources, Iberian press, visual journalism
Procedia: https://publications.waset.org/abstracts/87512/communication-design-in-newspapers-a-comparative-study-of-graphic-resources-in-portuguese-and-spanish-publications | PDF: https://publications.waset.org/abstracts/87512.pdf | Downloads: 269

<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=communication%20design" title="communication design">communication design</a>, <a href="https://publications.waset.org/abstracts/search?q=graphic%20resources" title=" graphic resources"> graphic resources</a>, <a href="https://publications.waset.org/abstracts/search?q=Iberian%20press" title=" Iberian press"> Iberian press</a>, <a href="https://publications.waset.org/abstracts/search?q=visual%20journalism" title=" visual journalism"> visual journalism</a> </p> <a href="https://publications.waset.org/abstracts/87512/communication-design-in-newspapers-a-comparative-study-of-graphic-resources-in-portuguese-and-spanish-publications" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/87512.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">269</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">12259</span> Image Multi-Feature Analysis by Principal Component Analysis for Visual Surface Roughness Measurement</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Wei%20Zhang">Wei Zhang</a>, <a href="https://publications.waset.org/abstracts/search?q=Yan%20He"> Yan He</a>, <a href="https://publications.waset.org/abstracts/search?q=Yan%20Wang"> Yan Wang</a>, <a href="https://publications.waset.org/abstracts/search?q=Yufeng%20Li"> Yufeng Li</a>, <a href="https://publications.waset.org/abstracts/search?q=Chuanpeng%20Hao"> Chuanpeng Hao</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Surface roughness is an important index for evaluating surface quality, needs to be accurately measured to ensure the performance of the workpiece. The roughness measurement based on machine vision involves various image features, some of which are redundant. These redundant features affect the accuracy and speed of the visual approach. Previous research used correlation analysis methods to select the appropriate features. However, this feature analysis is independent and cannot fully utilize the information of data. Besides, blindly reducing features lose a lot of useful information, resulting in unreliable results. Therefore, the focus of this paper is on providing a redundant feature removal approach for visual roughness measurement. In this paper, the statistical methods and gray-level co-occurrence matrix(GLCM) are employed to extract the texture features of machined images effectively. Then, the principal component analysis(PCA) is used to fuse all extracted features into a new one, which reduces the feature dimension and maintains the integrity of the original information. Finally, the relationship between new features and roughness is established by the support vector machine(SVM). The experimental results show that the approach can effectively solve multi-feature information redundancy of machined surface images and provides a new idea for the visual evaluation of surface roughness. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=feature%20analysis" title="feature analysis">feature analysis</a>, <a href="https://publications.waset.org/abstracts/search?q=machine%20vision" title=" machine vision"> machine vision</a>, <a href="https://publications.waset.org/abstracts/search?q=PCA" title=" PCA"> PCA</a>, <a href="https://publications.waset.org/abstracts/search?q=surface%20roughness" title=" surface roughness"> surface roughness</a>, <a href="https://publications.waset.org/abstracts/search?q=SVM" title=" SVM"> SVM</a> </p> <a href="https://publications.waset.org/abstracts/138525/image-multi-feature-analysis-by-principal-component-analysis-for-visual-surface-roughness-measurement" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/138525.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">212</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">12258</span> Research on Detection of Web Page Visual Salience Region Based on Eye Tracker and Spectral Residual Model</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Xiaoying%20Guo">Xiaoying Guo</a>, <a href="https://publications.waset.org/abstracts/search?q=Xiangyun%20Wang"> Xiangyun Wang</a>, <a href="https://publications.waset.org/abstracts/search?q=Chunhua%20Jia"> Chunhua Jia</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Web page has been one of the most important way of knowing the world. Humans catch a lot of information from it everyday. Thus, understanding where human looks when they surfing the web pages is rather important. In normal scenes, the down-top features and top-down tasks significantly affect humans’ eye movement. In this paper, we investigated if the conventional visual salience algorithm can properly predict humans’ visual attractive region when they viewing the web pages. First, we obtained the eye movement data when the participants viewing the web pages using an eye tracker. By the analysis of eye movement data, we studied the influence of visual saliency and thinking way on eye-movement pattern. The analysis result showed that thinking way affect human’ eye-movement pattern much more than visual saliency. Second, we compared the results of web page visual salience region extracted by Itti model and Spectral Residual (SR) model. The results showed that Spectral Residual (SR) model performs superior than Itti model by comparison with the heat map from eye movements. Considering the influence of mind habit on humans’ visual region of interest, we introduced one of the most important cue in mind habit-fixation position to improved the SR model. The result showed that the improved SR model can better predict the human visual region of interest in web pages. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=web%20page%20salience%20region" title="web page salience region">web page salience region</a>, <a href="https://publications.waset.org/abstracts/search?q=eye-tracker" title=" eye-tracker"> eye-tracker</a>, <a href="https://publications.waset.org/abstracts/search?q=spectral%20residual" title=" spectral residual"> spectral residual</a>, <a href="https://publications.waset.org/abstracts/search?q=visual%20salience" title=" visual salience"> visual salience</a> </p> <a href="https://publications.waset.org/abstracts/74168/research-on-detection-of-web-page-visual-salience-region-based-on-eye-tracker-and-spectral-residual-model" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/74168.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">275</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">12257</span> Information Literacy: Concept and Importance</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Gaurav%20Kumar">Gaurav Kumar</a> </p> <p class="card-text"><strong>Abstract:</strong></p> An information literate person is one who uses information effectively in all its forms. When presented with questions or problems, an information literate person would know what information to look for, how to search efficiently and be able to access relevant sources. In addition, an information literate person would have the ability to evaluate and select appropriate information sources and to use the information effectively and ethically to answer questions or solve problems. Information literacy has become an important element in higher education. The information literacy movement has internationally recognized standards and learning outcomes. The step-by-step process of achieving information literacy is particularly crucial in an era where knowledge could be disseminated through a variety of media. What is the relationship between information literacy as we define it in higher education and information literacy among non-academic populations? What forces will change how we think about the definition of information literacy in the future and how we will apply the definition in all environments? 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=information%20literacy" title="information literacy">information literacy</a>, <a href="https://publications.waset.org/abstracts/search?q=human%20beings" title=" human beings"> human beings</a>, <a href="https://publications.waset.org/abstracts/search?q=visual%20media%20and%20computer%20network%20etc" title=" visual media and computer network etc"> visual media and computer network etc</a>, <a href="https://publications.waset.org/abstracts/search?q=information%20literacy" title=" information literacy"> information literacy</a> </p> <a href="https://publications.waset.org/abstracts/36349/information-literacy-concept-and-importance" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/36349.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">339</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">12256</span> Segmentation of Korean Words on Korean Road Signs</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Lae-Jeong%20Park">Lae-Jeong Park</a>, <a href="https://publications.waset.org/abstracts/search?q=Kyusoo%20Chung"> Kyusoo Chung</a>, <a href="https://publications.waset.org/abstracts/search?q=Jungho%20Moon"> Jungho Moon</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This paper introduces an effective method of segmenting Korean text (place names in Korean) from a Korean road sign image. A Korean advanced directional road sign is composed of several types of visual information such as arrows, place names in Korean and English, and route numbers. Automatic classification of the visual information and extraction of Korean place names from the road sign images make it possible to avoid a lot of manual inputs to a database system for management of road signs nationwide. We propose a series of problem-specific heuristics that correctly segments Korean place names, which is the most crucial information, from the other information by leaving out non-text information effectively. The experimental results with a dataset of 368 road sign images show 96% of the detection rate per Korean place name and 84% per road sign image. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=segmentation" title="segmentation">segmentation</a>, <a href="https://publications.waset.org/abstracts/search?q=road%20signs" title=" road signs"> road signs</a>, <a href="https://publications.waset.org/abstracts/search?q=characters" title=" characters"> characters</a>, <a href="https://publications.waset.org/abstracts/search?q=classification" title=" classification"> classification</a> </p> <a href="https://publications.waset.org/abstracts/42000/segmentation-of-korean-words-on-korean-road-signs" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/42000.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">444</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">12255</span> Visual Improvement with Low Vision Aids in Children with Stargardt’s Disease</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Anum%20Akhter">Anum Akhter</a>, <a href="https://publications.waset.org/abstracts/search?q=Sumaira%20Altaf"> Sumaira Altaf</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Purpose: To study the effect of low vision devices i.e. telescope and magnifying glasses on distance visual acuity and near visual acuity of children with Stargardt’s disease. Setting: Low vision department, Alshifa Trust Eye Hospital, Rawalpindi, Pakistan. Methods: 52 children having Stargardt’s disease were included in the study. All children were diagnosed by pediatrics ophthalmologists. Comprehensive low vision assessment was done by me in Low vision clinic. Visual acuity was measured using ETDRS chart. Refraction and other supplementary tests were performed. Children with Stargardt’s disease were provided with different telescopes and magnifying glasses for improving far vision and near vision. Results: Out of 52 children, 17 children were males and 35 children were females. Distance visual acuity and near visual acuity improved significantly with low vision aid trial. All children showed visual acuity better than 6/19 with a telescope of higher magnification. Improvement in near visual acuity was also significant with magnifying glasses trial. Conclusions: Low vision aids are useful for improvement in visual acuity in children. Children with Stargardt’s disease who are having a problem in education and daily life activities can get help from low vision aids. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=Stargardt" title="Stargardt">Stargardt</a>, <a href="https://publications.waset.org/abstracts/search?q=s%20disease" title="s disease">s disease</a>, <a href="https://publications.waset.org/abstracts/search?q=low%20vision%20aids" title=" low vision aids"> low vision aids</a>, <a href="https://publications.waset.org/abstracts/search?q=telescope" title=" telescope"> telescope</a>, <a href="https://publications.waset.org/abstracts/search?q=magnifiers" title=" magnifiers"> magnifiers</a> </p> <a href="https://publications.waset.org/abstracts/24382/visual-improvement-with-low-vision-aids-in-children-with-stargardts-disease" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/24382.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">538</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">12254</span> Aspects of Semiotics in Contemporary Design: A Case Study on Dice Brand</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Laila%20Zahran%20Mohammed%20Alsibani">Laila Zahran Mohammed Alsibani</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The aim of the research is to understand the aspects of semiotics in contemporary designs by redesigning an Omani donut brand with localized cultural identity. To do so, visual identity samples of Dice brand of donuts in Oman has been selected to be a case study. This study conducted based on semiotic theory by using mixed method research tools which are: documentation analysis, interview and survey. The literature review concentrates on key areas of semiotics in visual elements used in the brand designs. Also, it spotlights on the categories of semiotics in visual design. In addition, this research explores the visual cues in brand identity. The objectives of the research are to investigate the aspects of semiotics in providing meaning to visual cues and to identify visual cues for each visual element. It is hoped that this study will have the contribution to a better understanding of the different ways of using semiotics in contemporary designs. Moreover, this research can be a review of further studies in understanding and explaining current and future design trends. Future research can also focus on how brand-related signs are perceived by consumers. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=brands" title="brands">brands</a>, <a href="https://publications.waset.org/abstracts/search?q=semiotics" title=" semiotics"> semiotics</a>, <a href="https://publications.waset.org/abstracts/search?q=visual%20arts" title=" visual arts"> visual arts</a>, <a href="https://publications.waset.org/abstracts/search?q=visual%20communication" title=" visual communication"> visual communication</a> </p> <a href="https://publications.waset.org/abstracts/158274/aspects-of-semiotics-in-contemporary-design-a-case-study-on-dice-brand" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/158274.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">159</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">12253</span> Development of Visual Element Design Guidelines for Consumer Products Based on User Characteristics</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Taezoon%20Park">Taezoon Park</a>, <a href="https://publications.waset.org/abstracts/search?q=Wonil%20Hwang"> Wonil Hwang</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This study aims to build a design guideline for the effective visual display used for consumer products considering user characteristics; gender and age. Although a number of basic experiments identified the limits of human visual perception, the findings remain fragmented and many times in an unfriendly form. This study compiled a design cases along with tables aggregated from the experimental result of visual perception; brightness/contrast, useful field of view, color sensitivity. Visual design elements commonly used for consumer product, were selected and appropriate guidelines were developed based on the experimental result. Since the provided data with case example suggests a feasible design space, it will save time for a product designer to find appropriate design alternatives. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=design%20guideline" title="design guideline">design guideline</a>, <a href="https://publications.waset.org/abstracts/search?q=consumer%20product" title=" consumer product"> consumer product</a>, <a href="https://publications.waset.org/abstracts/search?q=visual%20design%20element" title=" visual design element"> visual design element</a>, <a href="https://publications.waset.org/abstracts/search?q=visual%20perception" title=" visual perception"> visual perception</a>, <a href="https://publications.waset.org/abstracts/search?q=emotional%20design" title=" emotional design"> emotional design</a> </p> <a href="https://publications.waset.org/abstracts/55080/development-of-visual-element-design-guidelines-for-consumer-products-based-on-user-characteristics" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/55080.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">372</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">12252</span> Task Distraction vs. Visual Enhancement: Which Is More Effective?</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Huangmei%20Liu">Huangmei Liu</a>, <a href="https://publications.waset.org/abstracts/search?q=Si%20Liu"> Si Liu</a>, <a href="https://publications.waset.org/abstracts/search?q=Jia%E2%80%99nan%20Liu"> Jia’nan Liu</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The present experiment investigated and compared the effectiveness of two kinds of methods of attention control: Task distraction and visual enhancement. In the study, the effectiveness of task distractions to explicit features and of visual enhancement to implicit features of the same group of Chinese characters were compared based on their effect on the participants’ reaction time, subjective confidence rating, and verbal report. We found support that the visual enhancement on implicit features did overcome the contrary effect of training distraction and led to awareness of those implicit features, at least to some extent. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=task%20distraction" title="task distraction">task distraction</a>, <a href="https://publications.waset.org/abstracts/search?q=visual%20enhancement" title=" visual enhancement"> visual enhancement</a>, <a href="https://publications.waset.org/abstracts/search?q=attention" title=" attention"> attention</a>, <a href="https://publications.waset.org/abstracts/search?q=awareness" title=" awareness"> awareness</a>, <a href="https://publications.waset.org/abstracts/search?q=learning" title=" learning"> learning</a> </p> <a href="https://publications.waset.org/abstracts/3302/task-distraction-vs-visual-enhancement-which-is-more-effective" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/3302.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">430</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">12251</span> Visual Impairment Through Contextualized Lived Experiences: The Story of James</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Jentel%20Van%20Havermaet">Jentel Van Havermaet</a>, <a href="https://publications.waset.org/abstracts/search?q=Geert%20Van%20Hove"> Geert Van Hove</a>, <a href="https://publications.waset.org/abstracts/search?q=Elisabeth%20De%20Schauwer"> Elisabeth De Schauwer</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This study re-conceptualizes visual impairment in the interdependent context of James, his family, and allies. Living with a visual impairment is understood as an entanglement of assemblages, dynamics, disablism, systems… We narrated this diffractively into two meaningful events: decisions and processes on (inclusive) education and hinderances in connecting with others. We entangled and (un)raveled lived experiences in assemblages in which the contextualized meaning of visual impairment became more clearly. The contextualized narrative of James interwove complex intra-actions; showed the complexity and contextualization of entangled relationalities. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=disability%20studies" title="disability studies">disability studies</a>, <a href="https://publications.waset.org/abstracts/search?q=contextualization" title=" contextualization"> contextualization</a>, <a href="https://publications.waset.org/abstracts/search?q=visual%20impairment" title=" visual impairment"> visual impairment</a>, <a href="https://publications.waset.org/abstracts/search?q=assemblage" title=" assemblage"> assemblage</a>, <a href="https://publications.waset.org/abstracts/search?q=entanglement" title=" entanglement"> entanglement</a>, <a href="https://publications.waset.org/abstracts/search?q=lived%20experiences" title=" lived experiences"> lived experiences</a> </p> <a href="https://publications.waset.org/abstracts/146643/visual-impairment-through-contextualized-lived-experiences-the-story-of-james" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/146643.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">177</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">12250</span> The Impact of Online Learning on Visual Learners</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Ani%20Demetrashvili">Ani Demetrashvili</a> </p> <p class="card-text"><strong>Abstract:</strong></p> As online learning continues to reshape the landscape of education, questions arise regarding its efficacy for diverse learning styles, particularly for visual learners. This abstract delves into the impact of online learning on visual learners, exploring how digital mediums influence their educational experience and how educational platforms can be optimized to cater to their needs. Visual learners comprise a significant portion of the student population, characterized by their preference for visual aids such as diagrams, charts, and videos to comprehend and retain information. Traditional classroom settings often struggle to accommodate these learners adequately, relying heavily on auditory and written forms of instruction. The advent of online learning presents both opportunities and challenges in addressing the needs of visual learners. Online learning platforms offer a plethora of multimedia resources, including interactive simulations, virtual labs, and video lectures, which align closely with the preferences of visual learners. These platforms have the potential to enhance engagement, comprehension, and retention by presenting information in visually stimulating formats. However, the effectiveness of online learning for visual learners hinges on various factors, including the design of learning materials, user interface, and instructional strategies. Research into the impact of online learning on visual learners encompasses a multidisciplinary approach, drawing from fields such as cognitive psychology, education, and human-computer interaction. Studies employ qualitative and quantitative methods to assess visual learners' preferences, cognitive processes, and learning outcomes in online environments. Surveys, interviews, and observational studies provide insights into learners' preferences for specific types of multimedia content and interactive features. 
Cognitive tasks, such as memory recall and concept mapping, shed light on the cognitive mechanisms underlying learning in digital settings. Eye-tracking studies offer valuable data on attentional patterns and information processing during online learning activities. The findings from research on the impact of online learning on visual learners have significant implications for educational practice and technology design. Educators and instructional designers can use insights from this research to create more engaging and effective learning materials for visual learners. Strategies such as incorporating visual cues, providing interactive activities, and scaffolding complex concepts with multimedia resources can enhance the learning experience for visual learners in online environments. Moreover, online learning platforms can leverage the findings to improve their user interface and features, making them more accessible and inclusive for visual learners. Customization options, adaptive learning algorithms, and personalized recommendations based on learners' preferences and performance can enhance the usability and effectiveness of online platforms for visual learners. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=online%20learning" title="online learning">online learning</a>, <a href="https://publications.waset.org/abstracts/search?q=visual%20learners" title=" visual learners"> visual learners</a>, <a href="https://publications.waset.org/abstracts/search?q=digital%20education" title=" digital education"> digital education</a>, <a href="https://publications.waset.org/abstracts/search?q=technology%20in%20learning" title=" technology in learning"> technology in learning</a> </p> <a href="https://publications.waset.org/abstracts/187032/the-impact-of-online-learning-on-visual-learners" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/187032.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">38</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">12249</span> Audio-Visual Recognition Based on Effective Model and Distillation</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Heng%20Yang">Heng Yang</a>, <a href="https://publications.waset.org/abstracts/search?q=Tao%20Luo"> Tao Luo</a>, <a href="https://publications.waset.org/abstracts/search?q=Yakun%20Zhang"> Yakun Zhang</a>, <a href="https://publications.waset.org/abstracts/search?q=Kai%20Wang"> Kai Wang</a>, <a href="https://publications.waset.org/abstracts/search?q=Wei%20Qin"> Wei Qin</a>, <a href="https://publications.waset.org/abstracts/search?q=Liang%20Xie"> Liang Xie</a>, <a href="https://publications.waset.org/abstracts/search?q=Ye%20Yan"> Ye Yan</a>, <a href="https://publications.waset.org/abstracts/search?q=Erwei%20Yin"> Erwei Yin</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In recent years, audio-visual recognition has shown great potential in strong-noise environments. Existing audio-visual recognition methods have explored ResNet-based models and feature fusion. However, on the one hand, ResNet occupies a large amount of memory, which restricts its application in engineering. 
On the other hand, feature merging also introduces interference in high-noise environments. To solve these problems, we propose an effective framework with bidirectional distillation. First, considering its good performance in feature extraction, we chose the lightweight model EfficientNet as our extractor of spatial features. Second, self-distillation was applied to learn more information from the raw data. Finally, we propose bidirectional distillation for decision-level fusion. Our experimental results are based on a multi-modal dataset from 24 volunteers. Eventually, the lipreading accuracy of our framework increased by 2.3% compared with existing systems, and our framework made progress in audio-visual fusion in a high-noise environment compared with an audio-only recognition system. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=lipreading" title="lipreading">lipreading</a>, <a href="https://publications.waset.org/abstracts/search?q=audio-visual" title=" audio-visual"> audio-visual</a>, <a href="https://publications.waset.org/abstracts/search?q=Efficientnet" title=" Efficientnet"> Efficientnet</a>, <a href="https://publications.waset.org/abstracts/search?q=distillation" title=" distillation"> distillation</a> </p> <a href="https://publications.waset.org/abstracts/146625/audio-visual-recognition-based-on-effective-model-and-distillation" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/146625.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">134</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">12248</span> Reproduction of New Media Art Village around NTUT: Heterotopia of Visual Culture Art Education</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Yu%20Cheng-Yu">Yu Cheng-Yu</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The three subjects ‘Heterotopia’, ‘Visual Culture Art Education’ and ‘New Media’ seem unrelated; in fact, there are synchronicity and intertextuality between them. Visual culture art education develops in students the ability to reflect on popular culture images through visual culture teaching strategies in school, and we should get involved in the community to construct a learning environment that conveys visual culture art. This thesis attempts to probe the heterogeneity of space and value following Michel Foucault and to research a sustainable development strategy for the heterogeneity of the ‘New Media Art Village’, drawing on Jean Baudrillard, Marshall McLuhan's media culture theory and social construction ideology. It is possible to find a new media group that can convey ‘Visual Culture Art Education’ around the National Taipei University of Technology, in a commercial district that combines intelligent technology, fashion, media, entertainment, art education, and a marketing network. The engagement of big data and digital media can make the imagination and innovation of the ‘New Media Art Village’ ‘implementable’ as a new media heterotopia of inter-subjectivity. Visual culture art education will also bring aesthetics into the community through the New Media Art Village. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=social%20construction" title="social construction">social construction</a>, <a href="https://publications.waset.org/abstracts/search?q=heterogeneity" title=" heterogeneity"> heterogeneity</a>, <a href="https://publications.waset.org/abstracts/search?q=new%20media" title=" new media"> new media</a>, <a href="https://publications.waset.org/abstracts/search?q=big%20data" title=" big data"> big data</a>, <a href="https://publications.waset.org/abstracts/search?q=visual%20culture%20art%20education" title=" visual culture art education"> visual culture art education</a> </p> <a href="https://publications.waset.org/abstracts/86311/reproduction-of-new-media-art-village-around-ntut-heterotopia-of-visual-culture-art-education" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/86311.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">248</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">12247</span> Students’ Awareness of the Use of Poster, Power Point and Animated Video Presentations: A Case Study of Third Year Students of the Department of English of Batna University</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Bahloul%20Amel">Bahloul Amel</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The present study debates students’ perceptions of the use of technology in learning English as a Foreign Language. Its aim is to explore and understand students’ preparation and presentation of Posters, PowerPoint and Animated Videos by drawing attention to visual and oral elements. The data is collected through observations and semi-structured interviews and analyzed through phenomenological data analysis steps. The themes emerged from the data, visual learning satisfaction in using information and communication technology, providing structure to oral presentation, learning from peers’ presentations, draw attention to using Posters, PowerPoint and Animated Videos as each supports visual learning and organization of thoughts in oral presentations. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=EFL" title="EFL">EFL</a>, <a href="https://publications.waset.org/abstracts/search?q=posters" title=" posters"> posters</a>, <a href="https://publications.waset.org/abstracts/search?q=PowerPoint%20presentations" title=" PowerPoint presentations"> PowerPoint presentations</a>, <a href="https://publications.waset.org/abstracts/search?q=Animated%20Videos" title=" Animated Videos"> Animated Videos</a>, <a href="https://publications.waset.org/abstracts/search?q=visual%20learning" title=" visual learning"> visual learning</a> </p> <a href="https://publications.waset.org/abstracts/16255/students-awareness-of-the-use-of-poster-power-point-and-animated-video-presentations-a-case-study-of-third-year-students-of-the-department-of-english-of-batna-university" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/16255.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">445</span> </span> </div> </div> <ul class="pagination"> <li class="page-item disabled"><span class="page-link">&lsaquo;</span></li> <li class="page-item active"><span class="page-link">1</span></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=visual%20information&amp;page=2">2</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=visual%20information&amp;page=3">3</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=visual%20information&amp;page=4">4</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=visual%20information&amp;page=5">5</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=visual%20information&amp;page=6">6</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=visual%20information&amp;page=7">7</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=visual%20information&amp;page=8">8</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=visual%20information&amp;page=9">9</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=visual%20information&amp;page=10">10</a></li> <li class="page-item disabled"><span class="page-link">...</span></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=visual%20information&amp;page=409">409</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=visual%20information&amp;page=410">410</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=visual%20information&amp;page=2" rel="next">&rsaquo;</a></li> </ul> </div> </main> <footer> <div id="infolinks" class="pt-3 pb-2"> <div class="container"> <div style="background-color:#f5f5f5;" class="p-3"> <div class="row"> <div class="col-md-2"> <ul class="list-unstyled"> About <li><a href="https://waset.org/page/support">About Us</a></li> <li><a href="https://waset.org/page/support#legal-information">Legal</a></li> <li><a target="_blank" rel="nofollow" 
href="https://publications.waset.org/static/files/WASET-16th-foundational-anniversary.pdf">WASET celebrates its 16th foundational anniversary</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Account <li><a href="https://waset.org/profile">My Account</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Explore <li><a href="https://waset.org/disciplines">Disciplines</a></li> <li><a href="https://waset.org/conferences">Conferences</a></li> <li><a href="https://waset.org/conference-programs">Conference Program</a></li> <li><a href="https://waset.org/committees">Committees</a></li> <li><a href="https://publications.waset.org">Publications</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Research <li><a href="https://publications.waset.org/abstracts">Abstracts</a></li> <li><a href="https://publications.waset.org">Periodicals</a></li> <li><a href="https://publications.waset.org/archive">Archive</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Open Science <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Science-Philosophy.pdf">Open Science Philosophy</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Science-Award.pdf">Open Science Award</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Society-Open-Science-and-Open-Innovation.pdf">Open Innovation</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Postdoctoral-Fellowship-Award.pdf">Postdoctoral Fellowship Award</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Scholarly-Research-Review.pdf">Scholarly Research Review</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Support <li><a href="https://waset.org/page/support">Support</a></li> <li><a href="https://waset.org/profile/messages/create">Contact Us</a></li> <li><a href="https://waset.org/profile/messages/create">Report Abuse</a></li> </ul> </div> </div> </div> </div> </div> <div class="container text-center"> <hr style="margin-top:0;margin-bottom:.3rem;"> <a href="https://creativecommons.org/licenses/by/4.0/" target="_blank" class="text-muted small">Creative Commons Attribution 4.0 International License</a> <div id="copy" class="mt-2">&copy; 2024 World Academy of Science, Engineering and Technology</div> </div> </footer> <a href="javascript:" id="return-to-top"><i class="fas fa-arrow-up"></i></a> <div class="modal" id="modal-template"> <div class="modal-dialog"> <div class="modal-content"> <div class="row m-0 mt-1"> <div class="col-md-12"> <button type="button" class="close" data-dismiss="modal" aria-label="Close"><span aria-hidden="true">&times;</span></button> </div> </div> <div class="modal-body"></div> </div> </div> </div> <script src="https://cdn.waset.org/static/plugins/jquery-3.3.1.min.js"></script> <script src="https://cdn.waset.org/static/plugins/bootstrap-4.2.1/js/bootstrap.bundle.min.js"></script> <script src="https://cdn.waset.org/static/js/site.js?v=150220211556"></script> <script> jQuery(document).ready(function() { /*jQuery.get("https://publications.waset.org/xhr/user-menu", function (response) { jQuery('#mainNavMenu').append(response); });*/ jQuery.get({ url: "https://publications.waset.org/xhr/user-menu", cache: false }).then(function(response){ jQuery('#mainNavMenu').append(response); }); }); </script> </body> </html>
