CINXE.COM
Search results for: multimodal
<!DOCTYPE html> <html lang="en" dir="ltr"> <head> <!-- Google tag (gtag.js) --> <script async src="https://www.googletagmanager.com/gtag/js?id=G-P63WKM1TM1"></script> <script> window.dataLayer = window.dataLayer || []; function gtag(){dataLayer.push(arguments);} gtag('js', new Date()); gtag('config', 'G-P63WKM1TM1'); </script> <!-- Yandex.Metrika counter --> <script type="text/javascript" > (function(m,e,t,r,i,k,a){m[i]=m[i]||function(){(m[i].a=m[i].a||[]).push(arguments)}; m[i].l=1*new Date(); for (var j = 0; j < document.scripts.length; j++) {if (document.scripts[j].src === r) { return; }} k=e.createElement(t),a=e.getElementsByTagName(t)[0],k.async=1,k.src=r,a.parentNode.insertBefore(k,a)}) (window, document, "script", "https://mc.yandex.ru/metrika/tag.js", "ym"); ym(55165297, "init", { clickmap:false, trackLinks:true, accurateTrackBounce:true, webvisor:false }); </script> <noscript><div><img src="https://mc.yandex.ru/watch/55165297" style="position:absolute; left:-9999px;" alt="" /></div></noscript> <!-- /Yandex.Metrika counter --> <!-- Matomo --> <!-- End Matomo Code --> <title>Search results for: multimodal</title> <meta name="description" content="Search results for: multimodal"> <meta name="keywords" content="multimodal"> <meta name="viewport" content="width=device-width, initial-scale=1, minimum-scale=1, maximum-scale=1, user-scalable=no"> <meta charset="utf-8"> <link href="https://cdn.waset.org/favicon.ico" type="image/x-icon" rel="shortcut icon"> <link href="https://cdn.waset.org/static/plugins/bootstrap-4.2.1/css/bootstrap.min.css" rel="stylesheet"> <link href="https://cdn.waset.org/static/plugins/fontawesome/css/all.min.css" rel="stylesheet"> <link href="https://cdn.waset.org/static/css/site.css?v=150220211555" rel="stylesheet"> </head> <body> <header> <div class="container"> <nav class="navbar navbar-expand-lg navbar-light"> <a class="navbar-brand" href="https://waset.org"> <img src="https://cdn.waset.org/static/images/wasetc.png" alt="Open Science Research Excellence" title="Open Science Research Excellence" /> </a> <button class="d-block d-lg-none navbar-toggler ml-auto" type="button" data-toggle="collapse" data-target="#navbarMenu" aria-controls="navbarMenu" aria-expanded="false" aria-label="Toggle navigation"> <span class="navbar-toggler-icon"></span> </button> <div class="w-100"> <div class="d-none d-lg-flex flex-row-reverse"> <form method="get" action="https://waset.org/search" class="form-inline my-2 my-lg-0"> <input class="form-control mr-sm-2" type="search" placeholder="Search Conferences" value="multimodal" name="q" aria-label="Search"> <button class="btn btn-light my-2 my-sm-0" type="submit"><i class="fas fa-search"></i></button> </form> </div> <div class="collapse navbar-collapse mt-1" id="navbarMenu"> <ul class="navbar-nav ml-auto align-items-center" id="mainNavMenu"> <li class="nav-item"> <a class="nav-link" href="https://waset.org/conferences" title="Conferences in 2024/2025/2026">Conferences</a> </li> <li class="nav-item"> <a class="nav-link" href="https://waset.org/disciplines" title="Disciplines">Disciplines</a> </li> <li class="nav-item"> <a class="nav-link" href="https://waset.org/committees" rel="nofollow">Committees</a> </li> <li class="nav-item dropdown"> <a class="nav-link dropdown-toggle" href="#" id="navbarDropdownPublications" role="button" data-toggle="dropdown" aria-haspopup="true" aria-expanded="false"> Publications </a> <div class="dropdown-menu" aria-labelledby="navbarDropdownPublications"> <a class="dropdown-item" href="https://publications.waset.org/abstracts">Abstracts</a> <a class="dropdown-item" href="https://publications.waset.org">Periodicals</a> <a class="dropdown-item" href="https://publications.waset.org/archive">Archive</a> </div> </li> <li class="nav-item"> <a class="nav-link" href="https://waset.org/page/support" title="Support">Support</a> </li> </ul> </div> </div> </nav> </div> </header> <main> <div class="container mt-4"> <div class="row"> <div class="col-md-9 mx-auto"> <form method="get" action="https://publications.waset.org/abstracts/search"> <div id="custom-search-input"> <div class="input-group"> <i class="fas fa-search"></i> <input type="text" class="search-query" name="q" placeholder="Author, Title, Abstract, Keywords" value="multimodal"> <input type="submit" class="btn_search" value="Search"> </div> </div> </form> </div> </div> <div class="row mt-3"> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Commenced</strong> in January 2007</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Frequency:</strong> Monthly</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Edition:</strong> International</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Paper Count:</strong> 222</div> </div> </div> </div> <h1 class="mt-3 mb-3 text-center" style="font-size:1.6rem;">Search results for: multimodal</h1> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">222</span> Identity Verification Based on Multimodal Machine Learning on Red Green Blue (RGB) Red Green Blue-Depth (RGB-D) Voice Data</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=LuoJiaoyang">LuoJiaoyang</a>, <a href="https://publications.waset.org/abstracts/search?q=Yu%20Hongyang"> Yu Hongyang</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In this paper, we experimented with a new approach to multimodal identification using RGB, RGB-D and voice data. The multimodal combination of RGB and voice data has been applied in tasks such as emotion recognition and has shown good results and stability, and it is also the same in identity recognition tasks. We believe that the data of different modalities can enhance the effect of the model through mutual reinforcement. We try to increase the three modalities on the basis of the dual modalities and try to improve the effectiveness of the network by increasing the number of modalities. We also implemented the single-modal identification system separately, tested the data of these different modalities under clean and noisy conditions, and compared the performance with the multimodal model. In the process of designing the multimodal model, we tried a variety of different fusion strategies and finally chose the fusion method with the best performance. The experimental results show that the performance of the multimodal system is better than that of the single modality, especially in dealing with noise, and the multimodal system can achieve an average improvement of 5%. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=multimodal" title="multimodal">multimodal</a>, <a href="https://publications.waset.org/abstracts/search?q=three%20modalities" title=" three modalities"> three modalities</a>, <a href="https://publications.waset.org/abstracts/search?q=RGB-D" title=" RGB-D"> RGB-D</a>, <a href="https://publications.waset.org/abstracts/search?q=identity%20verification" title=" identity verification"> identity verification</a> </p> <a href="https://publications.waset.org/abstracts/163265/identity-verification-based-on-multimodal-machine-learning-on-red-green-blue-rgb-red-green-blue-depth-rgb-d-voice-data" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/163265.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">70</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">221</span> A Comparative Study on Multimodal Metaphors in Public Service Advertising of China and Germany</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Xing%20Lyu">Xing Lyu</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Multimodal metaphor promotes the further development and refinement of multimodal discourse study. Cultural aspects matter a lot not only in creating but also in comprehending multimodal metaphor. By analyzing the target domain and the source domain in 10 public service advertisements of China and Germany about environmental protection, this paper compares the source when the target is alike in each multimodal metaphor in order to seek similarities and differences across cultures. The findings are as follows: first, the multimodal metaphors center around three major topics: the earth crisis, consequences of environmental damage, and appeal for environmental protection; second, the multimodal metaphors mainly grounded in three universal conceptual metaphors which focused on high level is up; earth is mother and all lives are precious. However, there are five Chinese culture-specific multimodal metaphors which are not discovered in Germany ads: east is high leve; a purposeful life is a journey; a nation is a person; good is clean, and water is mother. Since metaphors are excellent instruments on studying ideology, this study can be helpful on intercultural/cross-cultural communication. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=multimodal%20metaphor" title="multimodal metaphor">multimodal metaphor</a>, <a href="https://publications.waset.org/abstracts/search?q=cultural%20aspects" title=" cultural aspects"> cultural aspects</a>, <a href="https://publications.waset.org/abstracts/search?q=public%20service%20advertising" title=" public service advertising"> public service advertising</a>, <a href="https://publications.waset.org/abstracts/search?q=cross-cultural%20communication" title=" cross-cultural communication"> cross-cultural communication</a> </p> <a href="https://publications.waset.org/abstracts/112889/a-comparative-study-on-multimodal-metaphors-in-public-service-advertising-of-china-and-germany" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/112889.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">173</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">220</span> The Optimization of Decision Rules in Multimodal Decision-Level Fusion Scheme</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Andrey%20V.%20Timofeev">Andrey V. Timofeev</a>, <a href="https://publications.waset.org/abstracts/search?q=Dmitry%20V.%20Egorov"> Dmitry V. Egorov</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This paper introduces an original method of parametric optimization of the structure for multimodal decision-level fusion scheme which combines the results of the partial solution of the classification task obtained from assembly of the mono-modal classifiers. As a result, a multimodal fusion classifier which has the minimum value of the total error rate has been obtained. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=classification%20accuracy" title="classification accuracy">classification accuracy</a>, <a href="https://publications.waset.org/abstracts/search?q=fusion%20solution" title=" fusion solution"> fusion solution</a>, <a href="https://publications.waset.org/abstracts/search?q=total%20error%20rate" title=" total error rate"> total error rate</a>, <a href="https://publications.waset.org/abstracts/search?q=multimodal%20fusion%20classifier" title=" multimodal fusion classifier"> multimodal fusion classifier</a> </p> <a href="https://publications.waset.org/abstracts/26088/the-optimization-of-decision-rules-in-multimodal-decision-level-fusion-scheme" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/26088.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">466</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">219</span> Multimodal Data Fusion Techniques in Audiovisual Speech Recognition</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Hadeer%20M.%20Sayed">Hadeer M. Sayed</a>, <a href="https://publications.waset.org/abstracts/search?q=Hesham%20E.%20El%20Deeb"> Hesham E. El Deeb</a>, <a href="https://publications.waset.org/abstracts/search?q=Shereen%20A.%20Taie"> Shereen A. Taie</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In the big data era, we are facing a diversity of datasets from different sources in different domains that describe a single life event. These datasets consist of multiple modalities, each of which has a different representation, distribution, scale, and density. Multimodal fusion is the concept of integrating information from multiple modalities in a joint representation with the goal of predicting an outcome through a classification task or regression task. In this paper, multimodal fusion techniques are classified into two main classes: model-agnostic techniques and model-based approaches. It provides a comprehensive study of recent research in each class and outlines the benefits and limitations of each of them. Furthermore, the audiovisual speech recognition task is expressed as a case study of multimodal data fusion approaches, and the open issues through the limitations of the current studies are presented. This paper can be considered a powerful guide for interested researchers in the field of multimodal data fusion and audiovisual speech recognition particularly. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=multimodal%20data" title="multimodal data">multimodal data</a>, <a href="https://publications.waset.org/abstracts/search?q=data%20fusion" title=" data fusion"> data fusion</a>, <a href="https://publications.waset.org/abstracts/search?q=audio-visual%20speech%20recognition" title=" audio-visual speech recognition"> audio-visual speech recognition</a>, <a href="https://publications.waset.org/abstracts/search?q=neural%20networks" title=" neural networks"> neural networks</a> </p> <a href="https://publications.waset.org/abstracts/157362/multimodal-data-fusion-techniques-in-audiovisual-speech-recognition" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/157362.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">111</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">218</span> OPEN-EmoRec-II-A Multimodal Corpus of Human-Computer Interaction</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Stefanie%20Rukavina">Stefanie Rukavina</a>, <a href="https://publications.waset.org/abstracts/search?q=Sascha%20Gruss"> Sascha Gruss</a>, <a href="https://publications.waset.org/abstracts/search?q=Steffen%20Walter"> Steffen Walter</a>, <a href="https://publications.waset.org/abstracts/search?q=Holger%20Hoffmann"> Holger Hoffmann</a>, <a href="https://publications.waset.org/abstracts/search?q=Harald%20C.%20Traue"> Harald C. Traue</a> </p> <p class="card-text"><strong>Abstract:</strong></p> OPEN-EmoRecII is an open multimodal corpus with experimentally induced emotions. In the first half of the experiment, emotions were induced with standardized picture material and in the second half during a human-computer interaction (HCI), realized with a wizard-of-oz design. The induced emotions are based on the dimensional theory of emotions (valence, arousal and dominance). These emotional sequences - recorded with multimodal data (mimic reactions, speech, audio and physiological reactions) during a naturalistic-like HCI-environment one can improve classification methods on a multimodal level. This database is the result of an HCI-experiment, for which 30 subjects in total agreed to a publication of their data including the video material for research purposes. The now available open corpus contains sensory signal of: video, audio, physiology (SCL, respiration, BVP, EMG Corrugator supercilii, EMG Zygomaticus Major) and mimic annotations. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=open%20multimodal%20emotion%20corpus" title="open multimodal emotion corpus">open multimodal emotion corpus</a>, <a href="https://publications.waset.org/abstracts/search?q=annotated%20labels" title=" annotated labels"> annotated labels</a>, <a href="https://publications.waset.org/abstracts/search?q=intelligent%20interaction" title=" intelligent interaction"> intelligent interaction</a> </p> <a href="https://publications.waset.org/abstracts/29365/open-emorec-ii-a-multimodal-corpus-of-human-computer-interaction" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/29365.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">416</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">217</span> New Approach for Constructing a Secure Biometric Database</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=A.%20Kebbeb">A. Kebbeb</a>, <a href="https://publications.waset.org/abstracts/search?q=M.%20Mostefai"> M. Mostefai</a>, <a href="https://publications.waset.org/abstracts/search?q=F.%20Benmerzoug"> F. Benmerzoug</a>, <a href="https://publications.waset.org/abstracts/search?q=Y.%20Chahir"> Y. Chahir</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The multimodal biometric identification is the combination of several biometric systems. The challenge of this combination is to reduce some limitations of systems based on a single modality while significantly improving performance. In this paper, we propose a new approach to the construction and the protection of a multimodal biometric database dedicated to an identification system. We use a topological watermarking to hide the relation between face image and the registered descriptors extracted from other modalities of the same person for more secure user identification. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=biometric%20databases" title="biometric databases">biometric databases</a>, <a href="https://publications.waset.org/abstracts/search?q=multimodal%20biometrics" title=" multimodal biometrics"> multimodal biometrics</a>, <a href="https://publications.waset.org/abstracts/search?q=security%20authentication" title=" security authentication"> security authentication</a>, <a href="https://publications.waset.org/abstracts/search?q=digital%20watermarking" title=" digital watermarking"> digital watermarking</a> </p> <a href="https://publications.waset.org/abstracts/3126/new-approach-for-constructing-a-secure-biometric-database" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/3126.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">390</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">216</span> Teaching and Learning with Picturebooks: Developing Multimodal Literacy with a Community of Primary School Teachers in China</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Fuling%20Deng">Fuling Deng</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Today’s children are frequently exposed to multimodal texts that adopt diverse modes to communicate myriad meanings within different cultural contexts. To respond to the new textual landscape, scholars have considered new literacy theories which propose picturebooks as important educational resources. Picturebooks are multimodal, with their meaning conveyed through the synchronisation of multiple modes, including linguistic, visual, spatial, and gestural acting as access to multimodal literacy. Picturebooks have been popular reading materials in primary educational settings in China. However, often viewed as “easy” texts directed at the youngest readers, picturebooks remain on the margins of Chinese upper primary classrooms, where they are predominantly used for linguistic tasks, with little value placed on their multimodal affordances. Practices with picturebooks in the upper grades in Chinese primary schools also encounter many challenges associated with the curation of texts for use, designing curriculum, and assessment. To respond to these issues, a qualitative study was conducted with a community of Chinese primary teachers using multi-methods such as interviews, focus groups, and documents. The findings showed the impact of the teachers’ increased awareness of picturebooks' multimodal affordances on their pedagogical decisions in using picturebooks as educational resources in upper primary classrooms. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=picturebook%20education" title="picturebook education">picturebook education</a>, <a href="https://publications.waset.org/abstracts/search?q=multimodal%20literacy" title=" multimodal literacy"> multimodal literacy</a>, <a href="https://publications.waset.org/abstracts/search?q=teachers%27%20response%20to%20contemporary%20picturebooks" title=" teachers' response to contemporary picturebooks"> teachers' response to contemporary picturebooks</a>, <a href="https://publications.waset.org/abstracts/search?q=community%20of%20practice" title=" community of practice"> community of practice</a> </p> <a href="https://publications.waset.org/abstracts/156547/teaching-and-learning-with-picturebooks-developing-multimodal-literacy-with-a-community-of-primary-school-teachers-in-china" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/156547.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">136</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">215</span> Multimodal Content: Fostering Students’ Language and Communication Competences</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Victoria%20L.%20Malakhova">Victoria L. Malakhova</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The research is devoted to multimodal content and its effectiveness in developing students’ linguistic and intercultural communicative competences as an indefeasible constituent of their future professional activity. Description of multimodal content both as a linguistic and didactic phenomenon makes the study relevant. The objective of the article is the analysis of creolized texts and the effect they have on fostering higher education students’ skills and their productivity. The main methods used are linguistic text analysis, qualitative and quantitative methods, deduction, generalization. The author studies texts with full and partial creolization, their features and role in composing multimodal textual space. The main verbal and non-verbal markers and paralinguistic means that enhance the linguo-pragmatic potential of creolized texts are covered. To reveal the efficiency of multimodal content application in English teaching, the author conducts an experiment among both undergraduate students and teachers. This allows specifying main functions of creolized texts in the process of language learning, detecting ways of enhancing students’ competences, and increasing their motivation. The described stages of using creolized texts can serve as an algorithm for work with multimodal content in teaching English as a foreign language. The findings contribute to improving the efficiency of the academic process. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=creolized%20text" title="creolized text">creolized text</a>, <a href="https://publications.waset.org/abstracts/search?q=English%20language%20learning" title=" English language learning"> English language learning</a>, <a href="https://publications.waset.org/abstracts/search?q=higher%20education" title=" higher education"> higher education</a>, <a href="https://publications.waset.org/abstracts/search?q=language%20and%20communication%20competences" title=" language and communication competences"> language and communication competences</a>, <a href="https://publications.waset.org/abstracts/search?q=multimodal%20content" title=" multimodal content"> multimodal content</a> </p> <a href="https://publications.waset.org/abstracts/151423/multimodal-content-fostering-students-language-and-communication-competences" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/151423.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">112</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">214</span> A Proposal of Multi-modal Teaching Model for College English</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Huang%20Yajing">Huang Yajing</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Multimodal discourse refers to the phenomenon of using various senses such as hearing, vision, and touch to communicate through various means and symbolic resources such as language, images, sounds, and movements. With the development of modern technology and multimedia, language and technology have become inseparable, and foreign language teaching is becoming more and more modal. Teacher-student communication resorts to multiple senses and uses multiple symbol systems to construct and interpret meaning. The classroom is a semiotic space where multimodal discourses are intertwined. College English multi-modal teaching is to rationally utilize traditional teaching methods while mobilizing and coordinating various modern teaching methods to form a joint force to promote teaching and learning. Multimodal teaching makes full and reasonable use of various meaning resources and can maximize the advantages of multimedia and network environments. Based upon the above theories about multimodal discourse and multimedia technology, the present paper will propose a multi-modal teaching model for college English in China. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=multimodal%20discourse" title="multimodal discourse">multimodal discourse</a>, <a href="https://publications.waset.org/abstracts/search?q=multimedia%20technology" title=" multimedia technology"> multimedia technology</a>, <a href="https://publications.waset.org/abstracts/search?q=English%20education" title=" English education"> English education</a>, <a href="https://publications.waset.org/abstracts/search?q=applied%20linguistics" title=" applied linguistics"> applied linguistics</a> </p> <a href="https://publications.waset.org/abstracts/183810/a-proposal-of-multi-modal-teaching-model-for-college-english" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/183810.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">68</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">213</span> An Exploration of Promoting EFL Students’ Language Learning Autonomy Using Multimodal Teaching - A Case Study of an Art University in Western China</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Dian%20Guan">Dian Guan</a> </p> <p class="card-text"><strong>Abstract:</strong></p> With the wide application of multimedia and the Internet, the development of teaching theories, and the implementation of teaching reforms, many different university English classroom teaching modes have emerged. The university English teaching mode is changing from the traditional teaching mode based on conversation and text to the multimodal English teaching mode containing discussion, pictures, audio, film, etc. Applying university English teaching models is conducive to cultivating lifelong learning skills. In addition, lifelong learning skills can also be called learners' autonomous learning skills. Learners' independent learning ability has a significant impact on English learning. However, many university students, especially art and design students, don't know how to learn individually. When they become university students, their English foundation is a relative deficiency because they always remember the language in a traditional way, which, to a certain extent, neglects the cultivation of English learners' independent ability. As a result, the autonomous learning ability of most university students is not satisfactory. The participants in this study were 60 students and one teacher in their first year at a university in western China. Two observations and interviews were conducted inside and outside the classroom to understand the impact of a multimodal teaching model of university English on students' autonomous learning ability. The results were analyzed, and it was found that the multimodal teaching model of university English significantly affected learners' autonomy. Incorporating classroom presentations and poster exhibitions into multimodal teaching can increase learners' interest in learning and enhance their learning ability outside the classroom. However, further exploration is needed to develop multimodal teaching materials and evaluate multimodal teaching outcomes. Despite the limitations of this study, the study adopts a scientific research method to analyze the impact of the multimodal teaching mode of university English on students' independent learning ability. It puts forward a different outlook for further research on this topic. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=art%20university" title="art university">art university</a>, <a href="https://publications.waset.org/abstracts/search?q=EFL%20education" title=" EFL education"> EFL education</a>, <a href="https://publications.waset.org/abstracts/search?q=learner%20autonomy" title=" learner autonomy"> learner autonomy</a>, <a href="https://publications.waset.org/abstracts/search?q=multimodal%20pedagogy" title=" multimodal pedagogy"> multimodal pedagogy</a> </p> <a href="https://publications.waset.org/abstracts/176520/an-exploration-of-promoting-efl-students-language-learning-autonomy-using-multimodal-teaching-a-case-study-of-an-art-university-in-western-china" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/176520.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">101</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">212</span> Multimodal Characterization of Emotion within Multimedia Space</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Dayo%20Samuel%20Banjo">Dayo Samuel Banjo</a>, <a href="https://publications.waset.org/abstracts/search?q=Connice%20Trimmingham"> Connice Trimmingham</a>, <a href="https://publications.waset.org/abstracts/search?q=Niloofar%20Yousefi"> Niloofar Yousefi</a>, <a href="https://publications.waset.org/abstracts/search?q=Nitin%20Agarwal"> Nitin Agarwal</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Technological advancement and its omnipresent connection have pushed humans past the boundaries and limitations of a computer screen, physical state, or geographical location. It has provided a depth of avenues that facilitate human-computer interaction that was once inconceivable such as audio and body language detection. Given the complex modularities of emotions, it becomes vital to study human-computer interaction, as it is the commencement of a thorough understanding of the emotional state of users and, in the context of social networks, the producers of multimodal information. This study first acknowledges the accuracy of classification found within multimodal emotion detection systems compared to unimodal solutions. Second, it explores the characterization of multimedia content produced based on their emotions and the coherence of emotion in different modalities by utilizing deep learning models to classify emotion across different modalities. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=affective%20computing" title="affective computing">affective computing</a>, <a href="https://publications.waset.org/abstracts/search?q=deep%20learning" title=" deep learning"> deep learning</a>, <a href="https://publications.waset.org/abstracts/search?q=emotion%20recognition" title=" emotion recognition"> emotion recognition</a>, <a href="https://publications.waset.org/abstracts/search?q=multimodal" title=" multimodal"> multimodal</a> </p> <a href="https://publications.waset.org/abstracts/157830/multimodal-characterization-of-emotion-within-multimedia-space" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/157830.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">156</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">211</span> Multimodal Sentiment Analysis With Web Based Application</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Shreyansh%20Singh">Shreyansh Singh</a>, <a href="https://publications.waset.org/abstracts/search?q=Afroz%20Ahmed"> Afroz Ahmed</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Sentiment Analysis intends to naturally reveal the hidden mentality that we hold towards an entity. The total of this assumption over a populace addresses sentiment surveying and has various applications. Current text-based sentiment analysis depends on the development of word embeddings and Machine Learning models that take in conclusion from enormous text corpora. Sentiment Analysis from text is presently generally utilized for consumer loyalty appraisal and brand insight investigation. With the expansion of online media, multimodal assessment investigation is set to carry new freedoms with the appearance of integral information streams for improving and going past text-based feeling examination using the new transforms methods. Since supposition can be distinguished through compelling follows it leaves, like facial and vocal presentations, multimodal opinion investigation offers good roads for examining facial and vocal articulations notwithstanding the record or printed content. These methodologies use the Recurrent Neural Networks (RNNs) with the LSTM modes to increase their performance. In this study, we characterize feeling and the issue of multimodal assessment investigation and audit ongoing advancements in multimodal notion examination in various spaces, including spoken surveys, pictures, video websites, human-machine, and human-human connections. Difficulties and chances of this arising field are additionally examined, promoting our theory that multimodal feeling investigation holds critical undiscovered potential. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=sentiment%20analysis" title="sentiment analysis">sentiment analysis</a>, <a href="https://publications.waset.org/abstracts/search?q=RNN" title=" RNN"> RNN</a>, <a href="https://publications.waset.org/abstracts/search?q=LSTM" title=" LSTM"> LSTM</a>, <a href="https://publications.waset.org/abstracts/search?q=word%20embeddings" title=" word embeddings"> word embeddings</a> </p> <a href="https://publications.waset.org/abstracts/150082/multimodal-sentiment-analysis-with-web-based-application" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/150082.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">119</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">210</span> Integrating Critical Stylistics and Visual Grammar: A Multimodal Stylistic Approach to the Analysis of Non-Literary Texts</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Shatha%20Khuzaee">Shatha Khuzaee</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The study develops multimodal stylistic approach to analyse a number of BBC online news articles reporting some key events from the so called ‘Arab Uprisings’. Critical stylistics (CS) and visual grammar (VG) provide insightful arguments to the ways ideology is projected through different verbal and visual modes, yet they are mode specific because they examine how each mode projects its meaning separately and do not attempt to clarify what happens intersemiotically when the two modes co-occur. Therefore, it is the task undertaken in this research to propose multimodal stylistic approach that addresses the issue of ideology construction when the two modes co-occur. Informed by functional grammar and social semiotics, the analysis attempts to integrate three linguistic models developed in critical stylistics, namely, transitivity choices, prioritizing and hypothesizing along with their visual equivalents adopted from visual grammar to investigate the way ideology is constructed, in multimodal text, when text/image participate and interrelate in the process of meaning making on the textual level of analysis. The analysis provides comprehensive theoretical and analytical elaborations on the different points of integration between CS linguistic models and VG equivalents which operate on the textual level of analysis to better account for ideology construction in news as non-literary multimodal texts. It is argued that the analysis well thought out a plan that would remark the first step towards the integration between the well-established linguistic models of critical stylistics and that of visual analysis to analyse multimodal texts on the textual level. Both approaches are compatible to produce multimodal stylistic approach because they intend to analyse text and image depending on whatever textual evidence is available. This supports the analysis maintain the rigor and replicability needed for a stylistic analysis like the one undertaken in this study. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=multimodality" title="multimodality">multimodality</a>, <a href="https://publications.waset.org/abstracts/search?q=stylistics" title=" stylistics"> stylistics</a>, <a href="https://publications.waset.org/abstracts/search?q=visual%20grammar" title=" visual grammar"> visual grammar</a>, <a href="https://publications.waset.org/abstracts/search?q=social%20semiotics" title=" social semiotics"> social semiotics</a>, <a href="https://publications.waset.org/abstracts/search?q=functional%20grammar" title=" functional grammar"> functional grammar</a> </p> <a href="https://publications.waset.org/abstracts/77486/integrating-critical-stylistics-and-visual-grammar-a-multimodal-stylistic-approach-to-the-analysis-of-non-literary-texts" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/77486.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">221</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">209</span> Two Weeks of Multi-Modal Inpatient Treatment: Patients Suffering from Chronic Musculoskeletal Pain for over 12 Months</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=D.%20Schafer">D. Schafer</a>, <a href="https://publications.waset.org/abstracts/search?q=H.%20Booke"> H. Booke</a>, <a href="https://publications.waset.org/abstracts/search?q=R.%20Nordmeier"> R. Nordmeier</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Patients suffering from chronic musculoskeletal pain ( > 12 months) are a challenging clientele for pain specialists. A multimodal approach, characterized by a two weeks inpatient treatment, often is the ultimate therapeutic attempt. The lasting effects of such a multimodal approach were analyzed, especially since two weeks of inpatient therapy, although very intense, often seem too short to make a difference in patients suffering from chronic pain for years. The study includes 32 consecutive patients suffering from chronic pain over years who underwent a two weeks multimodal inpatient treatment of pain. Twelve months after discharge, each patient was interviewed to objectify any lasting effects. Pain was measured on admission and 12 months after discharge using the numeric rating scale (NRS). For statistics, a paired students' t-test was used. Significance was defined as p < 0.05. The average intensity of pain on admission was 8,6 on the NRS. Twelve months after discharge, the intensity of pain was still reduced by an average of 48% (average NRS 4,4), p < 0.05. Despite this significant improvement in pain severity, two thirds (66%) of the patients still judge their treatment as not sufficient. In conclusion, inpatient treatment of chronic pain has a long-lasting effect on the intensity of pain in patients suffering from chronic musculoskeletal pain for more than 12 months. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=chronic%20pain" title="chronic pain">chronic pain</a>, <a href="https://publications.waset.org/abstracts/search?q=inpatient%20treatment" title=" inpatient treatment"> inpatient treatment</a>, <a href="https://publications.waset.org/abstracts/search?q=multimodal%20pain%20treatment" title=" multimodal pain treatment"> multimodal pain treatment</a>, <a href="https://publications.waset.org/abstracts/search?q=musculoskeletal%20pain" title=" musculoskeletal pain"> musculoskeletal pain</a> </p> <a href="https://publications.waset.org/abstracts/130697/two-weeks-of-multi-modal-inpatient-treatment-patients-suffering-from-chronic-musculoskeletal-pain-for-over-12-months" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/130697.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">165</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">208</span> Navigating the Case-Based Learning Multimodal Learning Environment: A Qualitative Study Across the First-Year Medical Students</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Bhavani%20Veasuvalingam">Bhavani Veasuvalingam</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Case-based learning (CBL) is a popular instructional method aimed to bridge theory to clinical practice. This study aims to explore CBL mixed modality curriculum in influencing students’ learning styles and strategies that support learning. An explanatory sequential mixed method study was employed with initial phase, 44-itemed Felderman’s Index of Learning Style (ILS) questionnaire employed across year one medical students (n=142) using convenience sampling to describe the preferred learning styles. The qualitative phase utilised three focus group discussions (FGD) to explore in depth on the multimodal learning style exhibited by the students. Most students preferred combination of learning stylesthat is reflective, sensing, visual and sequential i.e.: RSVISeq style (24.64%) from the ILS analysis. The frequency of learning preference from processing to understanding were well balanced, with sequential-global domain (66.2%); sensing-intuitive (59.86%), active- reflective (57%), and visual-verbal (51.41%). The qualitative data reported three major themes, namely Theme 1: CBL mixed modalities navigates learners’ learning style; Theme 2: Multimodal learners active learning strategies supports learning. Theme 3: CBL modalities facilitating theory into clinical knowledge. Both quantitative and qualitative study strongly reports the multimodal learning style of the year one medical students. Medical students utilise multimodal learning styles to attain the clinical knowledge when learning with CBL mixed modalities. Educators’ awareness of the multimodal learning style is crucial in delivering the CBL mixed modalities effectively, considering strategic pedagogical support students to engage and learn CBL in bridging the theoretical knowledge into clinical practice. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=case-based%20learning" title="case-based learning">case-based learning</a>, <a href="https://publications.waset.org/abstracts/search?q=learnign%20style" title=" learnign style"> learnign style</a>, <a href="https://publications.waset.org/abstracts/search?q=medical%20students" title=" medical students"> medical students</a>, <a href="https://publications.waset.org/abstracts/search?q=learning" title=" learning"> learning</a> </p> <a href="https://publications.waset.org/abstracts/151162/navigating-the-case-based-learning-multimodal-learning-environment-a-qualitative-study-across-the-first-year-medical-students" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/151162.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">95</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">207</span> Analysing Techniques for Fusing Multimodal Data in Predictive Scenarios Using Convolutional Neural Networks</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Philipp%20Ruf">Philipp Ruf</a>, <a href="https://publications.waset.org/abstracts/search?q=Massiwa%20Chabbi"> Massiwa Chabbi</a>, <a href="https://publications.waset.org/abstracts/search?q=Christoph%20Reich"> Christoph Reich</a>, <a href="https://publications.waset.org/abstracts/search?q=Djaffar%20Ould-Abdeslam"> Djaffar Ould-Abdeslam</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In recent years, convolutional neural networks (CNN) have demonstrated high performance in image analysis, but oftentimes, there is only structured data available regarding a specific problem. By interpreting structured data as images, CNNs can effectively learn and extract valuable insights from tabular data, leading to improved predictive accuracy and uncovering hidden patterns that may not be apparent in traditional structured data analysis. In applying a single neural network for analyzing multimodal data, e.g., both structured and unstructured information, significant advantages in terms of time complexity and energy efficiency can be achieved. Converting structured data into images and merging them with existing visual material offers a promising solution for applying CNN in multimodal datasets, as they often occur in a medical context. By employing suitable preprocessing techniques, structured data is transformed into image representations, where the respective features are expressed as different formations of colors and shapes. In an additional step, these representations are fused with existing images to incorporate both types of information. This final image is finally analyzed using a CNN. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=CNN" title="CNN">CNN</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20processing" title=" image processing"> image processing</a>, <a href="https://publications.waset.org/abstracts/search?q=tabular%20data" title=" tabular data"> tabular data</a>, <a href="https://publications.waset.org/abstracts/search?q=mixed%20dataset" title=" mixed dataset"> mixed dataset</a>, <a href="https://publications.waset.org/abstracts/search?q=data%20transformation" title=" data transformation"> data transformation</a>, <a href="https://publications.waset.org/abstracts/search?q=multimodal%20fusion" title=" multimodal fusion"> multimodal fusion</a> </p> <a href="https://publications.waset.org/abstracts/171840/analysing-techniques-for-fusing-multimodal-data-in-predictive-scenarios-using-convolutional-neural-networks" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/171840.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">123</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">206</span> Dual Biometrics Fusion Based Recognition System</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Prakash">Prakash</a>, <a href="https://publications.waset.org/abstracts/search?q=Vikash%20Kumar"> Vikash Kumar</a>, <a href="https://publications.waset.org/abstracts/search?q=Vinay%20Bansal"> Vinay Bansal</a>, <a href="https://publications.waset.org/abstracts/search?q=L.%20N.%20Das"> L. N. Das</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Dual biometrics is a subpart of multimodal biometrics, which refers to the use of a variety of modalities to identify and authenticate persons rather than just one. We limit the risks of mistakes by mixing several modals, and hackers have a tiny possibility of collecting information. Our goal is to collect the precise characteristics of iris and palmprint, produce a fusion of both methodologies, and ensure that authentication is only successful when the biometrics match a particular user. After combining different modalities, we created an effective strategy with a mean DI and EER of 2.41 and 5.21, respectively. A biometric system has been proposed. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=multimodal" title="multimodal">multimodal</a>, <a href="https://publications.waset.org/abstracts/search?q=fusion" title=" fusion"> fusion</a>, <a href="https://publications.waset.org/abstracts/search?q=palmprint" title=" palmprint"> palmprint</a>, <a href="https://publications.waset.org/abstracts/search?q=Iris" title=" Iris"> Iris</a>, <a href="https://publications.waset.org/abstracts/search?q=EER" title=" EER"> EER</a>, <a href="https://publications.waset.org/abstracts/search?q=DI" title=" DI"> DI</a> </p> <a href="https://publications.waset.org/abstracts/149996/dual-biometrics-fusion-based-recognition-system" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/149996.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">147</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">205</span> A Multimodal Approach to Improve the Performance of Biometric System</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Chander%20Kant">Chander Kant</a>, <a href="https://publications.waset.org/abstracts/search?q=Arun%20Kumar"> Arun Kumar</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Biometric systems automatically recognize an individual based on his/her physiological and behavioral characteristics. There are also some traits like weight, age, height etc. that may not provide reliable user recognition because of there common and temporary nature. These traits are called soft bio metric traits. Although soft bio metric traits are lack of permanence to uniquely and reliably identify an individual, yet they provide some beneficial evidence about the user identity and may improve the system performance. Here in this paper, we have proposed an approach for integrating the soft bio metrics with fingerprint and face to improve the performance of personal authentication system. In our approach we have proposed a combined architecture of three different sensors to elevate the system performance. The approach includes, soft bio metrics, fingerprint and face traits. We have also proven the efficiency of proposed system regarding FAR (False Acceptance Ratio) and total response time, with the help of MUBI (Multimodal Bio metrics Integration) software. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=FAR" title="FAR">FAR</a>, <a href="https://publications.waset.org/abstracts/search?q=minutiae%20point" title=" minutiae point"> minutiae point</a>, <a href="https://publications.waset.org/abstracts/search?q=multimodal%20bio%20metrics" title=" multimodal bio metrics"> multimodal bio metrics</a>, <a href="https://publications.waset.org/abstracts/search?q=primary%20bio%20metric" title=" primary bio metric"> primary bio metric</a>, <a href="https://publications.waset.org/abstracts/search?q=soft%20bio%20metric" title=" soft bio metric"> soft bio metric</a> </p> <a href="https://publications.waset.org/abstracts/12625/a-multimodal-approach-to-improve-the-performance-of-biometric-system" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/12625.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">346</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">204</span> Filmic and Verbal Metafphors</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Manana%20Rusieshvili">Manana Rusieshvili</a>, <a href="https://publications.waset.org/abstracts/search?q=Rusudan%20Dolidze"> Rusudan Dolidze</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This paper aims at 1) investigating the ways in which a traditional, monomodal written verbal metaphor can be transposed as a monomodal non-verbal (visual) or multimodal (aural and -visual) filmic metaphor ; 2) exploring similarities and differences in the process of encoding and decoding of monomodal and multimodal metaphors. The empiric data, on which the research is based, embrace three sources: the novel by Harry Gray ‘The Hoods’, the script of the film ‘Once Upon a Time in America’ (English version by David Mills) and the resultant film by Sergio Leone. In order to achieve the above mentioned goals, the research focuses on the following issues: 1) identification of verbal and non-verbal monomodal and multimodal metaphors in the above-mentioned sources and 2) investigation of the ways and modes the specific written monomodal metaphors appearing in the novel and the script are enacted in the film and become visual, aural or visual-aural filmic metaphors ; 3) study of the factors which play an important role in contributing to the encoding and decoding of the filmic metaphor. The collection and analysis of the data were carried out in two stages: firstly, the relevant data, i.e. the monomodal metaphors from the novel, the script and the film were identified and collected. In the second, final stage the metaphors taken from all of the three sources were analysed, compared and two types of phenomena were selected for discussion: (1) the monomodal written metaphors found in the novel and/or in the script which become monomodal visual/aural metaphors in the film; (2) the monomodal written metaphors found in the novel and/or in the script which become multimodal, filmic (visual-aural) metaphors in the film. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=encoding" title="encoding">encoding</a>, <a href="https://publications.waset.org/abstracts/search?q=decoding" title=" decoding"> decoding</a>, <a href="https://publications.waset.org/abstracts/search?q=filmic%20metaphor" title=" filmic metaphor"> filmic metaphor</a>, <a href="https://publications.waset.org/abstracts/search?q=multimodality" title=" multimodality"> multimodality</a> </p> <a href="https://publications.waset.org/abstracts/24927/filmic-and-verbal-metafphors" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/24927.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">526</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">203</span> Efficient Layout-Aware Pretraining for Multimodal Form Understanding</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Armineh%20Nourbakhsh">Armineh Nourbakhsh</a>, <a href="https://publications.waset.org/abstracts/search?q=Sameena%20Shah"> Sameena Shah</a>, <a href="https://publications.waset.org/abstracts/search?q=Carolyn%20Rose"> Carolyn Rose</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Layout-aware language models have been used to create multimodal representations for documents that are in image form, achieving relatively high accuracy in document understanding tasks. However, the large number of parameters in the resulting models makes building and using them prohibitive without access to high-performing processing units with large memory capacity. We propose an alternative approach that can create efficient representations without the need for a neural visual backbone. This leads to an 80% reduction in the number of parameters compared to the smallest SOTA model, widely expanding applicability. In addition, our layout embeddings are pre-trained on spatial and visual cues alone and only fused with text embeddings in downstream tasks, which can facilitate applicability to low-resource of multi-lingual domains. Despite using 2.5% of training data, we show competitive performance on two form understanding tasks: semantic labeling and link prediction. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=layout%20understanding" title="layout understanding">layout understanding</a>, <a href="https://publications.waset.org/abstracts/search?q=form%20understanding" title=" form understanding"> form understanding</a>, <a href="https://publications.waset.org/abstracts/search?q=multimodal%20document%20understanding" title=" multimodal document understanding"> multimodal document understanding</a>, <a href="https://publications.waset.org/abstracts/search?q=bias-augmented%20attention" title=" bias-augmented attention"> bias-augmented attention</a> </p> <a href="https://publications.waset.org/abstracts/147955/efficient-layout-aware-pretraining-for-multimodal-form-understanding" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/147955.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">148</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">202</span> A Multimodal Approach towards Intersemiotic Translations of 'The Great Gatsby'</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Neda%20Razavi%20Kaleibar">Neda Razavi Kaleibar</a>, <a href="https://publications.waset.org/abstracts/search?q=Bahloul%20Salmani"> Bahloul Salmani</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The present study dealt with the multimodal analysis of two cinematic adaptations of The Great Gatsby as intersemiotic translation. The assessment in this study went beyond the faithfulness based on repetition, addition, deletion, and creation which limit the analysis from other aspects. In fact, this research aimed to pinpoint the role of multimodality in examining the intersemiotic translations of the novel into film by means of analyzing different applied modes. Through a qualitative type of research, the analysis was conducted based on the theory proposed by Burn as Kineikonic mode theory derived from the concept of multimodality. The results of the study revealed that due to the applied modes, each adaptation represents a sense and meaning different from the other one. Analyzing the results and discussions, it was concluded that not only the modes have an undeniable role in film adaptations, but rather multimodal analysis including different nonverbal modes can be a useful and functional choice for analyzing the intersemiotic translations. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=cinematic%20adaptation" title="cinematic adaptation">cinematic adaptation</a>, <a href="https://publications.waset.org/abstracts/search?q=intersemiotic%20translation" title=" intersemiotic translation"> intersemiotic translation</a>, <a href="https://publications.waset.org/abstracts/search?q=kineikonic%20mode" title=" kineikonic mode"> kineikonic mode</a>, <a href="https://publications.waset.org/abstracts/search?q=multimodality" title=" multimodality"> multimodality</a> </p> <a href="https://publications.waset.org/abstracts/63177/a-multimodal-approach-towards-intersemiotic-translations-of-the-great-gatsby" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/63177.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">421</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">201</span> Analyzing Political Cartoons in Arabic-Language Media after Trump's Jerusalem Move: A Multimodal Discourse Perspective</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Inas%20Hussein">Inas Hussein</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Communication in the modern world is increasingly becoming multimodal due to globalization and the digital space we live in which have remarkably affected how people communicate. Accordingly, Multimodal Discourse Analysis (MDA) is an emerging paradigm in discourse studies with the underlying assumption that other semiotic resources such as images, colours, scientific symbolism, gestures, actions, music and sound, etc. combine with language in order to communicate meaning. One of the effective multimodal media that combines both verbal and non-verbal elements to create meaning is political cartoons. Furthermore, since political and social issues are mirrored in political cartoons, these are regarded as potential objects of discourse analysis since they not only reflect the thoughts of the public but they also have the power to influence them. The aim of this paper is to analyze some selected cartoons on the recognition of Jerusalem as Israel's capital by the American President, Donald Trump, adopting a multimodal approach. More specifically, the present research examines how the various semiotic tools and resources utilized by the cartoonists function in projecting the intended meaning. Ten political cartoons, among a surge of editorial cartoons highlighted by the Anti-Defamation League (ADL) - an international Jewish non-governmental organization based in the United States - as publications in different Arabic-language newspapers in Egypt, Saudi Arabia, UAE, Oman, Iran and UK, were purposively selected for semiotic analysis. These editorial cartoons, all published during 6<sup>th</sup>–18<sup>th</sup> December 2017, invariably suggest one theme: Jewish and Israeli domination of the United States. The data were analyzed using the framework of Visual Social Semiotics. In accordance with this methodological framework, the selected visual compositions were analyzed in terms of three aspects of meaning: representational, interactive and compositional. In analyzing the selected cartoons, an interpretative approach is being adopted. This approach prioritizes depth to breadth and enables insightful analyses of the chosen cartoons. The findings of the study reveal that semiotic resources are key elements of political cartoons due to the inherent political communication they convey. It is proved that adequate interpretation of the three aspects of meaning is a prerequisite for understanding the intended meaning of political cartoons. It is recommended that further research should be conducted to provide more insightful analyses of political cartoons from a multimodal perspective. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=Multimodal%20Discourse%20Analysis%20%28MDA%29" title="Multimodal Discourse Analysis (MDA)">Multimodal Discourse Analysis (MDA)</a>, <a href="https://publications.waset.org/abstracts/search?q=multimodal%20text" title=" multimodal text"> multimodal text</a>, <a href="https://publications.waset.org/abstracts/search?q=political%20cartoons" title=" political cartoons"> political cartoons</a>, <a href="https://publications.waset.org/abstracts/search?q=visual%20modality" title=" visual modality"> visual modality</a> </p> <a href="https://publications.waset.org/abstracts/99614/analyzing-political-cartoons-in-arabic-language-media-after-trumps-jerusalem-move-a-multimodal-discourse-perspective" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/99614.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">240</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">200</span> Modeling of Building a Conceptual Scheme for Multimodal Freight Transportation Information System</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Gia%20Surguladze">Gia Surguladze</a>, <a href="https://publications.waset.org/abstracts/search?q=Nino%20Topuria"> Nino Topuria</a>, <a href="https://publications.waset.org/abstracts/search?q=Lily%20Petriashvili"> Lily Petriashvili</a>, <a href="https://publications.waset.org/abstracts/search?q=Giorgi%20Surguladze"> Giorgi Surguladze</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Modeling of building processes of a multimodal freight transportation support information system is discussed based on modern CASE technologies. Functional efficiencies of ports in the eastern part of the Black Sea are analyzed taking into account their ecological, seasonal, resource usage parameters. By resources, we mean capacities of berths, cranes, automotive transport, as well as work crews and neighbouring airports. For the purpose of designing database of computer support system for Managerial (Logistics) function, using Object-Role Modeling (ORM) tool (NORMA – Natural ORM Architecture) is proposed, after which Entity Relationship Model (ERM) is generated in automated process. The software is developed based on Process-Oriented and Service-Oriented architecture, in Visual Studio.NET environment. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=seaport%20resources" title="seaport resources">seaport resources</a>, <a href="https://publications.waset.org/abstracts/search?q=business-processes" title=" business-processes"> business-processes</a>, <a href="https://publications.waset.org/abstracts/search?q=multimodal%20transportation" title=" multimodal transportation"> multimodal transportation</a>, <a href="https://publications.waset.org/abstracts/search?q=CASE%20technology" title=" CASE technology"> CASE technology</a>, <a href="https://publications.waset.org/abstracts/search?q=object-role%20model" title=" object-role model"> object-role model</a>, <a href="https://publications.waset.org/abstracts/search?q=entity%20relationship%20model" title=" entity relationship model"> entity relationship model</a>, <a href="https://publications.waset.org/abstracts/search?q=SOA" title=" SOA"> SOA</a> </p> <a href="https://publications.waset.org/abstracts/32046/modeling-of-building-a-conceptual-scheme-for-multimodal-freight-transportation-information-system" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/32046.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">431</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">199</span> Multimodal Discourse, Logic of the Analysis of Transmedia Strategies</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Bianca%20Su%C3%A1rez%20Puerta">Bianca Suárez Puerta</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Multimodal discourse refers to a method of study the media continuum between reality, screens as a device, audience, author, and media as a production from the audience. For this study we used semantic differential, a method proposed in the sixties by Osgood, Suci and Tannenbaum, starts from the assumption that under each particular way of perceiving the world, in each singular idea, there is a common cultural meaning that organizes experiences. In relation to these shared symbolic dimension, this method has had significant results, as it focuses on breaking down the meaning of certain significant acts into series of statements that place the subjects in front of some concepts. In Colombia, in 2016, a tool was designed to measure the meaning of a multimodal production, specially the acts of sense of transmedia productions that managed to receive funds from the Ministry of ICT of Colombia, and also, to analyze predictable patterns that can be found in calls and funds aimed at the production of culture in Colombia, in the context of the peace agreement, as a request for expressions from a hegemonic place, seeking to impose a worldview. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=semantic%20differential" title="semantic differential">semantic differential</a>, <a href="https://publications.waset.org/abstracts/search?q=semiotics" title=" semiotics"> semiotics</a>, <a href="https://publications.waset.org/abstracts/search?q=transmedia" title=" transmedia"> transmedia</a>, <a href="https://publications.waset.org/abstracts/search?q=critical%20analysis%20of%20discourse" title=" critical analysis of discourse"> critical analysis of discourse</a> </p> <a href="https://publications.waset.org/abstracts/76655/multimodal-discourse-logic-of-the-analysis-of-transmedia-strategies" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/76655.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">205</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">198</span> Comics Scanlation and Publishing Houses Translation</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Sharifa%20Alshahrani">Sharifa Alshahrani</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Comics is a multimodal text wherein meaning is created by taking in all modes of expression at once. It uses two different semiotic modes, the verbal and the visual modes, together to make meaning and these different semiotic modes can be socially and culturally shaped to give meaning. Therefore, comics translation cannot treat comics as a monomodal text by translating only the verbal mode inside or outside the speech balloons as the cultural differences are encoded in the visual mode as well. Due to the development of the internet and editing software, comics translation is not anymore confined to the publishing houses and official translation as scanlation, or the fan translation took the initiative in translating comics for being emotionally attracted to the culture and genre. Scanlation is carried out by volunteering fans who translate out of passion. However, quality is one of the debatable issues relating to scanlation and fan translation. This study will investigate how the dynamic multimodal relationship in comics is exploited and interpreted in the translation by exploring the translation strategies and procedures adopted by the publishing houses and scanlation in interpreting comics into Arabic using three analytical frameworks; cultural references model, multimodal relation model and translation strategies and procedures models. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=comics" title="comics">comics</a>, <a href="https://publications.waset.org/abstracts/search?q=multimodality" title=" multimodality"> multimodality</a>, <a href="https://publications.waset.org/abstracts/search?q=translation" title=" translation"> translation</a>, <a href="https://publications.waset.org/abstracts/search?q=scanlation" title=" scanlation"> scanlation</a> </p> <a href="https://publications.waset.org/abstracts/142602/comics-scanlation-and-publishing-houses-translation" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/142602.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">212</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">197</span> Multimodal Discourse Analysis of Egyptian Political Movies: A Case Study of 'People at the Top Ahl Al Kemma' Movie</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Mariam%20Waheed%20Mekheimar">Mariam Waheed Mekheimar</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Nascent research is conducted to the advancement of discourse analysis to include different modes as images, sound, and text. The focus of this study will be to elucidate how images are embedded with texts in an audio-visual medium as cinema to send political messages; it also seeks to broaden our understanding of politics beyond a relatively narrow conceptualization of the 'political' through studying non-traditional discourses as the cinematic discourse. The aim herein is to develop a systematic approach to film analysis to capture political meanings in films. The method adopted in this research is Multimodal Discourse Analysis (MDA) focusing on embedding visuals with texts. As today's era is the era of images and that necessitates analyzing images. Drawing on the writings of O'Halloran, Kress and Van Leuween, John Bateman and Janina Wildfeuer, different modalities will be studied to understand how those modes interact in the cinematic discourse. 'People at the top movie' is selected as an example to unravel the political meanings throughout film tackling the cinematic representation of the notion of social justice. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=Egyptian%20cinema" title="Egyptian cinema">Egyptian cinema</a>, <a href="https://publications.waset.org/abstracts/search?q=multimodal%20discourse%20analysis" title=" multimodal discourse analysis"> multimodal discourse analysis</a>, <a href="https://publications.waset.org/abstracts/search?q=people%20at%20the%20top" title=" people at the top"> people at the top</a>, <a href="https://publications.waset.org/abstracts/search?q=social%20justice" title=" social justice "> social justice </a> </p> <a href="https://publications.waset.org/abstracts/70273/multimodal-discourse-analysis-of-egyptian-political-movies-a-case-study-of-people-at-the-top-ahl-al-kemma-movie" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/70273.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">422</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">196</span> Multimodal Convolutional Neural Network for Musical Instrument Recognition</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Yagya%20Raj%20Pandeya">Yagya Raj Pandeya</a>, <a href="https://publications.waset.org/abstracts/search?q=Joonwhoan%20Lee"> Joonwhoan Lee</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The dynamic behavior of music and video makes it difficult to evaluate musical instrument playing in a video by computer system. Any television or film video clip with music information are rich sources for analyzing musical instruments using modern machine learning technologies. In this research, we integrate the audio and video information sources using convolutional neural network (CNN) and pass network learned features through recurrent neural network (RNN) to preserve the dynamic behaviors of audio and video. We use different pre-trained CNN for music and video feature extraction and then fine tune each model. The music network use 2D convolutional network and video network use 3D convolution (C3D). Finally, we concatenate each music and video feature by preserving the time varying features. The long short term memory (LSTM) network is used for long-term dynamic feature characterization and then use late fusion with generalized mean. The proposed network performs better performance to recognize the musical instrument using audio-video multimodal neural network. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=multimodal" title="multimodal">multimodal</a>, <a href="https://publications.waset.org/abstracts/search?q=3D%20convolution" title=" 3D convolution"> 3D convolution</a>, <a href="https://publications.waset.org/abstracts/search?q=music-video%20feature%20extraction" title=" music-video feature extraction"> music-video feature extraction</a>, <a href="https://publications.waset.org/abstracts/search?q=generalized%20mean" title=" generalized mean"> generalized mean</a> </p> <a href="https://publications.waset.org/abstracts/104041/multimodal-convolutional-neural-network-for-musical-instrument-recognition" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/104041.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">215</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">195</span> An Edusemiotic Approach to Multimodal Poetry Teaching for Afrikaans</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Kruger%20Uys">Kruger Uys</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Poetry analysis plays a vital role in promoting critical thinking, literary appreciation, and language skills among learners. This paper proposes an innovative multimodal teaching approach that combines traditional textual analysis of poems with multimodal educational semiotic analysis of animated poetry films. The aim is to present a methodological framework through which poetry concepts and elements, along with the visual and auditory components in animated poetry films, can be comprehensively illuminated. Traditional textual analysis involves close reading, linguistic examination, and thematic exploration to identify, discuss, and apply poetry concepts. When combined with a multimodal edusemiotic analysis of the semiotic signs and codes present in animated poetry films, new perspectives emerge that enrich the interpretation of poetry. Furthermore, the proposed integrated approach, as prescribed by CAPS, enhances a holistic understanding of poetry terminology and elements, as well as complex linguistic and visual patterns that promote visual literacy, refined data interpretation skills, and learner engagement in the poetry classroom. To illustrate this phenomenon, the poem My mamma is bossies (My mom’s bonkers) by Jeanne Goosen (prescribed for Grade 10 Afrikaans Home Language learners in the CAPS curriculum) will be discussed. This study aims to contribute to the existing Afrikaans poetry curriculum but also equip all language educators to cultivate poetry appreciation, critical thinking, and creativity among learners in the ever-evolving landscape of education. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=edusemiotics" title="edusemiotics">edusemiotics</a>, <a href="https://publications.waset.org/abstracts/search?q=multimodality" title=" multimodality"> multimodality</a>, <a href="https://publications.waset.org/abstracts/search?q=poetry%20education" title=" poetry education"> poetry education</a>, <a href="https://publications.waset.org/abstracts/search?q=animated%20poetry%20films" title=" animated poetry films"> animated poetry films</a> </p> <a href="https://publications.waset.org/abstracts/189264/an-edusemiotic-approach-to-multimodal-poetry-teaching-for-afrikaans" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/189264.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">24</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">194</span> Combined Optical Coherence Microscopy and Spectrally Resolved Multiphoton Microscopy</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Bjorn-Ole%20Meyer">Bjorn-Ole Meyer</a>, <a href="https://publications.waset.org/abstracts/search?q=Dominik%20Marti"> Dominik Marti</a>, <a href="https://publications.waset.org/abstracts/search?q=Peter%20E.%20Andersen"> Peter E. Andersen</a> </p> <p class="card-text"><strong>Abstract:</strong></p> A multimodal imaging system, combining spectrally resolved multiphoton microscopy (MPM) and optical coherence microscopy (OCM) is demonstrated. MPM and OCM are commonly integrated into multimodal imaging platforms to combine functional and morphological information. The MPM signals, such as two-photon fluorescence emission (TPFE) and signals created by second harmonic generation (SHG) are biomarkers which exhibit information on functional biological features such as the ratio of pyridine nucleotide (NAD(P)H) and flavin adenine dinucleotide (FAD) in the classification of cancerous tissue. While the spectrally resolved imaging allows for the study of biomarkers, using a spectrometer as a detector limits the imaging speed of the system significantly. To overcome those limitations, an OCM setup was added to the system, which allows for fast acquisition of structural information. Thus, after rapid imaging of larger specimens, navigation within the sample is possible. Subsequently, distinct features can be selected for further investigation using MPM. Additionally, by probing a different contrast, complementary information is obtained, and different biomarkers can be investigated. OCM images of tissue and cell samples are obtained, and distinctive features are evaluated using MPM to illustrate the benefits of the system. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=optical%20coherence%20microscopy" title="optical coherence microscopy">optical coherence microscopy</a>, <a href="https://publications.waset.org/abstracts/search?q=multiphoton%20microscopy" title=" multiphoton microscopy"> multiphoton microscopy</a>, <a href="https://publications.waset.org/abstracts/search?q=multimodal%20imaging" title=" multimodal imaging"> multimodal imaging</a>, <a href="https://publications.waset.org/abstracts/search?q=two-photon%20fluorescence%20emission" title=" two-photon fluorescence emission"> two-photon fluorescence emission</a> </p> <a href="https://publications.waset.org/abstracts/102337/combined-optical-coherence-microscopy-and-spectrally-resolved-multiphoton-microscopy" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/102337.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">511</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">193</span> The Effect of Normal Cervical Sagittal Configuration in the Management of Cervicogenic Dizziness: A 1-Year Randomized Controlled Study</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Moustafa%20Ibrahim%20Moustafa">Moustafa Ibrahim Moustafa</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The purpose of this study was to determine the immediate and long term effects of a multimodal program, with the addition of cervical sagittal curve restoration and forward head correction, on severity of dizziness, disability, frequency of dizziness, and severity of cervical pain. 72 patients with cervicogenic dizziness, definite hypolordotic cervical spine, and forward head posture were randomized to experimental or a control group. Both groups received the multimodal program, additionally, the study group received the Denneroll cervical traction. All outcome measures were measured at three intervals. The general linear model indicated a significant group × time effects in favor of experimental group on measures of anterior head translation (F=329.4 P < .0005), cervical lordosis (F=293.7 P < .0005), severity of dizziness (F=262.1 P < .0005), disability (F=248.9 P < .0005), frequency of dizziness (F=53.9 P < .0005), and severity of cervical pain (F=350.1 P < .0005). The addition of Dennroll cervical traction to a multimodal program can positively affect dizziness management outcomes. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=randomized%20controlled%20trial" title="randomized controlled trial">randomized controlled trial</a>, <a href="https://publications.waset.org/abstracts/search?q=traction" title=" traction"> traction</a>, <a href="https://publications.waset.org/abstracts/search?q=dizziness" title=" dizziness"> dizziness</a>, <a href="https://publications.waset.org/abstracts/search?q=cervical" title=" cervical"> cervical</a> </p> <a href="https://publications.waset.org/abstracts/1499/the-effect-of-normal-cervical-sagittal-configuration-in-the-management-of-cervicogenic-dizziness-a-1-year-randomized-controlled-study" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/1499.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">310</span> </span> </div> </div> <ul class="pagination"> <li class="page-item disabled"><span class="page-link">‹</span></li> <li class="page-item active"><span class="page-link">1</span></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=multimodal&page=2">2</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=multimodal&page=3">3</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=multimodal&page=4">4</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=multimodal&page=5">5</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=multimodal&page=6">6</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=multimodal&page=7">7</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=multimodal&page=8">8</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=multimodal&page=2" rel="next">›</a></li> </ul> </div> </main> <footer> <div id="infolinks" class="pt-3 pb-2"> <div class="container"> <div style="background-color:#f5f5f5;" class="p-3"> <div class="row"> <div class="col-md-2"> <ul class="list-unstyled"> About <li><a href="https://waset.org/page/support">About Us</a></li> <li><a href="https://waset.org/page/support#legal-information">Legal</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/WASET-16th-foundational-anniversary.pdf">WASET celebrates its 16th foundational anniversary</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Account <li><a href="https://waset.org/profile">My Account</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Explore <li><a href="https://waset.org/disciplines">Disciplines</a></li> <li><a href="https://waset.org/conferences">Conferences</a></li> <li><a href="https://waset.org/conference-programs">Conference Program</a></li> <li><a href="https://waset.org/committees">Committees</a></li> <li><a href="https://publications.waset.org">Publications</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Research <li><a href="https://publications.waset.org/abstracts">Abstracts</a></li> <li><a href="https://publications.waset.org">Periodicals</a></li> <li><a href="https://publications.waset.org/archive">Archive</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Open Science <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Science-Philosophy.pdf">Open Science Philosophy</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Science-Award.pdf">Open Science Award</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Society-Open-Science-and-Open-Innovation.pdf">Open Innovation</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Postdoctoral-Fellowship-Award.pdf">Postdoctoral Fellowship Award</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Scholarly-Research-Review.pdf">Scholarly Research Review</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Support <li><a href="https://waset.org/page/support">Support</a></li> <li><a href="https://waset.org/profile/messages/create">Contact Us</a></li> <li><a href="https://waset.org/profile/messages/create">Report Abuse</a></li> </ul> </div> </div> </div> </div> </div> <div class="container text-center"> <hr style="margin-top:0;margin-bottom:.3rem;"> <a href="https://creativecommons.org/licenses/by/4.0/" target="_blank" class="text-muted small">Creative Commons Attribution 4.0 International License</a> <div id="copy" class="mt-2">© 2024 World Academy of Science, Engineering and Technology</div> </div> </footer> <a href="javascript:" id="return-to-top"><i class="fas fa-arrow-up"></i></a> <div class="modal" id="modal-template"> <div class="modal-dialog"> <div class="modal-content"> <div class="row m-0 mt-1"> <div class="col-md-12"> <button type="button" class="close" data-dismiss="modal" aria-label="Close"><span aria-hidden="true">×</span></button> </div> </div> <div class="modal-body"></div> </div> </div> </div> <script src="https://cdn.waset.org/static/plugins/jquery-3.3.1.min.js"></script> <script src="https://cdn.waset.org/static/plugins/bootstrap-4.2.1/js/bootstrap.bundle.min.js"></script> <script src="https://cdn.waset.org/static/js/site.js?v=150220211556"></script> <script> jQuery(document).ready(function() { /*jQuery.get("https://publications.waset.org/xhr/user-menu", function (response) { jQuery('#mainNavMenu').append(response); });*/ jQuery.get({ url: "https://publications.waset.org/xhr/user-menu", cache: false }).then(function(response){ jQuery('#mainNavMenu').append(response); }); }); </script> </body> </html>