Search results for: multimodal platforms
class="dropdown-item" href="https://publications.waset.org/abstracts">Abstracts</a> <a class="dropdown-item" href="https://publications.waset.org">Periodicals</a> <a class="dropdown-item" href="https://publications.waset.org/archive">Archive</a> </div> </li> <li class="nav-item"> <a class="nav-link" href="https://waset.org/page/support" title="Support">Support</a> </li> </ul> </div> </div> </nav> </div> </header> <main> <div class="container mt-4"> <div class="row"> <div class="col-md-9 mx-auto"> <form method="get" action="https://publications.waset.org/abstracts/search"> <div id="custom-search-input"> <div class="input-group"> <i class="fas fa-search"></i> <input type="text" class="search-query" name="q" placeholder="Author, Title, Abstract, Keywords" value="multimodal platforms"> <input type="submit" class="btn_search" value="Search"> </div> </div> </form> </div> </div> <div class="row mt-3"> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Commenced</strong> in January 2007</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Frequency:</strong> Monthly</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Edition:</strong> International</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Paper Count:</strong> 1123</div> </div> </div> </div> <h1 class="mt-3 mb-3 text-center" style="font-size:1.6rem;">Search results for: multimodal platforms</h1> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1123</span> Identity Verification Based on Multimodal Machine Learning on Red Green Blue (RGB) Red Green Blue-Depth (RGB-D) Voice Data</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=LuoJiaoyang">LuoJiaoyang</a>, <a href="https://publications.waset.org/abstracts/search?q=Yu%20Hongyang"> Yu Hongyang</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In this paper, we experimented with a new approach to multimodal identification using RGB, RGB-D and voice data. The multimodal combination of RGB and voice data has been applied in tasks such as emotion recognition and has shown good results and stability, and it is also the same in identity recognition tasks. We believe that the data of different modalities can enhance the effect of the model through mutual reinforcement. We try to increase the three modalities on the basis of the dual modalities and try to improve the effectiveness of the network by increasing the number of modalities. We also implemented the single-modal identification system separately, tested the data of these different modalities under clean and noisy conditions, and compared the performance with the multimodal model. In the process of designing the multimodal model, we tried a variety of different fusion strategies and finally chose the fusion method with the best performance. The experimental results show that the performance of the multimodal system is better than that of the single modality, especially in dealing with noise, and the multimodal system can achieve an average improvement of 5%. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=multimodal" title="multimodal">multimodal</a>, <a href="https://publications.waset.org/abstracts/search?q=three%20modalities" title=" three modalities"> three modalities</a>, <a href="https://publications.waset.org/abstracts/search?q=RGB-D" title=" RGB-D"> RGB-D</a>, <a href="https://publications.waset.org/abstracts/search?q=identity%20verification" title=" identity verification"> identity verification</a> </p> <a href="https://publications.waset.org/abstracts/163265/identity-verification-based-on-multimodal-machine-learning-on-red-green-blue-rgb-red-green-blue-depth-rgb-d-voice-data" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/163265.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">70</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1122</span> TMIF: Transformer-Based Multi-Modal Interactive Fusion for Rumor Detection</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Jiandong%20Lv">Jiandong Lv</a>, <a href="https://publications.waset.org/abstracts/search?q=Xingang%20Wang"> Xingang Wang</a>, <a href="https://publications.waset.org/abstracts/search?q=Cuiling%20Shao"> Cuiling Shao</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The rapid development of social media platforms has made it one of the important news sources. While it provides people with convenient real-time communication channels, fake news and rumors are also spread rapidly through social media platforms, misleading the public and even causing bad social impact in view of the slow speed and poor consistency of artificial rumor detection. We propose an end-to-end rumor detection model-TIMF, which captures the dependencies between multimodal data based on the interactive attention mechanism, uses a transformer for cross-modal feature sequence mapping and combines hybrid fusion strategies to obtain decision results. This paper verifies two multi-modal rumor detection datasets and proves the superior performance and early detection performance of the proposed model. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=hybrid%20fusion" title="hybrid fusion">hybrid fusion</a>, <a href="https://publications.waset.org/abstracts/search?q=multimodal%20fusion" title=" multimodal fusion"> multimodal fusion</a>, <a href="https://publications.waset.org/abstracts/search?q=rumor%20detection" title=" rumor detection"> rumor detection</a>, <a href="https://publications.waset.org/abstracts/search?q=social%20media" title=" social media"> social media</a>, <a href="https://publications.waset.org/abstracts/search?q=transformer" title=" transformer"> transformer</a> </p> <a href="https://publications.waset.org/abstracts/141806/tmif-transformer-based-multi-modal-interactive-fusion-for-rumor-detection" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/141806.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">246</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1121</span> A Comparative Study on Multimodal Metaphors in Public Service Advertising of China and Germany</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Xing%20Lyu">Xing Lyu</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Multimodal metaphor promotes the further development and refinement of multimodal discourse study. Cultural aspects matter a lot not only in creating but also in comprehending multimodal metaphor. By analyzing the target domain and the source domain in 10 public service advertisements of China and Germany about environmental protection, this paper compares the source when the target is alike in each multimodal metaphor in order to seek similarities and differences across cultures. The findings are as follows: first, the multimodal metaphors center around three major topics: the earth crisis, consequences of environmental damage, and appeal for environmental protection; second, the multimodal metaphors mainly grounded in three universal conceptual metaphors which focused on high level is up; earth is mother and all lives are precious. However, there are five Chinese culture-specific multimodal metaphors which are not discovered in Germany ads: east is high leve; a purposeful life is a journey; a nation is a person; good is clean, and water is mother. Since metaphors are excellent instruments on studying ideology, this study can be helpful on intercultural/cross-cultural communication. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=multimodal%20metaphor" title="multimodal metaphor">multimodal metaphor</a>, <a href="https://publications.waset.org/abstracts/search?q=cultural%20aspects" title=" cultural aspects"> cultural aspects</a>, <a href="https://publications.waset.org/abstracts/search?q=public%20service%20advertising" title=" public service advertising"> public service advertising</a>, <a href="https://publications.waset.org/abstracts/search?q=cross-cultural%20communication" title=" cross-cultural communication"> cross-cultural communication</a> </p> <a href="https://publications.waset.org/abstracts/112889/a-comparative-study-on-multimodal-metaphors-in-public-service-advertising-of-china-and-germany" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/112889.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">174</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1120</span> Transmedia and Platformized Political Discourse in a Growing Democracy: A Study of Nigeria’s 2023 General Elections</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Tunde%20Ope-Davies">Tunde Ope-Davies</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Transmediality and platformization as online content-sharing protocols have continued to accentuate the growing impact of the unprecedented digital revolution across the world. The rapid transformation across all sectors as a result of this revolution has continued to spotlight the increasing importance of new media technologies in redefining and reshaping the rhythm and dynamics of our private and public discursive practices. Equally, social and political activities are being impacted daily through the creation and transmission of political discourse content through multi-channel platforms such as mobile telephone communication, social media networks and the internet. It has been observed that digital platforms have become central to the production, processing, and distribution of multimodal social data and cultural content. The platformization paradigm thus underpins our understanding of how digital platforms enhance the production and heterogenous distribution of media and cultural content through these platforms and how this process facilitates socioeconomic and political activities. The use of multiple digital platforms to share and transmit political discourse material synchronously and asynchronously has gained some exciting momentum in the last few years. Nigeria’s 2023 general elections amplified the usage of social media and other online platforms as tools for electioneering campaigns, socio-political mobilizations and civic engagement. The study, therefore, focuses on transmedia and platformed political discourse as a new strategy to promote political candidates and their manifesto in order to mobilize support and woo voters. This innovative transmedia digital discourse model involves a constellation of online texts and images transmitted through different online platforms almost simultaneously. 
1119. The Optimization of Decision Rules in Multimodal Decision-Level Fusion Scheme
Authors: Andrey V. Timofeev, Dmitry V. Egorov
Abstract: This paper introduces an original method for parametric optimization of the structure of a multimodal decision-level fusion scheme, which combines the partial solutions of a classification task obtained from an assembly of mono-modal classifiers. The result is a multimodal fusion classifier with the minimum value of the total error rate.
Keywords: classification accuracy, fusion solution, total error rate, multimodal fusion classifier
Procedia: https://publications.waset.org/abstracts/26088/the-optimization-of-decision-rules-in-multimodal-decision-level-fusion-scheme | PDF: https://publications.waset.org/abstracts/26088.pdf | Downloads: 466
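The abstract states the optimization goal (minimum total error rate) without describing the procedure. A brute-force stand-in that searches decision-level fusion weights over a simplex grid conveys the idea; the data, grid step, and weighting rule below are synthetic assumptions, not the paper's method.

```python
import itertools
import numpy as np

def total_error_rate(weights, scores, labels):
    """Total error of a weighted decision-level fusion of mono-modal classifiers.

    scores: list of (n_samples, n_classes) posterior arrays, one per modality.
    """
    fused = sum(w * s for w, s in zip(weights, scores))
    return float(np.mean(fused.argmax(axis=1) != labels))

def grid_search_weights(scores, labels, step=0.1):
    """Exhaustive search over the three-modality weight simplex.

    A simple stand-in for the paper's parametric optimization, which the
    abstract does not detail.
    """
    best_w, best_err = None, float("inf")
    grid = np.arange(0.0, 1.0 + step / 2, step)
    for w1, w2 in itertools.product(grid, repeat=2):
        if w1 + w2 > 1.0 + 1e-9:
            continue  # stay on the simplex: w1 + w2 + w3 = 1
        w = (w1, w2, 1.0 - w1 - w2)
        err = total_error_rate(w, scores, labels)
        if err < best_err:
            best_w, best_err = w, err
    return best_w, best_err

# Synthetic example: three mono-modal classifiers of varying quality.
rng = np.random.default_rng(1)
labels = rng.integers(0, 3, size=200)
scores = [np.eye(3)[labels] * q + rng.random((200, 3)) for q in (0.5, 0.8, 1.1)]
print(grid_search_weights(scores, labels))
```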
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=classification%20accuracy" title="classification accuracy">classification accuracy</a>, <a href="https://publications.waset.org/abstracts/search?q=fusion%20solution" title=" fusion solution"> fusion solution</a>, <a href="https://publications.waset.org/abstracts/search?q=total%20error%20rate" title=" total error rate"> total error rate</a>, <a href="https://publications.waset.org/abstracts/search?q=multimodal%20fusion%20classifier" title=" multimodal fusion classifier"> multimodal fusion classifier</a> </p> <a href="https://publications.waset.org/abstracts/26088/the-optimization-of-decision-rules-in-multimodal-decision-level-fusion-scheme" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/26088.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">466</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1118</span> Multimodal Data Fusion Techniques in Audiovisual Speech Recognition</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Hadeer%20M.%20Sayed">Hadeer M. Sayed</a>, <a href="https://publications.waset.org/abstracts/search?q=Hesham%20E.%20El%20Deeb"> Hesham E. El Deeb</a>, <a href="https://publications.waset.org/abstracts/search?q=Shereen%20A.%20Taie"> Shereen A. Taie</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In the big data era, we are facing a diversity of datasets from different sources in different domains that describe a single life event. These datasets consist of multiple modalities, each of which has a different representation, distribution, scale, and density. Multimodal fusion is the concept of integrating information from multiple modalities in a joint representation with the goal of predicting an outcome through a classification task or regression task. In this paper, multimodal fusion techniques are classified into two main classes: model-agnostic techniques and model-based approaches. It provides a comprehensive study of recent research in each class and outlines the benefits and limitations of each of them. Furthermore, the audiovisual speech recognition task is expressed as a case study of multimodal data fusion approaches, and the open issues through the limitations of the current studies are presented. This paper can be considered a powerful guide for interested researchers in the field of multimodal data fusion and audiovisual speech recognition particularly. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=multimodal%20data" title="multimodal data">multimodal data</a>, <a href="https://publications.waset.org/abstracts/search?q=data%20fusion" title=" data fusion"> data fusion</a>, <a href="https://publications.waset.org/abstracts/search?q=audio-visual%20speech%20recognition" title=" audio-visual speech recognition"> audio-visual speech recognition</a>, <a href="https://publications.waset.org/abstracts/search?q=neural%20networks" title=" neural networks"> neural networks</a> </p> <a href="https://publications.waset.org/abstracts/157362/multimodal-data-fusion-techniques-in-audiovisual-speech-recognition" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/157362.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">112</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1117</span> Combined Optical Coherence Microscopy and Spectrally Resolved Multiphoton Microscopy</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Bjorn-Ole%20Meyer">Bjorn-Ole Meyer</a>, <a href="https://publications.waset.org/abstracts/search?q=Dominik%20Marti"> Dominik Marti</a>, <a href="https://publications.waset.org/abstracts/search?q=Peter%20E.%20Andersen"> Peter E. Andersen</a> </p> <p class="card-text"><strong>Abstract:</strong></p> A multimodal imaging system, combining spectrally resolved multiphoton microscopy (MPM) and optical coherence microscopy (OCM) is demonstrated. MPM and OCM are commonly integrated into multimodal imaging platforms to combine functional and morphological information. The MPM signals, such as two-photon fluorescence emission (TPFE) and signals created by second harmonic generation (SHG) are biomarkers which exhibit information on functional biological features such as the ratio of pyridine nucleotide (NAD(P)H) and flavin adenine dinucleotide (FAD) in the classification of cancerous tissue. While the spectrally resolved imaging allows for the study of biomarkers, using a spectrometer as a detector limits the imaging speed of the system significantly. To overcome those limitations, an OCM setup was added to the system, which allows for fast acquisition of structural information. Thus, after rapid imaging of larger specimens, navigation within the sample is possible. Subsequently, distinct features can be selected for further investigation using MPM. Additionally, by probing a different contrast, complementary information is obtained, and different biomarkers can be investigated. OCM images of tissue and cell samples are obtained, and distinctive features are evaluated using MPM to illustrate the benefits of the system. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=optical%20coherence%20microscopy" title="optical coherence microscopy">optical coherence microscopy</a>, <a href="https://publications.waset.org/abstracts/search?q=multiphoton%20microscopy" title=" multiphoton microscopy"> multiphoton microscopy</a>, <a href="https://publications.waset.org/abstracts/search?q=multimodal%20imaging" title=" multimodal imaging"> multimodal imaging</a>, <a href="https://publications.waset.org/abstracts/search?q=two-photon%20fluorescence%20emission" title=" two-photon fluorescence emission"> two-photon fluorescence emission</a> </p> <a href="https://publications.waset.org/abstracts/102337/combined-optical-coherence-microscopy-and-spectrally-resolved-multiphoton-microscopy" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/102337.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">511</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1116</span> OPEN-EmoRec-II-A Multimodal Corpus of Human-Computer Interaction</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Stefanie%20Rukavina">Stefanie Rukavina</a>, <a href="https://publications.waset.org/abstracts/search?q=Sascha%20Gruss"> Sascha Gruss</a>, <a href="https://publications.waset.org/abstracts/search?q=Steffen%20Walter"> Steffen Walter</a>, <a href="https://publications.waset.org/abstracts/search?q=Holger%20Hoffmann"> Holger Hoffmann</a>, <a href="https://publications.waset.org/abstracts/search?q=Harald%20C.%20Traue"> Harald C. Traue</a> </p> <p class="card-text"><strong>Abstract:</strong></p> OPEN-EmoRecII is an open multimodal corpus with experimentally induced emotions. In the first half of the experiment, emotions were induced with standardized picture material and in the second half during a human-computer interaction (HCI), realized with a wizard-of-oz design. The induced emotions are based on the dimensional theory of emotions (valence, arousal and dominance). These emotional sequences - recorded with multimodal data (mimic reactions, speech, audio and physiological reactions) during a naturalistic-like HCI-environment one can improve classification methods on a multimodal level. This database is the result of an HCI-experiment, for which 30 subjects in total agreed to a publication of their data including the video material for research purposes. The now available open corpus contains sensory signal of: video, audio, physiology (SCL, respiration, BVP, EMG Corrugator supercilii, EMG Zygomaticus Major) and mimic annotations. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=open%20multimodal%20emotion%20corpus" title="open multimodal emotion corpus">open multimodal emotion corpus</a>, <a href="https://publications.waset.org/abstracts/search?q=annotated%20labels" title=" annotated labels"> annotated labels</a>, <a href="https://publications.waset.org/abstracts/search?q=intelligent%20interaction" title=" intelligent interaction"> intelligent interaction</a> </p> <a href="https://publications.waset.org/abstracts/29365/open-emorec-ii-a-multimodal-corpus-of-human-computer-interaction" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/29365.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">416</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1115</span> New Approach for Constructing a Secure Biometric Database</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=A.%20Kebbeb">A. Kebbeb</a>, <a href="https://publications.waset.org/abstracts/search?q=M.%20Mostefai"> M. Mostefai</a>, <a href="https://publications.waset.org/abstracts/search?q=F.%20Benmerzoug"> F. Benmerzoug</a>, <a href="https://publications.waset.org/abstracts/search?q=Y.%20Chahir"> Y. Chahir</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The multimodal biometric identification is the combination of several biometric systems. The challenge of this combination is to reduce some limitations of systems based on a single modality while significantly improving performance. In this paper, we propose a new approach to the construction and the protection of a multimodal biometric database dedicated to an identification system. We use a topological watermarking to hide the relation between face image and the registered descriptors extracted from other modalities of the same person for more secure user identification. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=biometric%20databases" title="biometric databases">biometric databases</a>, <a href="https://publications.waset.org/abstracts/search?q=multimodal%20biometrics" title=" multimodal biometrics"> multimodal biometrics</a>, <a href="https://publications.waset.org/abstracts/search?q=security%20authentication" title=" security authentication"> security authentication</a>, <a href="https://publications.waset.org/abstracts/search?q=digital%20watermarking" title=" digital watermarking"> digital watermarking</a> </p> <a href="https://publications.waset.org/abstracts/3126/new-approach-for-constructing-a-secure-biometric-database" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/3126.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">391</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1114</span> Teaching and Learning with Picturebooks: Developing Multimodal Literacy with a Community of Primary School Teachers in China</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Fuling%20Deng">Fuling Deng</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Today’s children are frequently exposed to multimodal texts that adopt diverse modes to communicate myriad meanings within different cultural contexts. To respond to the new textual landscape, scholars have considered new literacy theories which propose picturebooks as important educational resources. Picturebooks are multimodal, with their meaning conveyed through the synchronisation of multiple modes, including linguistic, visual, spatial, and gestural acting as access to multimodal literacy. Picturebooks have been popular reading materials in primary educational settings in China. However, often viewed as “easy” texts directed at the youngest readers, picturebooks remain on the margins of Chinese upper primary classrooms, where they are predominantly used for linguistic tasks, with little value placed on their multimodal affordances. Practices with picturebooks in the upper grades in Chinese primary schools also encounter many challenges associated with the curation of texts for use, designing curriculum, and assessment. To respond to these issues, a qualitative study was conducted with a community of Chinese primary teachers using multi-methods such as interviews, focus groups, and documents. The findings showed the impact of the teachers’ increased awareness of picturebooks' multimodal affordances on their pedagogical decisions in using picturebooks as educational resources in upper primary classrooms. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=picturebook%20education" title="picturebook education">picturebook education</a>, <a href="https://publications.waset.org/abstracts/search?q=multimodal%20literacy" title=" multimodal literacy"> multimodal literacy</a>, <a href="https://publications.waset.org/abstracts/search?q=teachers%27%20response%20to%20contemporary%20picturebooks" title=" teachers' response to contemporary picturebooks"> teachers' response to contemporary picturebooks</a>, <a href="https://publications.waset.org/abstracts/search?q=community%20of%20practice" title=" community of practice"> community of practice</a> </p> <a href="https://publications.waset.org/abstracts/156547/teaching-and-learning-with-picturebooks-developing-multimodal-literacy-with-a-community-of-primary-school-teachers-in-china" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/156547.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">136</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1113</span> Multimodal Content: Fostering Students’ Language and Communication Competences</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Victoria%20L.%20Malakhova">Victoria L. Malakhova</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The research is devoted to multimodal content and its effectiveness in developing students’ linguistic and intercultural communicative competences as an indefeasible constituent of their future professional activity. Description of multimodal content both as a linguistic and didactic phenomenon makes the study relevant. The objective of the article is the analysis of creolized texts and the effect they have on fostering higher education students’ skills and their productivity. The main methods used are linguistic text analysis, qualitative and quantitative methods, deduction, generalization. The author studies texts with full and partial creolization, their features and role in composing multimodal textual space. The main verbal and non-verbal markers and paralinguistic means that enhance the linguo-pragmatic potential of creolized texts are covered. To reveal the efficiency of multimodal content application in English teaching, the author conducts an experiment among both undergraduate students and teachers. This allows specifying main functions of creolized texts in the process of language learning, detecting ways of enhancing students’ competences, and increasing their motivation. The described stages of using creolized texts can serve as an algorithm for work with multimodal content in teaching English as a foreign language. The findings contribute to improving the efficiency of the academic process. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=creolized%20text" title="creolized text">creolized text</a>, <a href="https://publications.waset.org/abstracts/search?q=English%20language%20learning" title=" English language learning"> English language learning</a>, <a href="https://publications.waset.org/abstracts/search?q=higher%20education" title=" higher education"> higher education</a>, <a href="https://publications.waset.org/abstracts/search?q=language%20and%20communication%20competences" title=" language and communication competences"> language and communication competences</a>, <a href="https://publications.waset.org/abstracts/search?q=multimodal%20content" title=" multimodal content"> multimodal content</a> </p> <a href="https://publications.waset.org/abstracts/151423/multimodal-content-fostering-students-language-and-communication-competences" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/151423.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">112</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1112</span> A Proposal of Multi-modal Teaching Model for College English</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Huang%20Yajing">Huang Yajing</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Multimodal discourse refers to the phenomenon of using various senses such as hearing, vision, and touch to communicate through various means and symbolic resources such as language, images, sounds, and movements. With the development of modern technology and multimedia, language and technology have become inseparable, and foreign language teaching is becoming more and more modal. Teacher-student communication resorts to multiple senses and uses multiple symbol systems to construct and interpret meaning. The classroom is a semiotic space where multimodal discourses are intertwined. College English multi-modal teaching is to rationally utilize traditional teaching methods while mobilizing and coordinating various modern teaching methods to form a joint force to promote teaching and learning. Multimodal teaching makes full and reasonable use of various meaning resources and can maximize the advantages of multimedia and network environments. Based upon the above theories about multimodal discourse and multimedia technology, the present paper will propose a multi-modal teaching model for college English in China. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=multimodal%20discourse" title="multimodal discourse">multimodal discourse</a>, <a href="https://publications.waset.org/abstracts/search?q=multimedia%20technology" title=" multimedia technology"> multimedia technology</a>, <a href="https://publications.waset.org/abstracts/search?q=English%20education" title=" English education"> English education</a>, <a href="https://publications.waset.org/abstracts/search?q=applied%20linguistics" title=" applied linguistics"> applied linguistics</a> </p> <a href="https://publications.waset.org/abstracts/183810/a-proposal-of-multi-modal-teaching-model-for-college-english" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/183810.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">68</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1111</span> Exploring Multimodal Communication: Intersections of Language, Gesture, and Technology</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Rasha%20Ali%20Dheyab">Rasha Ali Dheyab</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In today's increasingly interconnected and technologically-driven world, communication has evolved beyond traditional verbal exchanges. This paper delves into the fascinating realm of multimodal communication, a dynamic field at the intersection of linguistics, gesture studies, and technology. The study of how humans convey meaning through a combination of spoken language, gestures, facial expressions, and digital platforms has gained prominence as our modes of interaction continue to diversify. This exploration begins by examining the foundational theories in linguistics and gesture studies, tracing their historical development and mutual influences. It further investigates the role of nonverbal cues, such as gestures and facial expressions, in augmenting and sometimes even altering the meanings conveyed by spoken language. Additionally, the paper delves into the modern technological landscape, where emojis, GIFs, and other digital symbols have emerged as new linguistic tools, reshaping the ways in which we communicate and express emotions. The interaction between traditional and digital modes of communication is a central focus of this study. The paper investigates how technology has not only introduced new modes of expression but has also influenced the adaptation of existing linguistic and gestural patterns in online discourse. The emergence of virtual reality and augmented reality environments introduces yet another layer of complexity to multimodal communication, offering new avenues for studying how humans navigate and negotiate meaning in immersive digital spaces. Through a combination of literature review, case studies, and theoretical analysis, this paper seeks to shed light on the intricate interplay between language, gesture, and technology in the realm of multimodal communication. 
1110. An Exploration of Promoting EFL Students’ Language Learning Autonomy Using Multimodal Teaching - A Case Study of an Art University in Western China
Authors: Dian Guan
Abstract: With the wide application of multimedia and the Internet, the development of teaching theories, and the implementation of teaching reforms, many different modes of university English classroom teaching have emerged. University English teaching is changing from a traditional mode based on conversation and text to a multimodal mode incorporating discussion, pictures, audio, film, and more. Applying such teaching modes is conducive to cultivating lifelong learning skills, which can also be described as learners' autonomous learning skills. Learner autonomy has a significant impact on English learning. However, many university students, especially art and design students, do not know how to learn on their own. When they enter university, their English foundation is relatively weak because they have always memorized the language in a traditional way, which to a certain extent neglects the cultivation of learner autonomy. As a result, the autonomous learning ability of most university students is unsatisfactory. The participants in this study were 60 students and one teacher in their first year at a university in western China. Observations and interviews were conducted inside and outside the classroom to understand the impact of a multimodal model of university English teaching on students' autonomous learning ability. The results were analyzed, and it was found that the multimodal teaching model significantly affected learner autonomy. Incorporating classroom presentations and poster exhibitions into multimodal teaching can increase learners' interest in learning and enhance their learning ability outside the classroom. However, further exploration is needed to develop multimodal teaching materials and evaluate multimodal teaching outcomes. Despite its limitations, the study adopts a scientific research method to analyze the impact of the multimodal teaching mode on students' independent learning ability and puts forward a different outlook for further research on this topic.
Keywords: art university, EFL education, learner autonomy, multimodal pedagogy
Procedia: https://publications.waset.org/abstracts/176520/an-exploration-of-promoting-efl-students-language-learning-autonomy-using-multimodal-teaching-a-case-study-of-an-art-university-in-western-china | PDF: https://publications.waset.org/abstracts/176520.pdf | Downloads: 101
1109. Multimodal Characterization of Emotion within Multimedia Space
Authors: Dayo Samuel Banjo, Connice Trimmingham, Niloofar Yousefi, Nitin Agarwal
Abstract: Technological advancement and its omnipresent connection have pushed humans past the boundaries and limitations of a computer screen, physical state, or geographical location. It has provided a wealth of avenues that facilitate human-computer interaction that was once inconceivable, such as audio and body-language detection. Given the complex, multimodal nature of emotions, it becomes vital to study human-computer interaction, as it is the starting point for a thorough understanding of the emotional state of users and, in the context of social networks, of the producers of multimodal information. This study first confirms the higher classification accuracy of multimodal emotion detection systems compared to unimodal solutions. Second, it explores the characterization of multimedia content produced according to its emotions and the coherence of emotion across different modalities, utilizing deep learning models to classify emotion in each modality.
Keywords: affective computing, deep learning, emotion recognition, multimodal
Procedia: https://publications.waset.org/abstracts/157830/multimodal-characterization-of-emotion-within-multimedia-space | PDF: https://publications.waset.org/abstracts/157830.pdf | Downloads: 158
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=affective%20computing" title="affective computing">affective computing</a>, <a href="https://publications.waset.org/abstracts/search?q=deep%20learning" title=" deep learning"> deep learning</a>, <a href="https://publications.waset.org/abstracts/search?q=emotion%20recognition" title=" emotion recognition"> emotion recognition</a>, <a href="https://publications.waset.org/abstracts/search?q=multimodal" title=" multimodal"> multimodal</a> </p> <a href="https://publications.waset.org/abstracts/157830/multimodal-characterization-of-emotion-within-multimedia-space" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/157830.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">158</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1108</span> Optimizing Multimodal Teaching Strategies for Enhanced Engagement and Performance</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Victor%20Milanes">Victor Milanes</a>, <a href="https://publications.waset.org/abstracts/search?q=Martha%20Hubertz"> Martha Hubertz</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In the wake of COVID-19, all aspects of life have been estranged, and humanity has been forced to shift toward a more technologically integrated mode of operation. Essential work such as Healthcare, business, and public policy are a few notable industries that were initially dependent upon face-to-face modality but have completely reimagined their operation style. Unique to these fields, education was particularly strained because academics, teachers, and professors alike were obligated to shift their curriculums online over the course of a few weeks while also maintaining the expectation that they were educating their students to a similar level accomplished pre-pandemic. This was notable as research indicates two key concepts: Students prefer face-to-face modality, and due to the disruption in academic continuity/style, there was a negative impact on student's overall education and performance. With these two principles in mind, this study aims to inquire what online strategies could be best employed by teachers to educate their students, as well as what strategies could be adopted in a multimodal setting if deemed necessary by the instructor or outside convoluting factors (Such as the case of COVID-19, or a personal matter that demands the teacher's attention away from the classroom). Strategies and methods will be cross-analyzed via a ranking system derived from various recognized teaching assessments, in which engagement, retention, flexibility, interest, and performance are specifically accounted for. We expect to see an emphasis on positive social pressure as a dominant factor in the improved propensity for education, as well as a preference for visual aids across platforms, as research indicates most individuals are visual learners. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=technological%20integration" title="technological integration">technological integration</a>, <a href="https://publications.waset.org/abstracts/search?q=multimodal%20teaching" title=" multimodal teaching"> multimodal teaching</a>, <a href="https://publications.waset.org/abstracts/search?q=education" title=" education"> education</a>, <a href="https://publications.waset.org/abstracts/search?q=student%20engagement" title=" student engagement"> student engagement</a> </p> <a href="https://publications.waset.org/abstracts/172961/optimizing-multimodal-teaching-strategies-for-enhanced-engagement-and-performance" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/172961.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">61</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1107</span> Using Trip Planners in Developing Proper Transportation Behavior</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Grzegorz%20Sierpi%C5%84ski">Grzegorz Sierpiński</a>, <a href="https://publications.waset.org/abstracts/search?q=Ireneusz%20Celi%C5%84ski"> Ireneusz Celiński</a>, <a href="https://publications.waset.org/abstracts/search?q=Marcin%20Staniek"> Marcin Staniek</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The article discusses multi modal mobility in contemporary societies as a main planning and organization issue in the functioning of administrative bodies, a problem which really exists in the space of contemporary cities in terms of shaping modern transport systems. The article presents classification of available resources and initiatives undertaken for developing multi modal mobility. Solutions can be divided into three groups of measures–physical measures in the form of changes of the transport network infrastructure, organizational ones (including transport policy) and information measures. The latter ones include in particular direct support for people travelling in the transport network by providing information about ways of using available means of transport. A special measure contributing to this end is a trip planner. The article compares several selected planners. It includes a short description of the Green Travelling Project, which aims at developing a planner supporting environmentally friendly solutions in terms of transport network operation. The article summarizes preliminary findings of the project. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=mobility" title="mobility">mobility</a>, <a href="https://publications.waset.org/abstracts/search?q=modal%20split" title=" modal split"> modal split</a>, <a href="https://publications.waset.org/abstracts/search?q=multimodal%20trip" title=" multimodal trip"> multimodal trip</a>, <a href="https://publications.waset.org/abstracts/search?q=multimodal%20platforms" title=" multimodal platforms"> multimodal platforms</a>, <a href="https://publications.waset.org/abstracts/search?q=sustainable%20transport" title=" sustainable transport"> sustainable transport</a> </p> <a href="https://publications.waset.org/abstracts/15575/using-trip-planners-in-developing-proper-transportation-behavior" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/15575.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">411</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1106</span> Seismic Hazard Assessment of Offshore Platforms</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=F.%20D.%20Konstandakopoulou">F. D. Konstandakopoulou</a>, <a href="https://publications.waset.org/abstracts/search?q=G.%20A.%20Papagiannopoulos"> G. A. Papagiannopoulos</a>, <a href="https://publications.waset.org/abstracts/search?q=N.%20G.%20Pnevmatikos"> N. G. Pnevmatikos</a>, <a href="https://publications.waset.org/abstracts/search?q=G.%20D.%20Hatzigeorgiou"> G. D. Hatzigeorgiou</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This paper examines the effects of pile-soil-structure interaction on the dynamic response of offshore platforms under the action of near-fault earthquakes. Two offshore platforms models are investigated, one with completely fixed supports and one with piles which are clamped into deformable layered soil. The soil deformability for the second model is simulated using non-linear springs. These platform models are subjected to near-fault seismic ground motions. The role of fault mechanism on platforms’ response is additionally investigated, while the study also examines the effects of different angles of incidence of seismic records on the maximum response of each platform. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=hazard%20analysis" title="hazard analysis">hazard analysis</a>, <a href="https://publications.waset.org/abstracts/search?q=offshore%20platforms" title=" offshore platforms"> offshore platforms</a>, <a href="https://publications.waset.org/abstracts/search?q=earthquakes" title=" earthquakes"> earthquakes</a>, <a href="https://publications.waset.org/abstracts/search?q=safety" title=" safety"> safety</a> </p> <a href="https://publications.waset.org/abstracts/102575/seismic-hazard-assessment-of-offshore-platforms" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/102575.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">148</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1105</span> Multimodal Sentiment Analysis With Web Based Application</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Shreyansh%20Singh">Shreyansh Singh</a>, <a href="https://publications.waset.org/abstracts/search?q=Afroz%20Ahmed"> Afroz Ahmed</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Sentiment Analysis intends to naturally reveal the hidden mentality that we hold towards an entity. The total of this assumption over a populace addresses sentiment surveying and has various applications. Current text-based sentiment analysis depends on the development of word embeddings and Machine Learning models that take in conclusion from enormous text corpora. Sentiment Analysis from text is presently generally utilized for consumer loyalty appraisal and brand insight investigation. With the expansion of online media, multimodal assessment investigation is set to carry new freedoms with the appearance of integral information streams for improving and going past text-based feeling examination using the new transforms methods. Since supposition can be distinguished through compelling follows it leaves, like facial and vocal presentations, multimodal opinion investigation offers good roads for examining facial and vocal articulations notwithstanding the record or printed content. These methodologies use the Recurrent Neural Networks (RNNs) with the LSTM modes to increase their performance. In this study, we characterize feeling and the issue of multimodal assessment investigation and audit ongoing advancements in multimodal notion examination in various spaces, including spoken surveys, pictures, video websites, human-machine, and human-human connections. Difficulties and chances of this arising field are additionally examined, promoting our theory that multimodal feeling investigation holds critical undiscovered potential. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=sentiment%20analysis" title="sentiment analysis">sentiment analysis</a>, <a href="https://publications.waset.org/abstracts/search?q=RNN" title=" RNN"> RNN</a>, <a href="https://publications.waset.org/abstracts/search?q=LSTM" title=" LSTM"> LSTM</a>, <a href="https://publications.waset.org/abstracts/search?q=word%20embeddings" title=" word embeddings"> word embeddings</a> </p> <a href="https://publications.waset.org/abstracts/150082/multimodal-sentiment-analysis-with-web-based-application" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/150082.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">119</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1104</span> Determinants of Customer Value in Online Retail Platforms</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Mikko%20H%C3%A4nninen">Mikko Hänninen</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This paper explores the effect online retail platforms have on customer behavior and retail patronage through an inductive multi-case study. Existing research on retail platforms and ecosystems generally focus on competition between platform members and most papers maintain a managerial perspective with customers seen mainly as merely one stakeholder of the value-exchange relationship. It is proposed that retail platforms change the nature of customer relationships compared to traditional brick-and-mortar or e-commerce retailers. With online retail platforms such as Alibaba, Amazon and Rakuten gaining increasing traction with their platform based business models, the purpose of this paper is to define retail platforms and look at how leading retail platforms are able to create value for their customers, in order to foster meaningful customer’ relationships. An analysis is conducted on the major global retail platforms with a focus specifically on understanding the tools in place for creating customer value in order to show how retail platforms create and maintain customer relationships for fostering customer loyalty. The results describe the opportunities and challenges retailers face when competing against platform based businesses and outline the advantages as well as disadvantages that platforms bring to individual consumers. Based on the inductive case research approach, five theoretical propositions on consumer behavior in online retail platforms are developed that also form the basis of further research with this research making both a practical as well as theoretical contribution to platform research streams. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=retail" title="retail">retail</a>, <a href="https://publications.waset.org/abstracts/search?q=platform" title=" platform"> platform</a>, <a href="https://publications.waset.org/abstracts/search?q=ecosystem" title=" ecosystem"> ecosystem</a>, <a href="https://publications.waset.org/abstracts/search?q=e-commerce" title=" e-commerce"> e-commerce</a>, <a href="https://publications.waset.org/abstracts/search?q=loyalty" title=" loyalty"> loyalty</a> </p> <a href="https://publications.waset.org/abstracts/56838/determinants-of-customer-value-in-online-retail-platforms" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/56838.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">283</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1103</span> Integrating Critical Stylistics and Visual Grammar: A Multimodal Stylistic Approach to the Analysis of Non-Literary Texts</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Shatha%20Khuzaee">Shatha Khuzaee</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The study develops multimodal stylistic approach to analyse a number of BBC online news articles reporting some key events from the so called ‘Arab Uprisings’. Critical stylistics (CS) and visual grammar (VG) provide insightful arguments to the ways ideology is projected through different verbal and visual modes, yet they are mode specific because they examine how each mode projects its meaning separately and do not attempt to clarify what happens intersemiotically when the two modes co-occur. Therefore, it is the task undertaken in this research to propose multimodal stylistic approach that addresses the issue of ideology construction when the two modes co-occur. Informed by functional grammar and social semiotics, the analysis attempts to integrate three linguistic models developed in critical stylistics, namely, transitivity choices, prioritizing and hypothesizing along with their visual equivalents adopted from visual grammar to investigate the way ideology is constructed, in multimodal text, when text/image participate and interrelate in the process of meaning making on the textual level of analysis. The analysis provides comprehensive theoretical and analytical elaborations on the different points of integration between CS linguistic models and VG equivalents which operate on the textual level of analysis to better account for ideology construction in news as non-literary multimodal texts. It is argued that the analysis well thought out a plan that would remark the first step towards the integration between the well-established linguistic models of critical stylistics and that of visual analysis to analyse multimodal texts on the textual level. Both approaches are compatible to produce multimodal stylistic approach because they intend to analyse text and image depending on whatever textual evidence is available. This supports the analysis maintain the rigor and replicability needed for a stylistic analysis like the one undertaken in this study. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=multimodality" title="multimodality">multimodality</a>, <a href="https://publications.waset.org/abstracts/search?q=stylistics" title=" stylistics"> stylistics</a>, <a href="https://publications.waset.org/abstracts/search?q=visual%20grammar" title=" visual grammar"> visual grammar</a>, <a href="https://publications.waset.org/abstracts/search?q=social%20semiotics" title=" social semiotics"> social semiotics</a>, <a href="https://publications.waset.org/abstracts/search?q=functional%20grammar" title=" functional grammar"> functional grammar</a> </p> <a href="https://publications.waset.org/abstracts/77486/integrating-critical-stylistics-and-visual-grammar-a-multimodal-stylistic-approach-to-the-analysis-of-non-literary-texts" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/77486.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">221</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1102</span> Social Learning and the Flipped Classroom</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Albin%20Wallace">Albin Wallace</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This paper examines the use of social learning platforms in conjunction with the emergent pedagogy of the ‘flipped classroom’. In particular the attributes of the social learning platform “Edmodo” is considered alongside the changes in the way in which online learning environments are being implemented, especially within British education. Some observations are made regarding the use and usefulness of these platforms along with a consideration of the increasingly decentralized nature of education in the United Kingdom. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=education" title="education">education</a>, <a href="https://publications.waset.org/abstracts/search?q=Edmodo" title=" Edmodo"> Edmodo</a>, <a href="https://publications.waset.org/abstracts/search?q=Internet" title=" Internet"> Internet</a>, <a href="https://publications.waset.org/abstracts/search?q=learning%20platforms" title=" learning platforms "> learning platforms </a> </p> <a href="https://publications.waset.org/abstracts/14705/social-learning-and-the-flipped-classroom" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/14705.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">544</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1101</span> Two Weeks of Multi-Modal Inpatient Treatment: Patients Suffering from Chronic Musculoskeletal Pain for over 12 Months</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=D.%20Schafer">D. Schafer</a>, <a href="https://publications.waset.org/abstracts/search?q=H.%20Booke"> H. Booke</a>, <a href="https://publications.waset.org/abstracts/search?q=R.%20Nordmeier"> R. 
Nordmeier</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Patients suffering from chronic musculoskeletal pain (> 12 months) are a challenging clientele for pain specialists. A multimodal approach, characterized by a two-week inpatient treatment, is often the ultimate therapeutic attempt. The lasting effects of such a multimodal approach were analyzed, especially since two weeks of inpatient therapy, although very intense, often seem too short to make a difference in patients suffering from chronic pain for years. The study includes 32 consecutive patients suffering from chronic pain for years who underwent a two-week multimodal inpatient pain treatment. Twelve months after discharge, each patient was interviewed to objectify any lasting effects. Pain was measured on admission and 12 months after discharge using the numeric rating scale (NRS). For statistics, a paired Student's t-test was used. Significance was defined as p < 0.05. The average intensity of pain on admission was 8.6 on the NRS. Twelve months after discharge, the intensity of pain was still reduced by an average of 48% (average NRS 4.4), p < 0.05. Despite this significant improvement in pain severity, two-thirds (66%) of the patients still judged their treatment as not sufficient. In conclusion, inpatient treatment of chronic pain has a long-lasting effect on the intensity of pain in patients suffering from chronic musculoskeletal pain for more than 12 months. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=chronic%20pain" title="chronic pain">chronic pain</a>, <a href="https://publications.waset.org/abstracts/search?q=inpatient%20treatment" title=" inpatient treatment"> inpatient treatment</a>, <a href="https://publications.waset.org/abstracts/search?q=multimodal%20pain%20treatment" title=" multimodal pain treatment"> multimodal pain treatment</a>, <a href="https://publications.waset.org/abstracts/search?q=musculoskeletal%20pain" title=" musculoskeletal pain"> musculoskeletal pain</a> </p> <a href="https://publications.waset.org/abstracts/130697/two-weeks-of-multi-modal-inpatient-treatment-patients-suffering-from-chronic-musculoskeletal-pain-for-over-12-months" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/130697.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">165</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1100</span> Navigating the Case-Based Learning Multimodal Learning Environment: A Qualitative Study Across the First-Year Medical Students</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Bhavani%20Veasuvalingam">Bhavani Veasuvalingam</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Case-based learning (CBL) is a popular instructional method aimed at bridging theory and clinical practice. This study aims to explore how a CBL mixed-modality curriculum influences students’ learning styles and the strategies that support their learning. An explanatory sequential mixed-methods study was employed: in the initial phase, the 44-item Felderman’s Index of Learning Style (ILS) questionnaire was administered to year one medical students (n=142), selected by convenience sampling, to describe their preferred learning styles. 
The qualitative phase utilised three focus group discussions (FGD) to explore in depth the multimodal learning styles exhibited by the students. In the ILS analysis, most students preferred a combination of learning styles, that is, reflective, sensing, visual and sequential (the RSVISeq style, 24.64%). The frequencies of learning preference from processing to understanding were well balanced: sequential-global (66.2%), sensing-intuitive (59.86%), active-reflective (57%), and visual-verbal (51.41%). The qualitative data yielded three major themes, namely Theme 1: CBL mixed modalities navigate learners’ learning styles; Theme 2: multimodal learners’ active learning strategies support learning; and Theme 3: CBL modalities facilitate turning theory into clinical knowledge. Both the quantitative and qualitative findings strongly point to the multimodal learning style of the year one medical students. Medical students utilise multimodal learning styles to attain clinical knowledge when learning with CBL mixed modalities. Educators’ awareness of multimodal learning styles is crucial for delivering CBL mixed modalities effectively, with strategic pedagogical support that enables students to engage with CBL and bridge theoretical knowledge into clinical practice. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=case-based%20learning" title="case-based learning">case-based learning</a>, <a href="https://publications.waset.org/abstracts/search?q=learnign%20style" title=" learning style"> learning style</a>, <a href="https://publications.waset.org/abstracts/search?q=medical%20students" title=" medical students"> medical students</a>, <a href="https://publications.waset.org/abstracts/search?q=learning" title=" learning"> learning</a> </p> <a href="https://publications.waset.org/abstracts/151162/navigating-the-case-based-learning-multimodal-learning-environment-a-qualitative-study-across-the-first-year-medical-students" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/151162.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">95</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1099</span> Analysing Techniques for Fusing Multimodal Data in Predictive Scenarios Using Convolutional Neural Networks</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Philipp%20Ruf">Philipp Ruf</a>, <a href="https://publications.waset.org/abstracts/search?q=Massiwa%20Chabbi"> Massiwa Chabbi</a>, <a href="https://publications.waset.org/abstracts/search?q=Christoph%20Reich"> Christoph Reich</a>, <a href="https://publications.waset.org/abstracts/search?q=Djaffar%20Ould-Abdeslam"> Djaffar Ould-Abdeslam</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In recent years, convolutional neural networks (CNN) have demonstrated high performance in image analysis, but oftentimes only structured data are available for a specific problem. By interpreting structured data as images, CNNs can effectively learn and extract valuable insights from tabular data, leading to improved predictive accuracy and uncovering hidden patterns that may not be apparent in traditional structured data analysis. 
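<p class="card-text">A minimal sketch of this tabular-to-image idea (assuming PyTorch; the min-max scaling and the square reshaping are illustrative choices, not necessarily the preprocessing used by the authors):</p>
<pre><code>import torch

def tabular_row_to_image(features, side=8):
    """Map a 1-D tabular feature vector onto a single-channel square 'image'.

    features: sequence of numeric features (min-max scaled, illustratively).
    side:     edge length of the output image; values are zero-padded to side*side.
    """
    x = torch.as_tensor(features, dtype=torch.float32)
    x = (x - x.min()) / (x.max() - x.min() + 1e-8)  # scale into [0, 1]
    padded = torch.zeros(side * side)
    padded[: x.numel()] = x[: side * side]          # pad (or truncate) to fit
    return padded.reshape(1, side, side)            # (channels, H, W) for a CNN

if __name__ == "__main__":
    row = [37.2, 118.0, 80.0, 5.4, 0.0, 1.0]  # e.g. vitals/lab values
    print(tabular_row_to_image(row).shape)    # torch.Size([1, 8, 8])
</code></pre>
<p class="card-text">The resulting tensor can then be stacked channel-wise with an existing image before being passed to the CNN, which is the fusion step the abstract goes on to describe.</p>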
Applying a single neural network to analyze multimodal data, i.e., both structured and unstructured information, can also bring significant advantages in terms of time complexity and energy efficiency. Converting structured data into images and merging them with existing visual material offers a promising solution for applying CNNs to multimodal datasets, such as those that often occur in a medical context. By employing suitable preprocessing techniques, structured data is transformed into image representations, where the respective features are expressed as different formations of colors and shapes. In an additional step, these representations are fused with existing images to incorporate both types of information. The resulting image is then analyzed using a CNN. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=CNN" title="CNN">CNN</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20processing" title=" image processing"> image processing</a>, <a href="https://publications.waset.org/abstracts/search?q=tabular%20data" title=" tabular data"> tabular data</a>, <a href="https://publications.waset.org/abstracts/search?q=mixed%20dataset" title=" mixed dataset"> mixed dataset</a>, <a href="https://publications.waset.org/abstracts/search?q=data%20transformation" title=" data transformation"> data transformation</a>, <a href="https://publications.waset.org/abstracts/search?q=multimodal%20fusion" title=" multimodal fusion"> multimodal fusion</a> </p> <a href="https://publications.waset.org/abstracts/171840/analysing-techniques-for-fusing-multimodal-data-in-predictive-scenarios-using-convolutional-neural-networks" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/171840.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">123</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1098</span> Dual Biometrics Fusion Based Recognition System</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Prakash">Prakash</a>, <a href="https://publications.waset.org/abstracts/search?q=Vikash%20Kumar"> Vikash Kumar</a>, <a href="https://publications.waset.org/abstracts/search?q=Vinay%20Bansal"> Vinay Bansal</a>, <a href="https://publications.waset.org/abstracts/search?q=L.%20N.%20Das"> L. N. Das</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Dual biometrics is a subfield of multimodal biometrics, which refers to the use of a variety of modalities, rather than just one, to identify and authenticate persons. Combining several modalities limits the risk of errors and leaves attackers little opportunity to harvest usable biometric information. Our goal is to collect the precise characteristics of the iris and palmprint, produce a fusion of both modalities, and ensure that authentication succeeds only when the biometrics match a particular user. After combining the two modalities, we obtained an effective strategy with a mean DI and EER of 2.41 and 5.21, respectively, and on this basis a biometric system is proposed. 
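<p class="card-text">A minimal sketch of the score-level step such a fusion system typically ends with: each matcher's scores are min-max normalized so the modalities are comparable, then combined by a weighted sum (the weight and the sample scores are illustrative assumptions, not the authors' values):</p>
<pre><code>import numpy as np

def min_max_normalize(scores):
    """Map raw matcher scores into [0, 1] so modalities are comparable."""
    s = np.asarray(scores, dtype=float)
    return (s - s.min()) / (s.max() - s.min() + 1e-12)

def fuse_scores(iris_scores, palm_scores, w_iris=0.6):
    """Weighted-sum score-level fusion of two modalities (illustrative weight)."""
    return (w_iris * min_max_normalize(iris_scores)
            + (1.0 - w_iris) * min_max_normalize(palm_scores))

if __name__ == "__main__":
    # Illustrative match scores of one probe against five enrolled identities.
    iris = [0.91, 0.40, 0.35, 0.55, 0.30]
    palm = [0.80, 0.45, 0.50, 0.42, 0.38]
    fused = fuse_scores(iris, palm)
    print("fused scores:", np.round(fused, 3))
    print("best match: identity", int(np.argmax(fused)))
</code></pre>
<p class="card-text">An EER like the one reported above is then obtained by sweeping a decision threshold over such fused genuine and impostor scores until the false-accept and false-reject rates coincide.</p>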
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=multimodal" title="multimodal">multimodal</a>, <a href="https://publications.waset.org/abstracts/search?q=fusion" title=" fusion"> fusion</a>, <a href="https://publications.waset.org/abstracts/search?q=palmprint" title=" palmprint"> palmprint</a>, <a href="https://publications.waset.org/abstracts/search?q=Iris" title=" Iris"> Iris</a>, <a href="https://publications.waset.org/abstracts/search?q=EER" title=" EER"> EER</a>, <a href="https://publications.waset.org/abstracts/search?q=DI" title=" DI"> DI</a> </p> <a href="https://publications.waset.org/abstracts/149996/dual-biometrics-fusion-based-recognition-system" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/149996.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">147</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1097</span> Aerodynamics of Spherical Combat Platform Levitation</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Aelina%20Franz">Aelina Franz</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In recent years, the scientific community has witnessed a paradigm shift in the exploration of unconventional levitation methods, particularly in the domain of spherical combat platforms. This paper explores aerodynamics and levitational dynamics inherent in these spheres by examining interactions at the quantum level. Our research unravels the nuanced aerodynamic phenomena governing the levitation of spherical combat platforms. Through an analysis of the quantum fluid dynamics surrounding these spheres, we reveal the crucial interactions between air resistance, surface irregularities, and the quantum fluctuations that influence their levitational behavior. Our findings challenge conventional understanding, providing a perspective on the aerodynamic forces at play during the levitation of spherical combat platforms. Furthermore, we propose design modifications and control strategies informed by both classical aerodynamics and quantum information processing principles. These advancements not only enhance the stability and maneuverability of the combat platforms but also open new avenues for exploration in the interdisciplinary realm of engineering and quantum information sciences. This paper aims to contribute to levitation technologies and their applications in the field of spherical combat platforms. We anticipate that our work will stimulate further research to create a deeper understanding of aerodynamics and quantum phenomena in unconventional levitation systems. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=spherical%20combat%20platforms" title="spherical combat platforms">spherical combat platforms</a>, <a href="https://publications.waset.org/abstracts/search?q=levitation%20technologies" title=" levitation technologies"> levitation technologies</a>, <a href="https://publications.waset.org/abstracts/search?q=aerodynamics" title=" aerodynamics"> aerodynamics</a>, <a href="https://publications.waset.org/abstracts/search?q=maneuverable%20platforms" title=" maneuverable platforms"> maneuverable platforms</a> </p> <a href="https://publications.waset.org/abstracts/183818/aerodynamics-of-spherical-combat-platform-levitation" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/183818.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">57</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1096</span> Characteristics of Business Models of Industrial-Internet-of-Things Platforms</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Peter%20Kress">Peter Kress</a>, <a href="https://publications.waset.org/abstracts/search?q=Alexander%20Pflaum"> Alexander Pflaum</a>, <a href="https://publications.waset.org/abstracts/search?q=Ulrich%20Loewen"> Ulrich Loewen</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The number of Internet-of-Things (IoT) platforms is steadily increasing across various industries, especially for smart factories, smart homes and smart mobility. Also in the manufacturing industry, the number of Industrial-IoT platforms is growing. Both IT players, start-ups and increasingly also established industry players and small-and-medium-enterprises introduce offerings for the connection of industrial equipment on platforms, enabled by advanced information and communication technology. Beside the offered functionalities, the established ecosystem of partners around a platform is one of the key differentiators to generate a competitive advantage. The key question is how platform operators design the business model around their platform to attract a high number of customers and partners to co-create value for the entire ecosystem. The present research tries to answer this question by determining the key characteristics of business models of successful platforms in the manufacturing industry. To achieve that, the authors selected an explorative qualitative research approach and created an inductive comparative case study. The authors generated valuable descriptive insights of the business model elements (e.g., value proposition, pricing model or partnering model) of various established platforms. Furthermore, patterns across the various cases were identified to derive propositions for the successful design of business models of platforms in the manufacturing industry. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=industrial-internet-of-things" title="industrial-internet-of-things">industrial-internet-of-things</a>, <a href="https://publications.waset.org/abstracts/search?q=business%20models" title=" business models"> business models</a>, <a href="https://publications.waset.org/abstracts/search?q=platforms" title=" platforms"> platforms</a>, <a href="https://publications.waset.org/abstracts/search?q=ecosystems" title=" ecosystems"> ecosystems</a>, <a href="https://publications.waset.org/abstracts/search?q=case%20study" title=" case study"> case study</a> </p> <a href="https://publications.waset.org/abstracts/53010/characteristics-of-business-models-of-industrial-internet-of-things-platforms" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/53010.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">243</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1095</span> A Multimodal Approach to Improve the Performance of Biometric System</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Chander%20Kant">Chander Kant</a>, <a href="https://publications.waset.org/abstracts/search?q=Arun%20Kumar"> Arun Kumar</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Biometric systems automatically recognize an individual based on his/her physiological and behavioral characteristics. There are also some traits like weight, age, height etc. that may not provide reliable user recognition because of there common and temporary nature. These traits are called soft bio metric traits. Although soft bio metric traits are lack of permanence to uniquely and reliably identify an individual, yet they provide some beneficial evidence about the user identity and may improve the system performance. Here in this paper, we have proposed an approach for integrating the soft bio metrics with fingerprint and face to improve the performance of personal authentication system. In our approach we have proposed a combined architecture of three different sensors to elevate the system performance. The approach includes, soft bio metrics, fingerprint and face traits. We have also proven the efficiency of proposed system regarding FAR (False Acceptance Ratio) and total response time, with the help of MUBI (Multimodal Bio metrics Integration) software. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=FAR" title="FAR">FAR</a>, <a href="https://publications.waset.org/abstracts/search?q=minutiae%20point" title=" minutiae point"> minutiae point</a>, <a href="https://publications.waset.org/abstracts/search?q=multimodal%20bio%20metrics" title=" multimodal bio metrics"> multimodal bio metrics</a>, <a href="https://publications.waset.org/abstracts/search?q=primary%20bio%20metric" title=" primary bio metric"> primary bio metric</a>, <a href="https://publications.waset.org/abstracts/search?q=soft%20bio%20metric" title=" soft bio metric"> soft bio metric</a> </p> <a href="https://publications.waset.org/abstracts/12625/a-multimodal-approach-to-improve-the-performance-of-biometric-system" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/12625.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">346</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1094</span> Filmic and Verbal Metafphors</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Manana%20Rusieshvili">Manana Rusieshvili</a>, <a href="https://publications.waset.org/abstracts/search?q=Rusudan%20Dolidze"> Rusudan Dolidze</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This paper aims at 1) investigating the ways in which a traditional, monomodal written verbal metaphor can be transposed as a monomodal non-verbal (visual) or multimodal (aural and -visual) filmic metaphor ; 2) exploring similarities and differences in the process of encoding and decoding of monomodal and multimodal metaphors. The empiric data, on which the research is based, embrace three sources: the novel by Harry Gray ‘The Hoods’, the script of the film ‘Once Upon a Time in America’ (English version by David Mills) and the resultant film by Sergio Leone. In order to achieve the above mentioned goals, the research focuses on the following issues: 1) identification of verbal and non-verbal monomodal and multimodal metaphors in the above-mentioned sources and 2) investigation of the ways and modes the specific written monomodal metaphors appearing in the novel and the script are enacted in the film and become visual, aural or visual-aural filmic metaphors ; 3) study of the factors which play an important role in contributing to the encoding and decoding of the filmic metaphor. The collection and analysis of the data were carried out in two stages: firstly, the relevant data, i.e. the monomodal metaphors from the novel, the script and the film were identified and collected. In the second, final stage the metaphors taken from all of the three sources were analysed, compared and two types of phenomena were selected for discussion: (1) the monomodal written metaphors found in the novel and/or in the script which become monomodal visual/aural metaphors in the film; (2) the monomodal written metaphors found in the novel and/or in the script which become multimodal, filmic (visual-aural) metaphors in the film. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=encoding" title="encoding">encoding</a>, <a href="https://publications.waset.org/abstracts/search?q=decoding" title=" decoding"> decoding</a>, <a href="https://publications.waset.org/abstracts/search?q=filmic%20metaphor" title=" filmic metaphor"> filmic metaphor</a>, <a href="https://publications.waset.org/abstracts/search?q=multimodality" title=" multimodality"> multimodality</a> </p> <a href="https://publications.waset.org/abstracts/24927/filmic-and-verbal-metafphors" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/24927.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">526</span> </span> </div> </div> <ul class="pagination"> <li class="page-item disabled"><span class="page-link">‹</span></li> <li class="page-item active"><span class="page-link">1</span></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=multimodal%20platforms&page=2">2</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=multimodal%20platforms&page=3">3</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=multimodal%20platforms&page=4">4</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=multimodal%20platforms&page=5">5</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=multimodal%20platforms&page=6">6</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=multimodal%20platforms&page=7">7</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=multimodal%20platforms&page=8">8</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=multimodal%20platforms&page=9">9</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=multimodal%20platforms&page=10">10</a></li> <li class="page-item disabled"><span class="page-link">...</span></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=multimodal%20platforms&page=37">37</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=multimodal%20platforms&page=38">38</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=multimodal%20platforms&page=2" rel="next">›</a></li> </ul> </div> </main> <footer> <div id="infolinks" class="pt-3 pb-2"> <div class="container"> <div style="background-color:#f5f5f5;" class="p-3"> <div class="row"> <div class="col-md-2"> <ul class="list-unstyled"> About <li><a href="https://waset.org/page/support">About Us</a></li> <li><a href="https://waset.org/page/support#legal-information">Legal</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/WASET-16th-foundational-anniversary.pdf">WASET celebrates its 16th foundational anniversary</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Account <li><a href="https://waset.org/profile">My Account</a></li> </ul> </div> <div class="col-md-2"> <ul 
class="list-unstyled"> Explore <li><a href="https://waset.org/disciplines">Disciplines</a></li> <li><a href="https://waset.org/conferences">Conferences</a></li> <li><a href="https://waset.org/conference-programs">Conference Program</a></li> <li><a href="https://waset.org/committees">Committees</a></li> <li><a href="https://publications.waset.org">Publications</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Research <li><a href="https://publications.waset.org/abstracts">Abstracts</a></li> <li><a href="https://publications.waset.org">Periodicals</a></li> <li><a href="https://publications.waset.org/archive">Archive</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Open Science <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Science-Philosophy.pdf">Open Science Philosophy</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Science-Award.pdf">Open Science Award</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Society-Open-Science-and-Open-Innovation.pdf">Open Innovation</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Postdoctoral-Fellowship-Award.pdf">Postdoctoral Fellowship Award</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Scholarly-Research-Review.pdf">Scholarly Research Review</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Support <li><a href="https://waset.org/page/support">Support</a></li> <li><a href="https://waset.org/profile/messages/create">Contact Us</a></li> <li><a href="https://waset.org/profile/messages/create">Report Abuse</a></li> </ul> </div> </div> </div> </div> </div> <div class="container text-center"> <hr style="margin-top:0;margin-bottom:.3rem;"> <a href="https://creativecommons.org/licenses/by/4.0/" target="_blank" class="text-muted small">Creative Commons Attribution 4.0 International License</a> <div id="copy" class="mt-2">© 2024 World Academy of Science, Engineering and Technology</div> </div> </footer> <a href="javascript:" id="return-to-top"><i class="fas fa-arrow-up"></i></a> <div class="modal" id="modal-template"> <div class="modal-dialog"> <div class="modal-content"> <div class="row m-0 mt-1"> <div class="col-md-12"> <button type="button" class="close" data-dismiss="modal" aria-label="Close"><span aria-hidden="true">×</span></button> </div> </div> <div class="modal-body"></div> </div> </div> </div> <script src="https://cdn.waset.org/static/plugins/jquery-3.3.1.min.js"></script> <script src="https://cdn.waset.org/static/plugins/bootstrap-4.2.1/js/bootstrap.bundle.min.js"></script> <script src="https://cdn.waset.org/static/js/site.js?v=150220211556"></script> <script> jQuery(document).ready(function() { /*jQuery.get("https://publications.waset.org/xhr/user-menu", function (response) { jQuery('#mainNavMenu').append(response); });*/ jQuery.get({ url: "https://publications.waset.org/xhr/user-menu", cache: false }).then(function(response){ jQuery('#mainNavMenu').append(response); }); }); </script> </body> </html>