<!DOCTYPE html> <html lang="en" dir="ltr"> <head> <!-- Google tag (gtag.js) --> <script async src="https://www.googletagmanager.com/gtag/js?id=G-P63WKM1TM1"></script> <script> window.dataLayer = window.dataLayer || []; function gtag(){dataLayer.push(arguments);} gtag('js', new Date()); gtag('config', 'G-P63WKM1TM1'); </script> <!-- Yandex.Metrika counter --> <script type="text/javascript" > (function(m,e,t,r,i,k,a){m[i]=m[i]||function(){(m[i].a=m[i].a||[]).push(arguments)}; m[i].l=1*new Date(); for (var j = 0; j < document.scripts.length; j++) {if (document.scripts[j].src === r) { return; }} k=e.createElement(t),a=e.getElementsByTagName(t)[0],k.async=1,k.src=r,a.parentNode.insertBefore(k,a)}) (window, document, "script", "https://mc.yandex.ru/metrika/tag.js", "ym"); ym(55165297, "init", { clickmap:false, trackLinks:true, accurateTrackBounce:true, webvisor:false }); </script> <noscript><div><img src="https://mc.yandex.ru/watch/55165297" style="position:absolute; left:-9999px;" alt="" /></div></noscript> <!-- /Yandex.Metrika counter --> <!-- Matomo --> <!-- End Matomo Code --> <title>Search results for: computer human interaction</title> <meta name="description" content="Search results for: computer human interaction"> <meta name="keywords" content="computer human interaction"> <meta name="viewport" content="width=device-width, initial-scale=1, minimum-scale=1, maximum-scale=1, user-scalable=no"> <meta charset="utf-8"> <link href="https://cdn.waset.org/favicon.ico" type="image/x-icon" rel="shortcut icon"> <link href="https://cdn.waset.org/static/plugins/bootstrap-4.2.1/css/bootstrap.min.css" rel="stylesheet"> <link href="https://cdn.waset.org/static/plugins/fontawesome/css/all.min.css" rel="stylesheet"> <link href="https://cdn.waset.org/static/css/site.css?v=150220211555" rel="stylesheet"> </head> <body> <header> <div class="container"> <nav class="navbar navbar-expand-lg navbar-light"> <a class="navbar-brand" href="https://waset.org"> <img 
src="https://cdn.waset.org/static/images/wasetc.png" alt="Open Science Research Excellence" title="Open Science Research Excellence" /> </a> <button class="d-block d-lg-none navbar-toggler ml-auto" type="button" data-toggle="collapse" data-target="#navbarMenu" aria-controls="navbarMenu" aria-expanded="false" aria-label="Toggle navigation"> <span class="navbar-toggler-icon"></span> </button> <div class="w-100"> <div class="d-none d-lg-flex flex-row-reverse"> <form method="get" action="https://waset.org/search" class="form-inline my-2 my-lg-0"> <input class="form-control mr-sm-2" type="search" placeholder="Search Conferences" value="computer human interaction" name="q" aria-label="Search"> <button class="btn btn-light my-2 my-sm-0" type="submit"><i class="fas fa-search"></i></button> </form> </div> <div class="collapse navbar-collapse mt-1" id="navbarMenu"> <ul class="navbar-nav ml-auto align-items-center" id="mainNavMenu"> <li class="nav-item"> <a class="nav-link" href="https://waset.org/conferences" title="Conferences in 2024/2025/2026">Conferences</a> </li> <li class="nav-item"> <a class="nav-link" href="https://waset.org/disciplines" title="Disciplines">Disciplines</a> </li> <li class="nav-item"> <a class="nav-link" href="https://waset.org/committees" rel="nofollow">Committees</a> </li> <li class="nav-item dropdown"> <a class="nav-link dropdown-toggle" href="#" id="navbarDropdownPublications" role="button" data-toggle="dropdown" aria-haspopup="true" aria-expanded="false"> Publications </a> <div class="dropdown-menu" aria-labelledby="navbarDropdownPublications"> <a class="dropdown-item" href="https://publications.waset.org/abstracts">Abstracts</a> <a class="dropdown-item" href="https://publications.waset.org">Periodicals</a> <a class="dropdown-item" href="https://publications.waset.org/archive">Archive</a> </div> </li> <li class="nav-item"> <a class="nav-link" href="https://waset.org/page/support" title="Support">Support</a> </li> </ul> </div> </div> </nav> </div> 
</header> <main> <div class="container mt-4"> <div class="row"> <div class="col-md-9 mx-auto"> <form method="get" action="https://publications.waset.org/abstracts/search"> <div id="custom-search-input"> <div class="input-group"> <i class="fas fa-search"></i> <input type="text" class="search-query" name="q" placeholder="Author, Title, Abstract, Keywords" value="computer human interaction"> <input type="submit" class="btn_search" value="Search"> </div> </div> </form> </div> </div> <div class="row mt-3"> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Commenced</strong> in January 2007</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Frequency:</strong> Monthly</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Edition:</strong> International</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Paper Count:</strong> 13527</div> </div> </div> </div> <h1 class="mt-3 mb-3 text-center" style="font-size:1.6rem;">Search results for: computer human interaction</h1> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">13467</span> Hand Detection and Recognition for Malay Sign Language</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Mohd%20Noah%20A.%20Rahman">Mohd Noah A. Rahman</a>, <a href="https://publications.waset.org/abstracts/search?q=Afzaal%20H.%20Seyal"> Afzaal H. Seyal</a>, <a href="https://publications.waset.org/abstracts/search?q=Norhafilah%20Bara"> Norhafilah Bara</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Interest keeps growing in developing software applications that interface with computers and peripheral devices through gestures of the human body, such as hand movements. 
Hand gesture detection and recognition based on computer vision techniques remains a challenging task. The aim is to provide a more natural, innovative, and sophisticated means of non-verbal communication, such as sign language, in human-computer interaction. This paper explores hand detection and hand gesture recognition using a vision-based approach. Hand detection and recognition are performed in skin color spaces such as HSV and YCrCb. However, some limitations need to be considered: almost all skin color space models are sensitive to rapidly changing or mixed lighting conditions, and recognition accuracy depends on restrictions such as the distance of the user’s hand from the webcam and the posture and size of the hand. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=hand%20detection" title="hand detection">hand detection</a>, <a href="https://publications.waset.org/abstracts/search?q=hand%20gesture" title=" hand gesture"> hand gesture</a>, <a href="https://publications.waset.org/abstracts/search?q=hand%20recognition" title=" hand recognition"> hand recognition</a>, <a href="https://publications.waset.org/abstracts/search?q=sign%20language" title=" sign language"> sign language</a> </p> <a href="https://publications.waset.org/abstracts/46765/hand-detection-and-recognition-for-malay-sign-language" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/46765.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">306</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">13466</span> Interactive Shadow Play Animation System</h5> <div class="card-body"> <p
class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Bo%20Wan">Bo Wan</a>, <a href="https://publications.waset.org/abstracts/search?q=Xiu%20Wen"> Xiu Wen</a>, <a href="https://publications.waset.org/abstracts/search?q=Lingling%20An"> Lingling An</a>, <a href="https://publications.waset.org/abstracts/search?q=Xiaoling%20Ding"> Xiaoling Ding</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The paper describes a Chinese shadow play animation system based on Kinect. Users without any professional training can personally manipulate the shadow characters to give a shadow play performance through their body actions, and can obtain a video of the performance by issuing a record command to the system. In our system, Kinect captures human movement and voice command data. A gesture recognition module controls changes of the shadow play scenes. After packaging the data from Kinect and the result from the gesture recognition module, VRPN transmits them to the server side, which uses the information to control the motion of the shadow characters and the video recording. The system not only achieves human-computer interaction but also realizes interaction between people. It offers an entertaining experience, is easy to operate for all ages, and, most importantly, its application to Chinese shadow play helps protect the art of shadow play animation. 
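The control flow this abstract describes (recognized gestures change scenes, voice commands drive recording) can be sketched as a small dispatcher. The gesture names, scene list, and voice commands below are illustrative assumptions, not details from the paper; in the real system this input would arrive from Kinect via VRPN:

```python
# Hypothetical gesture and scene names -- the abstract does not specify them.
SCENES = ["prologue", "battle", "finale"]

class ScenePlayer:
    """Toy stand-in for the shadow play controller: recognized gestures
    step through scenes, and voice commands toggle video recording."""

    def __init__(self):
        self.index = 0          # index of the current scene
        self.recording = False  # video recording state

    def on_gesture(self, gesture):
        # Output of the gesture recognition module controls scene changes.
        if gesture == "swipe_right":
            self.index = min(self.index + 1, len(SCENES) - 1)
        elif gesture == "swipe_left":
            self.index = max(self.index - 1, 0)
        return SCENES[self.index]

    def on_voice(self, command):
        # Voice commands start/stop the recorded shadow play video.
        self.recording = (command == "record")

player = ScenePlayer()
player.on_gesture("swipe_right")  # advance to the next scene
```

A dispatcher like this keeps the recognition module decoupled from the scene renderer, which matches the abstract's split between gesture recognition and server-side character control.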
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=shadow%20play%20animation" title="shadow play animation">shadow play animation</a>, <a href="https://publications.waset.org/abstracts/search?q=Kinect" title=" Kinect"> Kinect</a>, <a href="https://publications.waset.org/abstracts/search?q=gesture%20recognition" title=" gesture recognition"> gesture recognition</a>, <a href="https://publications.waset.org/abstracts/search?q=VRPN" title=" VRPN"> VRPN</a>, <a href="https://publications.waset.org/abstracts/search?q=HCI" title=" HCI"> HCI</a> </p> <a href="https://publications.waset.org/abstracts/19293/interactive-shadow-play-animation-system" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/19293.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">401</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">13465</span> A Transformer-Based Approach for Multi-Human 3D Pose Estimation Using Color and Depth Images</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Qiang%20Wang">Qiang Wang</a>, <a href="https://publications.waset.org/abstracts/search?q=Hongyang%20Yu"> Hongyang Yu</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Multi-human 3D pose estimation is a challenging task in computer vision, which aims to recover the 3D joint locations of multiple people from multi-view images. In contrast to traditional methods, which typically only use color (RGB) images as input, our approach utilizes both color and depth (D) information contained in RGB-D images. 
We also employ a transformer-based model as the backbone of our approach, which is able to capture long-range dependencies and has been shown to perform well on various sequence modeling tasks. Our method is trained and tested on the Carnegie Mellon University (CMU) Panoptic dataset, which contains a diverse set of indoor and outdoor scenes with multiple people in varying poses and clothing. We evaluate the performance of our model on the standard 3D pose estimation metric of mean per-joint position error (MPJPE). Our results show that the transformer-based approach outperforms traditional methods and achieves competitive results on the CMU Panoptic dataset. We also perform an ablation study to understand the impact of different design choices on the overall performance of the model. In summary, our work demonstrates the effectiveness of using a transformer-based approach with RGB-D images for multi-human 3D pose estimation and has potential applications in real-world scenarios such as human-computer interaction, robotics, and augmented reality. 
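As a concrete illustration of the evaluation metric named above, MPJPE averages the Euclidean distance between predicted and ground-truth joint positions. The array shapes and toy values below are assumptions for the sketch, not the paper's actual data layout:

```python
import numpy as np

def mpjpe(pred, gt):
    """Mean per-joint position error: average Euclidean distance between
    predicted and ground-truth 3D joint locations.

    pred, gt: arrays of shape (num_people, num_joints, 3).
    """
    return float(np.linalg.norm(pred - gt, axis=-1).mean())

# Toy example: two people, three joints each; the prediction is offset
# by a constant 3 units along x, so every per-joint error is exactly 3.
gt = np.zeros((2, 3, 3))
pred = gt.copy()
pred[..., 0] += 3.0
print(mpjpe(pred, gt))  # 3.0
```

Reported MPJPE values on benchmarks such as CMU Panoptic are typically in millimeters, with lower being better.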
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=multi-human%203D%20pose%20estimation" title="multi-human 3D pose estimation">multi-human 3D pose estimation</a>, <a href="https://publications.waset.org/abstracts/search?q=RGB-D%20images" title=" RGB-D images"> RGB-D images</a>, <a href="https://publications.waset.org/abstracts/search?q=transformer" title=" transformer"> transformer</a>, <a href="https://publications.waset.org/abstracts/search?q=3D%20joint%20locations" title=" 3D joint locations"> 3D joint locations</a> </p> <a href="https://publications.waset.org/abstracts/162957/a-transformer-based-approach-for-multi-human-3d-pose-estimation-using-color-and-depth-images" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/162957.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">80</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">13464</span> On the Problems of Human Concept Learning within Terminological Systems</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Farshad%20Badie">Farshad Badie</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The central focus of this article is on the fact that knowledge is constructed through an interaction between humans’ experiences and their conceptions of constructed concepts. Logical characterisation of ‘human inductive learning over human’s constructed concepts’ within terminological systems and providing a logical background for theorising over the Human Concept Learning Problem (HCLP) in terminological systems are the main contributions of this research. 
This research connects with the topics ‘human learning’, ‘epistemology’, ‘cognitive modelling’, ‘knowledge representation’ and ‘ontological reasoning’. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=human%20concept%20learning" title="human concept learning">human concept learning</a>, <a href="https://publications.waset.org/abstracts/search?q=concept%20construction" title=" concept construction"> concept construction</a>, <a href="https://publications.waset.org/abstracts/search?q=knowledge%20construction" title=" knowledge construction"> knowledge construction</a>, <a href="https://publications.waset.org/abstracts/search?q=terminological%20systems" title=" terminological systems"> terminological systems</a> </p> <a href="https://publications.waset.org/abstracts/54379/on-the-problems-of-human-concept-learning-within-terminological-systems" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/54379.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">325</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">13463</span> The Analyzer: Clustering Based System for Improving Business Productivity by Analyzing User Profiles to Enhance Human Computer Interaction</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Dona%20Shaini%20Abhilasha%20Nanayakkara">Dona Shaini Abhilasha Nanayakkara</a>, <a href="https://publications.waset.org/abstracts/search?q=Kurugamage%20Jude%20Pravinda%20Gregory%20Perera"> Kurugamage Jude Pravinda Gregory Perera</a> </p> <p class="card-text"><strong>Abstract:</strong></p> E-commerce platforms have revolutionized the shopping experience, offering convenient ways 
for consumers to make purchases. To improve interactions with customers and optimize marketing strategies, it is essential for businesses to understand user behavior, preferences, and needs on these platforms. This paper recommends that businesses customize interactions with users based on their behavioral patterns, leveraging data-driven analysis and machine learning techniques. Businesses can improve engagement and boost the adoption of e-commerce platforms by aligning behavioral patterns with user goals of usability and satisfaction. We propose TheAnalyzer, a clustering-based system designed to enhance business productivity by analyzing user profiles and improving human-computer interaction. TheAnalyzer seamlessly integrates with business applications, collecting relevant data points based on users' natural interactions without additional burdens such as questionnaires or surveys. It defines five key user analytics as features for its dataset, which are easily captured through users' interactions with e-commerce platforms. This research presents a study demonstrating the successful distinction of users into specific groups based on the five key analytics considered by TheAnalyzer. With the assistance of domain experts, customized business rules can be attached to each group, enabling TheAnalyzer to influence business applications and provide an enhanced personalized user experience. The outcomes are evaluated quantitatively and qualitatively, demonstrating that utilizing TheAnalyzer’s capabilities can optimize business outcomes, enhance customer satisfaction, and drive sustainable growth. The findings of this research contribute to the advancement of personalized interactions in e-commerce platforms. By leveraging user behavioral patterns and analyzing both new and existing users, businesses can effectively tailor their interactions to improve customer satisfaction and loyalty, and ultimately drive sales. 
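The pipeline this abstract describes (capture behavioral analytics, standardize them, cluster users into groups, then attach business rules per group) can be sketched with a minimal k-means. The five analytics names and the two synthetic user populations below are illustrative assumptions, not TheAnalyzer's actual features or data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Two synthetic user populations over five assumed behavioral analytics
# (e.g. session minutes, pages viewed, cart adds, searches, purchases).
casual = rng.normal([5, 10, 0.5, 2, 0.2], 1.0, size=(50, 5))
power = rng.normal([40, 80, 6.0, 15, 3.0], 1.0, size=(50, 5))
X = np.vstack([casual, power])

# Standardize so no single analytic dominates the distance metric.
X = (X - X.mean(axis=0)) / X.std(axis=0)

def kmeans(X, k=2, iters=20):
    """Minimal k-means: alternate nearest-center assignment and centroid
    update, starting from evenly spaced data points as initial centers."""
    centers = X[np.linspace(0, len(X) - 1, k).astype(int)]
    for _ in range(iters):
        labels = np.linalg.norm(X[:, None] - centers, axis=2).argmin(axis=1)
        centers = np.array([X[labels == j].mean(axis=0) for j in range(k)])
    return labels

labels = kmeans(X)
# Each resulting cluster can then be attached to a customized business rule.
```

In a production system one would use a tuned library implementation and choose the number of clusters from the data; the sketch only shows why standardization precedes clustering.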
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=data%20clustering" title="data clustering">data clustering</a>, <a href="https://publications.waset.org/abstracts/search?q=data%20standardization" title=" data standardization"> data standardization</a>, <a href="https://publications.waset.org/abstracts/search?q=dimensionality%20reduction" title=" dimensionality reduction"> dimensionality reduction</a>, <a href="https://publications.waset.org/abstracts/search?q=human%20computer%20interaction" title=" human computer interaction"> human computer interaction</a>, <a href="https://publications.waset.org/abstracts/search?q=user%20profiling" title=" user profiling"> user profiling</a> </p> <a href="https://publications.waset.org/abstracts/168329/the-analyzer-clustering-based-system-for-improving-business-productivity-by-analyzing-user-profiles-to-enhance-human-computer-interaction" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/168329.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">72</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">13462</span> Frequency Recognition Models for Steady State Visual Evoked Potential Based Brain Computer Interfaces (BCIs)</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Zeki%20Oralhan">Zeki Oralhan</a>, <a href="https://publications.waset.org/abstracts/search?q=Mahmut%20Tokmak%C3%A7%C4%B1"> Mahmut Tokmakçı</a> </p> <p class="card-text"><strong>Abstract:</strong></p> SSVEP based brain computer interface (BCI) systems have been preferred, because of high information transfer rate (ITR) and practical use. 
ITR is a parameter of overall BCI performance, and a high ITR requires, among other things, high accuracy. In this study, we investigated recognizing SSVEP in a shorter time and with a lower error rate. In the experiment, there were 8 flickers on a liquid crystal display (LCD). Participants gazed for 10 seconds at the flicker that had a 12 Hz frequency and a 50% duty cycle ratio. During the experiment, EEG signals were acquired via an EEG device. The EEG data was filtered in a preprocessing session. After that, Canonical Correlation Analysis (CCA), Multiset CCA (MsetCCA), phase constrained CCA (PCCA), and Multiway CCA (MwayCCA) methods were applied to the data. The highest average accuracy value was reached when MsetCCA was applied. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=brain%20computer%20interface" title="brain computer interface">brain computer interface</a>, <a href="https://publications.waset.org/abstracts/search?q=canonical%20correlation%20analysis" title=" canonical correlation analysis"> canonical correlation analysis</a>, <a href="https://publications.waset.org/abstracts/search?q=human%20computer%20interaction" title=" human computer interaction"> human computer interaction</a>, <a href="https://publications.waset.org/abstracts/search?q=SSVEP" title=" SSVEP"> SSVEP</a> </p> <a href="https://publications.waset.org/abstracts/54342/frequency-recognition-models-for-steady-state-visual-evoked-potential-based-brain-computer-interfaces-bcis" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/54342.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">266</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">13461</span> Memorabilia of Suan 
Sunandha through Interactive User Interface</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Nalinee%20Sophatsathit">Nalinee Sophatsathit</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The objectives of Memorabilia of Suan Sunandha are to develop a general knowledge presentation about the historical royal garden through interactive graphic simulation techniques and to employ high-functionality context to enhance interactive user navigation. The approach infers non-intrusive display of relevant history in response to situational context. The user navigates through a virtual reality campus consisting of new and restored buildings. A flashback presentation of historical information, in the form of photos, paintings, and textual descriptions, is displayed alongside each building the user passes. To keep the presentation lively, the graphical simulation is built as a serendipity game play so that the user can both learn and enjoy the educational tour. The benefits of this human-computer interaction development are twofold. First, a lively presentation technique and situational context modeling are developed that entail a usable paradigm of knowledge and information presentation combinations. Second, cost-effective training and promotion for both internal personnel and public visitors to learn about and keep informed of this historical royal garden can be furnished without the need for a dedicated public relations service. Future improvement on graphic simulation and ability-based display can extend this work to be more realistic, user-friendly, and informative for all. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=interactive%20user%20navigation" title="interactive user navigation">interactive user navigation</a>, <a href="https://publications.waset.org/abstracts/search?q=high-functionality%20context" title=" high-functionality context"> high-functionality context</a>, <a href="https://publications.waset.org/abstracts/search?q=situational%20context" title=" situational context"> situational context</a>, <a href="https://publications.waset.org/abstracts/search?q=human-computer%20interaction" title=" human-computer interaction"> human-computer interaction</a> </p> <a href="https://publications.waset.org/abstracts/4369/memorabilia-of-suan-sunandha-through-interactive-user-interface" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/4369.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">357</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">13460</span> An Erudite Technique for Face Detection and Recognition Using Curvature Analysis</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=S.%20Jagadeesh%20Kumar">S. Jagadeesh Kumar</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Face detection and recognition is an important technology for image database management, video surveillance, and human computer interfaces (HCI). Face recognition is a rapidly developing method that has been extensively applied in forensics, for example in criminal identification, secure access control, and custodial security. 
This paper recommends a technique based on curvature analysis (CA) that has a lower incidence of false positives, operates under different lighting environments, and removes artifacts introduced during image acquisition using a ring correction in polar coordinate (RCP) method. The technique applies mean and median filtering to remove the artifacts, working in polar coordinates during image acquisition. Experimental results for face detection and recognition confirm good performance even under diagonal orientation and pose variation. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=curvature%20analysis" title="curvature analysis">curvature analysis</a>, <a href="https://publications.waset.org/abstracts/search?q=ring%20correction%20in%20polar%20coordinate%20method" title=" ring correction in polar coordinate method"> ring correction in polar coordinate method</a>, <a href="https://publications.waset.org/abstracts/search?q=face%20detection" title=" face detection"> face detection</a>, <a href="https://publications.waset.org/abstracts/search?q=face%20recognition" title=" face recognition"> face recognition</a>, <a href="https://publications.waset.org/abstracts/search?q=human%20computer%20interaction" title=" human computer interaction"> human computer interaction</a> </p> <a href="https://publications.waset.org/abstracts/70748/an-erudite-technique-for-face-detection-and-recognition-using-curvature-analysis" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/70748.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">286</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">13459</span> Advancing Trustworthy Human-robot Collaboration: 
Challenges and Opportunities in Diverse European Industrial Settings</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Margarida%20Porf%C3%ADrio%20Tom%C3%A1s">Margarida Porfírio Tomás</a>, <a href="https://publications.waset.org/abstracts/search?q=Paula%20Pereira"> Paula Pereira</a>, <a href="https://publications.waset.org/abstracts/search?q=Jos%C3%A9%20Manuel%20Palma%20Oliveira"> José Manuel Palma Oliveira</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The decline in employment rates across sectors like industry and construction is exacerbated by an aging workforce. This has far-reaching implications for the economy, including skills gaps, labour shortages, productivity challenges due to physical limitations, and workplace safety concerns. To sustain the workforce and pension systems, technology plays a pivotal role. Robots provide valuable support to human workers, and effective human-robot interaction is essential. FORTIS, a Horizon project, aims to address these challenges by creating a comprehensive Human-Robot Interaction (HRI) solution. This solution focuses on multi-modal communication and multi-aspect interaction, with a primary goal of maintaining a human-centric approach. By meeting the needs of both human workers and robots, FORTIS aims to facilitate efficient and safe collaboration. The project encompasses three key activities: 1) A Human-Centric Approach involving data collection, annotation, understanding human behavioural cognition, and contextual human-robot information exchange. 2) A Robotic-Centric Focus addressing the unique requirements of robots during the perception and evaluation of human behaviour. 3) Ensuring Human-Robot Trustworthiness through measures such as human-robot digital twins, safety protocols, and resource allocation. 
Factor Social, a project partner, will analyse psycho-physiological signals that influence human factors, particularly in hazardous working conditions. The analysis will be conducted using a combination of case studies, structured interviews, questionnaires, and a comprehensive literature review. However, the adoption of novel technologies, particularly those involving human-robot interaction, often faces hurdles related to acceptance. To address this challenge, FORTIS will draw upon insights from Social Sciences and Humanities (SSH), including risk perception and technology acceptance models. Throughout its lifecycle, FORTIS will uphold a human-centric approach, leveraging SSH methodologies to inform the design and development of solutions. This project received funding from European Union’s Horizon 2020/Horizon Europe research and innovation program under grant agreement No 101135707 (FORTIS). <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=skills%20gaps" title="skills gaps">skills gaps</a>, <a href="https://publications.waset.org/abstracts/search?q=productivity%20challenges" title=" productivity challenges"> productivity challenges</a>, <a href="https://publications.waset.org/abstracts/search?q=workplace%20safety" title=" workplace safety"> workplace safety</a>, <a href="https://publications.waset.org/abstracts/search?q=human-robot%20interaction" title=" human-robot interaction"> human-robot interaction</a>, <a href="https://publications.waset.org/abstracts/search?q=human-centric%20approach" title=" human-centric approach"> human-centric approach</a>, <a href="https://publications.waset.org/abstracts/search?q=social%20sciences%20and%20humanities" title=" social sciences and humanities"> social sciences and humanities</a>, <a href="https://publications.waset.org/abstracts/search?q=risk%20perception" title=" risk perception"> risk perception</a> </p> <a 
href="https://publications.waset.org/abstracts/184035/advancing-trustworthy-human-robot-collaboration-challenges-and-opportunities-in-diverse-european-industrial-settings" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/184035.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">52</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">13458</span> Hybrid Velocity Control Approach for Tethered Aerial Vehicle</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Lovesh%20Goyal">Lovesh Goyal</a>, <a href="https://publications.waset.org/abstracts/search?q=Pushkar%20Dave"> Pushkar Dave</a>, <a href="https://publications.waset.org/abstracts/search?q=Prajyot%20Jadhav"> Prajyot Jadhav</a>, <a href="https://publications.waset.org/abstracts/search?q=GonnaYaswanth"> GonnaYaswanth</a>, <a href="https://publications.waset.org/abstracts/search?q=Sakshi%20Giri"> Sakshi Giri</a>, <a href="https://publications.waset.org/abstracts/search?q=Sahil%20Dharme"> Sahil Dharme</a>, <a href="https://publications.waset.org/abstracts/search?q=Rushika%20Joshi"> Rushika Joshi</a>, <a href="https://publications.waset.org/abstracts/search?q=Rishabh%20Verma"> Rishabh Verma</a>, <a href="https://publications.waset.org/abstracts/search?q=Shital%20Chiddarwar"> Shital Chiddarwar</a> </p> <p class="card-text"><strong>Abstract:</strong></p> With the rising need for human-robot interaction, researchers have proposed and tested multiple models with varying degrees of success. A few of these models performed on aerial platforms are commonly known as Tethered Aerial Systems. 
These aerial vehicles can be powered continuously through a tether cable, which addresses the predicament of the short battery life of quadcopters. Such systems can reduce human effort in industrial, medical, agricultural, and service applications. However, a significant challenge in employing them is attaining smooth and secure robot-human interaction while ensuring that the forces from the tether remain within a range that is comfortable for humans. To tackle this problem, a hybrid control method is implemented that switches between two control techniques: a constant control input and a steady-state solution. The constant control approach is applied when the person is far from the target location, where the error can be treated as approximately constant. The controller switches to the steady-state approach when the person comes within a specified range of the goal position. Both strategies take human velocity feedback into account. This hybrid technique improves the outcome by helping the person reach the desired location while minimizing unwanted disturbance to the human throughout the process, thereby keeping the interaction between the robot and the subject smooth.
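As a rough illustration of the switching scheme described in this abstract, the following sketch implements a distance-based hybrid velocity controller. The gains, switching radius, and feedback weighting are hypothetical values chosen for the example, not parameters from the paper.

```python
# Illustrative sketch of a distance-based hybrid velocity controller.
# SWITCH_RADIUS, U_CONST, K_SS, and K_HUMAN are hypothetical values,
# not taken from the paper.
SWITCH_RADIUS = 1.0   # [m] distance at which the controller changes mode
U_CONST = 0.5         # [m/s] constant control input used far from the goal
K_SS = 0.8            # proportional gain of the steady-state solution
K_HUMAN = 0.1         # weighting of the human velocity feedback term

def control(distance_to_goal, human_velocity):
    """Commanded velocity from remaining distance and human velocity feedback."""
    if distance_to_goal > SWITCH_RADIUS:
        # Far from the target: constant input, error treated as constant
        return U_CONST + K_HUMAN * human_velocity
    # Within the switching radius: steady-state solution drives error to zero
    return K_SS * distance_to_goal + K_HUMAN * human_velocity

# Simulated approach toward a goal 5 m away (no human disturbance)
position, goal, dt = 0.0, 5.0, 0.1
for _ in range(400):
    position += control(goal - position, human_velocity=0.0) * dt
print(round(goal - position, 3))  # remaining tracking error
```

Near the switching radius the two control laws produce different commands, so a real implementation would likely add hysteresis around the boundary to avoid chattering between modes.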
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=unmanned%20aerial%20vehicle" title="unmanned aerial vehicle">unmanned aerial vehicle</a>, <a href="https://publications.waset.org/abstracts/search?q=tethered%20system" title=" tethered system"> tethered system</a>, <a href="https://publications.waset.org/abstracts/search?q=physical%20human-robot%20interaction" title=" physical human-robot interaction"> physical human-robot interaction</a>, <a href="https://publications.waset.org/abstracts/search?q=hybrid%20control" title=" hybrid control"> hybrid control</a> </p> <a href="https://publications.waset.org/abstracts/156082/hybrid-velocity-control-approach-for-tethered-aerial-vehicle" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/156082.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">98</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">13457</span> Tuned Mass Damper Effects of Stationary People on Structural Damping of Footbridge Due to Dynamic Interaction in Vertical Motion</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=M.%20Yoneda">M. Yoneda</a> </p> <p class="card-text"><strong>Abstract:</strong></p> It is known that stationary human occupants act as dynamic mass-spring-damper systems and can change the modal properties of civil engineering structures. This paper describes the full scale measurement to explain the tuned mass damper effects of stationary people on structural damping of footbridge with center span length of 33 m. A human body can be represented by a lumped system consisting of masses, springs, and dashpots. 
Complex eigenvalue calculation is also conducted by using ISO5982:1981 human model (two degree of freedom system). Based on experimental and analytical results for the footbridge with the stationary people in the standing position, it is demonstrated that stationary people behave as a tuned mass damper and that ISO5982:1981 human model can explain the structural damping characteristics measured in the field. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=dynamic%20interaction" title="dynamic interaction">dynamic interaction</a>, <a href="https://publications.waset.org/abstracts/search?q=footbridge" title=" footbridge"> footbridge</a>, <a href="https://publications.waset.org/abstracts/search?q=stationary%20people" title=" stationary people"> stationary people</a>, <a href="https://publications.waset.org/abstracts/search?q=structural%20damping" title=" structural damping"> structural damping</a> </p> <a href="https://publications.waset.org/abstracts/47682/tuned-mass-damper-effects-of-stationary-people-on-structural-damping-of-footbridge-due-to-dynamic-interaction-in-vertical-motion" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/47682.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">274</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">13456</span> Application of Industrial Ergonomics in Vehicle Service System Design</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Zhao%20Yu">Zhao Yu</a>, <a href="https://publications.waset.org/abstracts/search?q=Zhi-Nan%20Zhang"> Zhi-Nan Zhang</a> </p> <p class="card-text"><strong>Abstract:</strong></p> More and more interactive 
devices are used in the transportation service system. Our mobile phones, on-board computers, and Head-Up Displays (HUDs) can all serve as tools of the in-car service system. People can access smart systems through different terminals such as mobile phones, computers, tablets, and even their cars and watches. Different forms of terminal provide different qualities of interaction through their various human-computer interaction modes. These new interactive devices require good ergonomic design at each stage of the design process. Drawing on the theory of human factors and ergonomics, this paper compares three types of interactive devices across four driving tasks. Forty-eight drivers were chosen to use the three interactive devices (mobile phones, on-board computers, and HUDs) in a simulated driving process. The subjects evaluated ergonomic performance and subjective workload after the process and were encouraged to offer suggestions for improving the interactive devices. The results show that the interactive devices have different advantages, especially in non-driving tasks such as information and entertainment. Compared with the mobile phone and on-board computer groups, the HUD group had shorter response times in most tasks. Performance in the slow-down and emergency-braking tasks was less accurate than that of the control group, which may be because the haptic feedback in these two tasks is harder to distinguish than visual information. Simulated driving is also helpful in improving the design of in-vehicle interactive devices. The paper summarizes the ergonomic characteristics of the three in-vehicle interactive devices, and the research provides a reference for the future design of in-vehicle interactive devices through an ergonomic approach, ensuring a good interaction relationship between the driver and the in-vehicle service system.
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=human%20factors" title="human factors">human factors</a>, <a href="https://publications.waset.org/abstracts/search?q=industrial%20ergonomics" title=" industrial ergonomics"> industrial ergonomics</a>, <a href="https://publications.waset.org/abstracts/search?q=transportation%20system" title=" transportation system"> transportation system</a>, <a href="https://publications.waset.org/abstracts/search?q=usability" title=" usability"> usability</a>, <a href="https://publications.waset.org/abstracts/search?q=vehicle%20user%20interface" title=" vehicle user interface"> vehicle user interface</a> </p> <a href="https://publications.waset.org/abstracts/111147/application-of-industrial-ergonomics-in-vehicle-service-system-design" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/111147.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">139</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">13455</span> A Human Centered Design of an Exoskeleton Using Multibody Simulation</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Sebastian%20K%C3%B6lbl">Sebastian Kölbl</a>, <a href="https://publications.waset.org/abstracts/search?q=Thomas%20Reitmaier"> Thomas Reitmaier</a>, <a href="https://publications.waset.org/abstracts/search?q=Mathias%20Hartmann"> Mathias Hartmann</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Trial and error approaches to adapt wearable support structures to human physiology are time consuming and elaborate. 
However, during preliminary design, the focus lies on understanding the interaction between the exoskeleton and the human body in terms of forces and moments, namely body mechanics. For the study at hand, a multi-body simulation approach has been enhanced to evaluate actual forces and moments in a human dummy model with and without a digital mock-up of an active exoskeleton. To this end, different motion data have been gathered and processed to perform a musculoskeletal analysis. The motion data comprise ground reaction forces, electromyography (EMG) data, and human motion data recorded with a marker-based motion capture system. Based on the experimental data, the response of the human dummy model has been calibrated. Subsequently, the scalable human dummy model, in conjunction with the motion data, is connected with the exoskeleton structure. The results of the human-machine interaction (HMI) simulation platform are, in particular, the resulting contact forces and human joint forces, which are compared with admissible values with regard to human physiology. Furthermore, the platform provides feedback for the sizing of the exoskeleton structure in terms of resulting interface forces (stress justification) and the effect of its compliance. A stepwise approach for the setup and validation of the modeling strategy is presented, and the potential for a more time- and cost-effective development of wearable support structures is outlined.
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=assistive%20devices" title="assistive devices">assistive devices</a>, <a href="https://publications.waset.org/abstracts/search?q=ergonomic%20design" title=" ergonomic design"> ergonomic design</a>, <a href="https://publications.waset.org/abstracts/search?q=inverse%20dynamics" title=" inverse dynamics"> inverse dynamics</a>, <a href="https://publications.waset.org/abstracts/search?q=inverse%20kinematics" title=" inverse kinematics"> inverse kinematics</a>, <a href="https://publications.waset.org/abstracts/search?q=multibody%20simulation" title=" multibody simulation"> multibody simulation</a> </p> <a href="https://publications.waset.org/abstracts/151467/a-human-centered-design-of-an-exoskeleton-using-multibody-simulation" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/151467.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">162</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">13454</span> Exploring the Effectiveness of Robotic Companions Through the Use of Symbiotic Autonomous Plant Care Robots</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Angelos%20Kaminis">Angelos Kaminis</a>, <a href="https://publications.waset.org/abstracts/search?q=Dakotah%20Stirnweis"> Dakotah Stirnweis</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Advances in robotic technology have driven the development of improved robotic companions in the last couple decades. However, commercially available robotic companions lack the ability to create an emotional connection with their user. 
By developing a companion robot that has a symbiotic relationship with a plant, an element of co-dependency is introduced into the human companion robot dynamic. This companion robot, while theoretically capable of providing most of the plant’s needs, still requires human interaction for watering, moving obstacles, and solar panel cleaning. To facilitate the interaction between human and robot, the robot is capable of limited auditory and visual communication to help express its and the plant’s needs. This paper seeks to fully describe the Autonomous Plant Care Robot system and its symbiotic relationship with its botanical ward and the plant and robot’s dependent relationship with their owner. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=symbiotic" title="symbiotic">symbiotic</a>, <a href="https://publications.waset.org/abstracts/search?q=robotics" title=" robotics"> robotics</a>, <a href="https://publications.waset.org/abstracts/search?q=autonomous" title=" autonomous"> autonomous</a>, <a href="https://publications.waset.org/abstracts/search?q=plant-care" title=" plant-care"> plant-care</a>, <a href="https://publications.waset.org/abstracts/search?q=companion" title=" companion"> companion</a> </p> <a href="https://publications.waset.org/abstracts/147471/exploring-the-effectiveness-of-robotic-companions-through-the-use-of-symbiotic-autonomous-plant-care-robots" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/147471.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">144</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">13453</span> Analysis of the Use of a NAO Robot to Improve Social Skills in Children with Autism Spectrum Disorder in Saudi 
Arabia</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Eman%20Alarfaj">Eman Alarfaj</a>, <a href="https://publications.waset.org/abstracts/search?q=Hissah%20Alabdullatif"> Hissah Alabdullatif</a>, <a href="https://publications.waset.org/abstracts/search?q=Huda%20Alabdullatif"> Huda Alabdullatif</a>, <a href="https://publications.waset.org/abstracts/search?q=Ghazal%20Albakri"> Ghazal Albakri</a>, <a href="https://publications.waset.org/abstracts/search?q=Nor%20Shahriza%20Abdul%20Karim"> Nor Shahriza Abdul Karim</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Autism Spectrum Disorder (ASD) is widespread among children and affects their social, communication, and interaction skills. Robotics technology has proven to be a significantly helpful tool that enables such individuals to overcome their disabilities, and it is increasingly used in ASD therapy. The purpose of this research is to show how Nao robots can improve the social skills of children with autism in Saudi Arabia by interacting with the autistic child and performing a number of tasks. The objective of this research is to identify, implement, and test the effectiveness of the modules for interacting with ASD children in an autism center in Saudi Arabia. The methodology in this study followed the ten layers of protocol that need to be followed during any human-robot interaction. In addition, the TEACCH Autism Program was adopted to elicit the scenario modules. Six different qualified interaction modules were elicited and designed in this study; the robot will be programmed to perform these modules in a series of controlled interaction sessions with the autistic children to enhance their social skills.
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=humanoid%20robot%20Nao" title="humanoid robot Nao">humanoid robot Nao</a>, <a href="https://publications.waset.org/abstracts/search?q=ASD" title=" ASD"> ASD</a>, <a href="https://publications.waset.org/abstracts/search?q=human-robot%20interaction" title=" human-robot interaction"> human-robot interaction</a>, <a href="https://publications.waset.org/abstracts/search?q=social%20skills" title=" social skills"> social skills</a> </p> <a href="https://publications.waset.org/abstracts/87694/analysis-of-the-use-of-a-nao-robot-to-improve-social-skills-in-children-with-autism-spectrum-disorder-in-saudi-arabia" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/87694.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">264</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">13452</span> Proteomics Associated with Colonization of Human Enteric Pathogen on Solanum lycopersicum</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Neha%20Bhadauria">Neha Bhadauria</a>, <a href="https://publications.waset.org/abstracts/search?q=Indu%20Gaur"> Indu Gaur</a>, <a href="https://publications.waset.org/abstracts/search?q=Shilpi%20Shilpi"> Shilpi Shilpi</a>, <a href="https://publications.waset.org/abstracts/search?q=Susmita%20Goswami"> Susmita Goswami</a>, <a href="https://publications.waset.org/abstracts/search?q=Prabir%20K.%20Paul"> Prabir K. Paul</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The aerial surface of plants colonized by Human Enteric Pathogens ()has been implicated in outbreaks of enteric diseases in humans. 
The practice of organic farming, primarily the use of animal dung as manure and of sewage water for irrigation, is the most significant source of enteric pathogens on the surfaces of leaves, fruits, and vegetables. The present work aims to provide insight into the molecular mechanism of interaction of Human Enteric Pathogens or their metabolites with cell-wall receptors in plants. Tomato plants grown under aseptic conditions at a 12-hour L/D photoperiod, 25±1°C, and 75% RH were inoculated individually with S. fonticola and K. pneumoniae. Leaves from the treated plants were sampled after 24 and 48 hours of incubation. The cell-wall and cytoplasmic proteins were extracted and isocratically separated on 1D SDS-PAGE. The sampled leaves were also subjected to formaldehyde treatment prior to isolation of cytoplasmic proteins in order to study protein-protein interactions induced by Human Enteric Pathogens. Protein bands extracted from the gel were subjected to MALDI-TOF-TOF MS analysis. The foremost interaction of Human Enteric Pathogens on the plant surface was found to be with cell wall-bound receptors, which possibly sets up a wave of critical protein-protein interactions in the cytoplasm. The study revealed the expression and suppression of specific cytoplasmic and cell wall-bound proteins, some of them important components of signaling pathways. The results also demonstrated HEP-induced rearrangement of signaling pathways, which is possibly crucial for the adaptation of these pathogens to the plant surface. It can be concluded that controlling the over-expression or suppression of these specific proteins, which rearranges the signaling pathways, could reduce outbreaks of food-borne illness.
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=cytoplasmic%20protein" title="cytoplasmic protein">cytoplasmic protein</a>, <a href="https://publications.waset.org/abstracts/search?q=cell%20wall-bound%20protein" title=" cell wall-bound protein"> cell wall-bound protein</a>, <a href="https://publications.waset.org/abstracts/search?q=Human%20Enteric%20Pathogen%20%28HEP%29" title=" Human Enteric Pathogen (HEP)"> Human Enteric Pathogen (HEP)</a>, <a href="https://publications.waset.org/abstracts/search?q=protein-protein%20interaction" title=" protein-protein interaction "> protein-protein interaction </a> </p> <a href="https://publications.waset.org/abstracts/60905/proteomics-associated-with-colonization-of-human-enteric-pathogen-on-solanum-lycopersicum" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/60905.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">277</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">13451</span> Fitness Action Recognition Based on MediaPipe</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Zixuan%20Xu">Zixuan Xu</a>, <a href="https://publications.waset.org/abstracts/search?q=Yichun%20Lou"> Yichun Lou</a>, <a href="https://publications.waset.org/abstracts/search?q=Yang%20Song"> Yang Song</a>, <a href="https://publications.waset.org/abstracts/search?q=Zihuai%20Lin"> Zihuai Lin</a> </p> <p class="card-text"><strong>Abstract:</strong></p> MediaPipe is an open-source machine learning computer vision framework that can be ported into a multi-platform environment, which makes it easier to use it to recognize the human activity. 
Based on this framework, many human recognition systems have been created, but the fundamental issue is the recognition of human behavior and posture. In this paper, two methods are proposed to recognize human gestures based on MediaPipe, the first one uses the Adaptive Boosting algorithm to recognize a series of fitness gestures, and the second one uses the Fast Dynamic Time Warping algorithm to recognize 413 continuous fitness actions. These two methods are also applicable to any human posture movement recognition. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=computer%20vision" title="computer vision">computer vision</a>, <a href="https://publications.waset.org/abstracts/search?q=MediaPipe" title=" MediaPipe"> MediaPipe</a>, <a href="https://publications.waset.org/abstracts/search?q=adaptive%20boosting" title=" adaptive boosting"> adaptive boosting</a>, <a href="https://publications.waset.org/abstracts/search?q=fast%20dynamic%20time%20warping" title=" fast dynamic time warping"> fast dynamic time warping</a> </p> <a href="https://publications.waset.org/abstracts/160758/fitness-action-recognition-based-on-mediapipe" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/160758.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">118</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">13450</span> Study of Multimodal Resources in Interactions Involving Children with Autistic Spectrum Disorders</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Fernanda%20Miranda%20da%20Cruz">Fernanda Miranda da Cruz</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This paper aims 
to systematize, descriptively and analytically, the relations between language, body, and the material world explored in a specific empirical context: everyday co-presence interactions between children diagnosed with Autistic Spectrum Disorder (ASD) and various interlocutors. We work with a 20-hour audiovisual corpus in Brazilian Portuguese. The analysis focuses on 1) everyday interactions involving subjects diagnosed with ASD, approached from an embodied-interaction perspective; 2) the status and role of gestures, the body, and the material world in the construction and constitution of human interaction, and their relation to linguistic-cognitive processes and Autistic Spectrum Disorders; and 3) questions related to the field of video analysis, such as procedures for recording interactions in complex environments (involving many participants, the use of objects, and body movement); the construction of audiovisual corpora for linguistic-interaction research; and the invitation to a visual analytical mentality toward human social interactions that takes in not only the verbal aspects that constitute them but also physical space, the body, and the material world.
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=autism%20spectrum%20disease" title="autism spectrum disease">autism spectrum disease</a>, <a href="https://publications.waset.org/abstracts/search?q=multimodality" title=" multimodality"> multimodality</a>, <a href="https://publications.waset.org/abstracts/search?q=social%20interaction" title=" social interaction"> social interaction</a>, <a href="https://publications.waset.org/abstracts/search?q=non-verbal%20interactions" title=" non-verbal interactions"> non-verbal interactions</a> </p> <a href="https://publications.waset.org/abstracts/121698/study-of-multimodal-resources-in-interactions-involving-children-with-autistic-spectrum-disorders" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/121698.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">114</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">13449</span> Comparison of Interactive Performance of Clicking Tasks Using Cursor Control Devices under Different Feedback Modes</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Jinshou%20Shi">Jinshou Shi</a>, <a href="https://publications.waset.org/abstracts/search?q=Xiaozhou%20Zhou"> Xiaozhou Zhou</a>, <a href="https://publications.waset.org/abstracts/search?q=Yingwei%20Zhou"> Yingwei Zhou</a>, <a href="https://publications.waset.org/abstracts/search?q=Tuoyang%20Zhou"> Tuoyang Zhou</a>, <a href="https://publications.waset.org/abstracts/search?q=Ning%20Li"> Ning Li</a>, <a href="https://publications.waset.org/abstracts/search?q=Chi%20Zhang"> Chi Zhang</a>, <a 
href="https://publications.waset.org/abstracts/search?q=Zhanshuo%20Zhang"> Zhanshuo Zhang</a>, <a href="https://publications.waset.org/abstracts/search?q=Ziang%20Chen"> Ziang Chen</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In order to select the optimal interaction method for common computer click tasks, a click experiment was conducted following the ISO 9241-9 task paradigm, using four common input methods (mouse, trackball, touch, and eye control) under visual feedback, auditory feedback, and no feedback. Analysis of movement time, throughput, and accuracy shows that touch control has the shortest movement time, higher accuracy and throughput than the other methods, and the best overall performance. In addition, movement time for click operations with auditory feedback is significantly lower than with the other two feedback methods in every input-method condition. Regarding the size of the click target, it is found that when the target is too small (less than 14px), click performance is reduced in all respects, so it is proposed that interface buttons be designed no smaller than 28px. The advantages and disadvantages of each input and feedback method are discussed in detail, and the findings on the click operation can be applied to the design of buttons in interactive interfaces.
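ISO 9241-9 evaluates pointing devices through Fitts'-law throughput, the measure this abstract reports. A minimal sketch of the standard computation (effective width from endpoint scatter, effective index of difficulty over mean movement time) is below; the trial data are made-up values for illustration, not measurements from the study.

```python
import math
import statistics

# Sketch of the ISO 9241-9 throughput computation (Fitts'-law based).
def throughput(distance, endpoints_x, movement_times):
    """Throughput in bits/s for one distance/width condition.

    distance: nominal target distance [px]
    endpoints_x: selection coordinates along the task axis [px]
    movement_times: per-trial movement times [s]
    """
    # Effective target width from the scatter of selection endpoints
    w_e = 4.133 * statistics.stdev(endpoints_x)
    # Effective index of difficulty [bits]
    id_e = math.log2(distance / w_e + 1)
    # Throughput = effective difficulty over mean movement time
    return id_e / statistics.mean(movement_times)

# Hypothetical condition: 256 px target distance, five trials
tp = throughput(256, [250, 258, 253, 261, 247], [0.61, 0.58, 0.65, 0.60, 0.62])
print(round(tp, 2))  # bits per second
```

The 4.133 multiplier maps the endpoint standard deviation to an effective width covering about 96% of selections, as specified in the standard; this is what lets throughput fold speed and accuracy into a single figure for comparing devices.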
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=cursor%20control%20performance" title="cursor control performance">cursor control performance</a>, <a href="https://publications.waset.org/abstracts/search?q=feedback" title=" feedback"> feedback</a>, <a href="https://publications.waset.org/abstracts/search?q=human%20computer%20interaction" title=" human computer interaction"> human computer interaction</a>, <a href="https://publications.waset.org/abstracts/search?q=throughput" title=" throughput"> throughput</a> </p> <a href="https://publications.waset.org/abstracts/130066/comparison-of-interactive-performance-of-clicking-tasks-using-cursor-control-devices-under-different-feedback-modes" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/130066.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">196</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">13448</span> The Power of Symbol in the Powerful Symbol: Case Study of Symbol Visualization Change in the Form of Pelinggih in Bali</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=I%20Nyoman%20Larry%20Julianto">I Nyoman Larry Julianto</a>, <a href="https://publications.waset.org/abstracts/search?q=Pribadi%20Widodo"> Pribadi Widodo</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The phenomenon of cultural change is the result of the process of shifting, reducing and adding elements of cultural systems because of the process of interaction with the environment. Interestingly in the temple area in Bali, there is a phenomenon of symbol visualization change in the form of pelinggih, which is in the shaped of the car. 
As a result of the sacralization process of the symbol, its essential function as a place of worship is retained. Hindu communities in Bali can accept this phenomenon in their religious life as part of today's process of cultural acculturation. Through an interpretive ethnographic study, this research attempts to understand the 'creative concept' behind the materialization of that symbol in its process of interaction. The results state that the interactional value of the change in symbol visualization is constructed from the application of the 'value' and 'meaning' of the previous pelinggih. The ritual procession and the reinforcement of the mythical mind lead the 'value' of the changed visualization of the pelinggih toward a sacred, religious conception. In the future, the human mind may develop in a more functional direction, but interaction with the surrounding social environment will not eliminate the mythological value, so the visualization of the symbol in the form of a car-shaped pelinggih will become the identity of a new cultural heritage. Understanding the influence of mental representation on human spiritual awareness could be the subject of further research.
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=the%20power%20of%20symbol" title="the power of symbol">the power of symbol</a>, <a href="https://publications.waset.org/abstracts/search?q=visual%20change" title=" visual change"> visual change</a>, <a href="https://publications.waset.org/abstracts/search?q=pelinggih" title=" pelinggih"> pelinggih</a>, <a href="https://publications.waset.org/abstracts/search?q=Bali" title=" Bali"> Bali</a> </p> <a href="https://publications.waset.org/abstracts/86256/the-power-of-symbol-in-the-powerful-symbol-case-study-of-symbol-visualization-change-in-the-form-of-pelinggih-in-bali" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/86256.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">165</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">13447</span> Emotion Detection in a General Human-Robot Interaction System Optimized for Embedded Platforms</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Julio%20Vega">Julio Vega</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Expression recognition is a field of Artificial Intelligence whose main objectives are to recognize basic forms of affective expression that appear on people’s faces and contributing to behavioral studies. In this work, a ROS node has been developed that, based on Deep Learning techniques, is capable of detecting the facial expressions of the people that appear in the image. These algorithms were optimized so that they can be executed in real time on an embedded platform. 
The experiments were carried out on a PC with a USB camera and on a Raspberry Pi 4 with a PiCamera. The final results show a viable system that is capable of working in real time even on an embedded platform. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=python" title="python">python</a>, <a href="https://publications.waset.org/abstracts/search?q=low-cost" title=" low-cost"> low-cost</a>, <a href="https://publications.waset.org/abstracts/search?q=raspberry%20pi" title=" raspberry pi"> raspberry pi</a>, <a href="https://publications.waset.org/abstracts/search?q=emotion%20detection" title=" emotion detection"> emotion detection</a>, <a href="https://publications.waset.org/abstracts/search?q=human-robot%20interaction" title=" human-robot interaction"> human-robot interaction</a>, <a href="https://publications.waset.org/abstracts/search?q=ROS%20node" title=" ROS node"> ROS node</a> </p> <a href="https://publications.waset.org/abstracts/151311/emotion-detection-in-a-general-human-robot-interaction-system-optimized-for-embedded-platforms" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/151311.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">129</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">13446</span> Analyzing the Perceptions of Emotions in Aesthetic Music</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Abigail%20Wiafe">Abigail Wiafe</a>, <a href="https://publications.waset.org/abstracts/search?q=Charles%20Nutrokpor"> Charles Nutrokpor</a>, <a href="https://publications.waset.org/abstracts/search?q=Adelaide%20Oduro-Asante"> Adelaide Oduro-Asante</a> </p> <p 
class="card-text"><strong>Abstract:</strong></p> The advancement of technology is rapidly making people more receptive to computer-generated music, which requires minimal human intervention. Though algorithms are applied to generate music, the human experience of emotion is still being explored. Thus, this study investigates the emotions humans experience when listening to computer-generated music that possesses aesthetic qualities. Forty-two subjects participated in the survey; they were recruited through convenience sampling. Subjects listened to the computer-generated music and evaluated the emotions they experienced through an online questionnaire, using a Likert scale to rate their emotional levels after the listening experience. The findings suggest that computer-generated music possesses aesthetic qualities that do not affect subjects' emotions as long as they are pleased with the music. Furthermore, computer-generated music has creativity and expression of its own; even so, because the music produced is meaningless, the computational models developed are unable to present emotional content in music as humans do. 
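The Likert-scale analysis described in this abstract can be sketched as a simple aggregation of per-emotion ratings. This is a minimal illustration only: the emotion names and scores below are hypothetical placeholders, not the study's data.

```python
from statistics import mean, median

# Hypothetical Likert responses (1 = very low ... 5 = very high),
# one list of subject ratings per emotion probed in the questionnaire.
responses = {
    "joy":     [4, 5, 3, 4, 5],
    "sadness": [2, 1, 2, 3, 1],
    "tension": [3, 3, 2, 4, 3],
}

def summarize(likert):
    """Report the mean and median rating for each emotion."""
    return {
        emotion: {"mean": round(mean(scores), 2), "median": median(scores)}
        for emotion, scores in likert.items()
    }

summary = summarize(responses)
print(summary["joy"])  # {'mean': 4.2, 'median': 4}
```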
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=aesthetic" title="aesthetic">aesthetic</a>, <a href="https://publications.waset.org/abstracts/search?q=algorithms" title=" algorithms"> algorithms</a>, <a href="https://publications.waset.org/abstracts/search?q=emotions" title=" emotions"> emotions</a>, <a href="https://publications.waset.org/abstracts/search?q=computer-generated%20music" title=" computer-generated music"> computer-generated music</a> </p> <a href="https://publications.waset.org/abstracts/148498/analyzing-the-perceptions-of-emotions-in-aesthetic-music" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/148498.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">135</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">13445</span> Affective Robots: Evaluation of Automatic Emotion Recognition Approaches on a Humanoid Robot towards Emotionally Intelligent Machines</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Silvia%20Santano%20Guill%C3%A9n">Silvia Santano Guillén</a>, <a href="https://publications.waset.org/abstracts/search?q=Luigi%20Lo%20Iacono"> Luigi Lo Iacono</a>, <a href="https://publications.waset.org/abstracts/search?q=Christian%20Meder"> Christian Meder</a> </p> <p class="card-text"><strong>Abstract:</strong></p> One of the main aims of current social robotic research is to improve the robots&rsquo; abilities to interact with humans. In order to achieve an interaction similar to that among humans, robots should be able to communicate in an intuitive and natural way and appropriately interpret human affects during social interactions. 
Just as humans are able to recognize emotions in other humans, machines are capable of extracting information from the various ways humans convey emotions&mdash;including facial expression, speech, gesture or text&mdash;and using this information for improved human-computer interaction. This can be described as Affective Computing, an interdisciplinary field that expands into otherwise unrelated fields like psychology and cognitive science and involves the research and development of systems that can recognize and interpret human affects. Embedding these emotional capabilities in humanoid robots is the foundation of the concept of Affective Robots, whose objective is to make robots capable of sensing the user&rsquo;s current mood and personality traits and adapting their behavior in the most appropriate manner accordingly. In this paper, the emotion recognition capabilities of the humanoid robot Pepper are experimentally explored, based on the facial expressions for the so-called basic emotions, as well as how it performs in contrast to other state-of-the-art approaches, using both expression databases compiled in academic environments and real subjects showing posed expressions as well as spontaneous emotional reactions. The experiments&rsquo; results show that the detection accuracy amongst the evaluated approaches differs substantially. The introduced experiments offer a general structure and approach for conducting such experimental evaluations. The paper further suggests that the most meaningful results are obtained by conducting experiments with real subjects expressing the emotions as spontaneous reactions. 
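The per-approach accuracy comparison this abstract describes can be sketched as follows. The emotion labels and predictions below are hypothetical placeholders for illustration, not the paper's actual results.

```python
# Minimal sketch of comparing detection accuracy across approaches,
# assuming each approach's predictions are aligned with ground-truth labels.
def accuracy(predicted, truth):
    """Fraction of test faces whose predicted basic emotion matches the label."""
    if len(predicted) != len(truth):
        raise ValueError("prediction/label lists must be the same length")
    return sum(p == t for p, t in zip(predicted, truth)) / len(truth)

truth      = ["happiness", "anger", "fear", "happiness", "surprise"]
approach_a = ["happiness", "anger", "sadness", "happiness", "surprise"]
approach_b = ["happiness", "fear", "fear", "sadness", "surprise"]

scores = {"approach_a": accuracy(approach_a, truth),
          "approach_b": accuracy(approach_b, truth)}
print(scores)  # {'approach_a': 0.8, 'approach_b': 0.6}
```

Reporting accuracy per approach on the same labeled test set is what makes the substantial differences the abstract mentions directly comparable.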
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=affective%20computing" title="affective computing">affective computing</a>, <a href="https://publications.waset.org/abstracts/search?q=emotion%20recognition" title=" emotion recognition"> emotion recognition</a>, <a href="https://publications.waset.org/abstracts/search?q=humanoid%20robot" title=" humanoid robot"> humanoid robot</a>, <a href="https://publications.waset.org/abstracts/search?q=human-robot-interaction%20%28HRI%29" title=" human-robot-interaction (HRI)"> human-robot-interaction (HRI)</a>, <a href="https://publications.waset.org/abstracts/search?q=social%20robots" title=" social robots"> social robots</a> </p> <a href="https://publications.waset.org/abstracts/78467/affective-robots-evaluation-of-automatic-emotion-recognition-approaches-on-a-humanoid-robot-towards-emotionally-intelligent-machines" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/78467.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">235</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">13444</span> Integrated Gesture and Voice-Activated Mouse Control System</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Dev%20Pratap%20Singh">Dev Pratap Singh</a>, <a href="https://publications.waset.org/abstracts/search?q=Harshika%20Hasija"> Harshika Hasija</a>, <a href="https://publications.waset.org/abstracts/search?q=Ashwini%20S."> Ashwini S.</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The project aims to provide a touchless, intuitive interface for human-computer interaction, enabling users to control their computers using hand gestures 
and voice commands. The system leverages advanced computer vision techniques using the Media Pipe framework and OpenCV to detect and interpret real-time hand gestures, transforming them into mouse actions such as clicking, dragging, and scrolling. Additionally, the integration of a voice assistant powered by the speech recognition library allows for seamless execution of tasks like web searches, location navigation, and gesture control in the system through voice commands. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=gesture%20recognition" title="gesture recognition">gesture recognition</a>, <a href="https://publications.waset.org/abstracts/search?q=hand%20tracking" title=" hand tracking"> hand tracking</a>, <a href="https://publications.waset.org/abstracts/search?q=machine%20learning" title=" machine learning"> machine learning</a>, <a href="https://publications.waset.org/abstracts/search?q=convolutional%20neural%20networks" title=" convolutional neural networks"> convolutional neural networks</a>, <a href="https://publications.waset.org/abstracts/search?q=natural%20language%20processing" title=" natural language processing"> natural language processing</a>, <a href="https://publications.waset.org/abstracts/search?q=voice%20assistant" title=" voice assistant"> voice assistant</a> </p> <a href="https://publications.waset.org/abstracts/193896/integrated-gesture-and-voice-activated-mouse-control-system" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/193896.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">10</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">13443</span> Musical Composition by Computer with Inspiration from Files of Different Media 
Types</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Cassandra%20Pratt%20Romero">Cassandra Pratt Romero</a>, <a href="https://publications.waset.org/abstracts/search?q=Andres%20Gomez%20de%20Silva%20Garza"> Andres Gomez de Silva Garza</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This paper describes a computational system designed to imitate human inspiration during musical composition. The system is called MIS (Musical Inspiration Simulator). The MIS system is inspired by media to which human beings are exposed daily (visual, textual, or auditory) to create new musical compositions based on the emotions detected in said media. After building the system we carried out a series of evaluations with volunteer users who used MIS to compose music based on images, texts, and audio files. The volunteers were asked to judge the harmoniousness and innovation in the system's compositions. An analysis of the results points to the difficulty of computational analysis of the characteristics of the media to which we are exposed daily, as human emotions have a subjective character. This observation will direct future improvements in the system. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=human%20inspiration" title="human inspiration">human inspiration</a>, <a href="https://publications.waset.org/abstracts/search?q=musical%20composition" title=" musical composition"> musical composition</a>, <a href="https://publications.waset.org/abstracts/search?q=musical%20composition%20by%20computer" title=" musical composition by computer"> musical composition by computer</a>, <a href="https://publications.waset.org/abstracts/search?q=theory%20of%20sensation%20and%20human%20perception" title=" theory of sensation and human perception"> theory of sensation and human perception</a> </p> <a href="https://publications.waset.org/abstracts/90897/musical-composition-by-computer-with-inspiration-from-files-of-different-media-types" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/90897.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">183</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">13442</span> Understanding the Impact of Ambience, Acoustics, and Chroma on User Experience through Different Mediums and Study Scenarios</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Mushty%20Srividya">Mushty Srividya</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Humans that inhabit a designed space consciously or unconsciously accept the spaces which have an impact on how they perceive, feel and act accordingly. Spaces that are more interactive and communicative with the human senses become more interesting. Interaction in architecture is the art of building relationships between the user and the spaces. 
Often, spaces are form-based, function-based, or aesthetically pleasing, but not interactive with the user, even though interactivity has a greater impact on how the user perceives and appreciates the designed space. It is therefore necessary for a designer to understand and appreciate the human character and design accordingly, so that the user gets the flexibility to explore and experience the space for themselves rather than having the designed space dictate how to perceive or feel in it. In this interaction between designed spaces and the user, a designer needs to understand the spatial potential and the user&rsquo;s needs, because the design language varies with varied situations in accordance with these factors. Designers often tend to construct spaces from their own perspectives and observations, sensing the space from their own range of angles rather than the user&rsquo;s. It is, therefore, necessary to understand the potential of the space through these different factors and to improve the quality of space by creating better interactive spaces. For an interaction to occur between the user and space, there is a need for some medium. In this paper, light, color, and sound are used as the mediums to understand and create interactions between the user and space, considering these to be the primary sources that do not require any physical touch in the space and that help trigger the human senses. This paper studies the impact of light, color, and sound on the user across different typologies of spaces through findings, articles, case studies, and surveys, and tries to identify links between these three mediums that create interaction. It also examines which medium takes the upper hand in varied typologies of spaces and identifies techniques that would create interactions between the user and space with the help of light, color, and sound. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=color" title="color">color</a>, <a href="https://publications.waset.org/abstracts/search?q=communicative%20spaces" title=" communicative spaces"> communicative spaces</a>, <a href="https://publications.waset.org/abstracts/search?q=human%20factors" title=" human factors"> human factors</a>, <a href="https://publications.waset.org/abstracts/search?q=interactive%20spaces" title=" interactive spaces"> interactive spaces</a>, <a href="https://publications.waset.org/abstracts/search?q=light" title=" light"> light</a>, <a href="https://publications.waset.org/abstracts/search?q=sound" title=" sound"> sound</a> </p> <a href="https://publications.waset.org/abstracts/81994/understanding-the-impact-of-ambience-acoustics-and-chroma-on-user-experience-through-different-mediums-and-study-scenarios" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/81994.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">211</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">13441</span> The Application of System Approach to Knowledge Management and Human Resource Management Evidence from Tehran Municipality</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Vajhollah%20Ghorbanizadeh">Vajhollah Ghorbanizadeh</a>, <a href="https://publications.waset.org/abstracts/search?q=Seyed%20Mohsen%20Asadi"> Seyed Mohsen Asadi</a>, <a href="https://publications.waset.org/abstracts/search?q=Mirali%20Seyednaghavi"> Mirali Seyednaghavi</a>, <a href="https://publications.waset.org/abstracts/search?q=Davoud%20Hoseynpour"> Davoud Hoseynpour</a> </p> <p 
class="card-text"><strong>Abstract:</strong></p> In the current era, all organizations need knowledge to be able to manage diverse human resources. Creative, dynamic, and knowledge-based human resources are an important competitive advantage and the scarcest resource in today's knowledge-based economy. In addition, managers with knowledge management skills must be aware of human resource management science. It is now generally accepted that successful implementation of knowledge management requires dynamic interaction between knowledge management and human resource management; this is emphasized in the systems approach to knowledge management as well. Human resource management can also complement knowledge management, since it aims to empower human resources as the key organizational resource of the 21st century. Thus, knowledge is the major capital of every organization, and it is developed through the process of knowledge management. In this context, knowledge management is a systematic approach to creating, receiving, organizing, accessing, and using knowledge and learning in the organization. This article aims to define and explain the concepts of knowledge management and human resource management and the importance of these processes and concepts. Literature related to knowledge management, human resource management, and related topics was studied; a theoretical model was then designed and illustrated to explain the factors affecting the relationship between knowledge management and human resource management under a systems approach to knowledge management. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=systemic%20approach" title="systemic approach">systemic approach</a>, <a href="https://publications.waset.org/abstracts/search?q=human%20resources" title=" human resources"> human resources</a>, <a href="https://publications.waset.org/abstracts/search?q=knowledge" title=" knowledge"> knowledge</a>, <a href="https://publications.waset.org/abstracts/search?q=human%20resources%20management" title=" human resources management"> human resources management</a>, <a href="https://publications.waset.org/abstracts/search?q=knowledge%20management" title=" knowledge management"> knowledge management</a> </p> <a href="https://publications.waset.org/abstracts/42223/the-application-of-system-approach-to-knowledge-management-and-human-resource-management-evidence-from-tehran-municipality" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/42223.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">376</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">13440</span> Static and Dynamic Hand Gesture Recognition Using Convolutional Neural Network Models</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Keyi%20Wang">Keyi Wang</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Similar to the touchscreen, hand gesture based human-computer interaction (HCI) is a technology that could allow people to perform a variety of tasks faster and more conveniently. This paper proposes a training method of an image-based hand gesture image and video clip recognition system using a CNN (Convolutional Neural Network) with a dataset. 
A dataset containing 6 hand gesture images is used to train a 2D CNN model. ~98% accuracy is achieved. Furthermore, a 3D CNN model is trained on a dataset containing 4 hand gesture video clips resulting in ~83% accuracy. It is demonstrated that a Cozmo robot loaded with pre-trained models is able to recognize static and dynamic hand gestures. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=deep%20learning" title="deep learning">deep learning</a>, <a href="https://publications.waset.org/abstracts/search?q=hand%20gesture%20recognition" title=" hand gesture recognition"> hand gesture recognition</a>, <a href="https://publications.waset.org/abstracts/search?q=computer%20vision" title=" computer vision"> computer vision</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20processing" title=" image processing"> image processing</a> </p> <a href="https://publications.waset.org/abstracts/132854/static-and-dynamic-hand-gesture-recognition-using-convolutional-neural-network-models" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/132854.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">139</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">13439</span> Survey of the Role of Contextualism in the Designing of Cultural Constructions Based on Rapoport Views</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=E.%20Zarei">E. Zarei</a>, <a href="https://publications.waset.org/abstracts/search?q=M.%20Bazaei"> M. Bazaei</a>, <a href="https://publications.waset.org/abstracts/search?q=A.%20Seifi"> A. 
Seifi</a>, <a href="https://publications.waset.org/abstracts/search?q=A.%20Keshavarzi"> A. Keshavarzi</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Amos Rapoport, based on his anthropological approach, believed that space originates from the human body and that the two influence each other mutually. As a holistic approach in architecture, Contextualism describes a collection of views in philosophy which emphasize the context in which an action, utterance, or expression occurs, and argues that, in some important respect, the action, utterance, or expression can only be understood relative to that context. In this study, the main goal – examining the role of the cultural component in shaping Contextualist construction, based on Amos Rapoport’s anthropological approach – has been pursued through a descriptive-analytic method. The results of the research indicate that, in Contextualist design, the cultural aspects of a construction are as necessary as its physical dimensions. Rapoport believes that the shape of a construction is influenced by cultural aspects, and he suggests a kind of mutual interaction between human and environment that should be considered in housing. The main goal of contextual architecture is to establish an interaction between environment, human, and culture; according to this approach, a desirable design should be in harmony with all three. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=Amos%20Rapoport" title="Amos Rapoport">Amos Rapoport</a>, <a href="https://publications.waset.org/abstracts/search?q=anthropology" title=" anthropology"> anthropology</a>, <a href="https://publications.waset.org/abstracts/search?q=contextual%20architecture" title=" contextual architecture"> contextual architecture</a>, <a href="https://publications.waset.org/abstracts/search?q=culture" title=" culture"> culture</a> </p> <a href="https://publications.waset.org/abstracts/36919/survey-of-the-role-of-contextualism-in-the-designing-of-cultural-constructions-based-on-rapoport-views" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/36919.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">400</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">13438</span> Human Security and Human Trafficking Related Corruption</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Ekin%20D.%20Horzum">Ekin D. Horzum</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The aim of the proposal is to examine the relationship between human trafficking related corruption and human security. The proposal suggests that the human trafficking related corruption is about willingness of the states to turn a blind eye to the human trafficking cases. Therefore, it is important to approach human trafficking related corruption in terms of human security and human rights violation to find an effective way to fight against human trafficking. 
In this context, the purpose of this proposal is to examine the human trafficking related corruption as a safe haven in which trafficking thrives for perpetrators. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=human%20trafficking" title="human trafficking">human trafficking</a>, <a href="https://publications.waset.org/abstracts/search?q=human%20security" title=" human security"> human security</a>, <a href="https://publications.waset.org/abstracts/search?q=human%20rights" title=" human rights"> human rights</a>, <a href="https://publications.waset.org/abstracts/search?q=corruption" title=" corruption"> corruption</a>, <a href="https://publications.waset.org/abstracts/search?q=organized%20crime" title=" organized crime"> organized crime</a> </p> <a href="https://publications.waset.org/abstracts/5546/human-security-and-human-trafficking-related-corruption" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/5546.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">475</span> </span> </div> </div> <ul class="pagination"> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=computer%20human%20interaction&amp;page=2" rel="prev">&lsaquo;</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=computer%20human%20interaction&amp;page=1">1</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=computer%20human%20interaction&amp;page=2">2</a></li> <li class="page-item active"><span class="page-link">3</span></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=computer%20human%20interaction&amp;page=4">4</a></li> <li class="page-item"><a class="page-link" 
href="https://publications.waset.org/abstracts/search?q=computer%20human%20interaction&amp;page=5">5</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=computer%20human%20interaction&amp;page=6">6</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=computer%20human%20interaction&amp;page=7">7</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=computer%20human%20interaction&amp;page=8">8</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=computer%20human%20interaction&amp;page=9">9</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=computer%20human%20interaction&amp;page=10">10</a></li> <li class="page-item disabled"><span class="page-link">...</span></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=computer%20human%20interaction&amp;page=450">450</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=computer%20human%20interaction&amp;page=451">451</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=computer%20human%20interaction&amp;page=4" rel="next">&rsaquo;</a></li> </ul> </div> </main> <footer> <div id="infolinks" class="pt-3 pb-2"> <div class="container"> <div style="background-color:#f5f5f5;" class="p-3"> <div class="row"> <div class="col-md-2"> <ul class="list-unstyled"> About <li><a href="https://waset.org/page/support">About Us</a></li> <li><a href="https://waset.org/page/support#legal-information">Legal</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/WASET-16th-foundational-anniversary.pdf">WASET celebrates its 16th foundational anniversary</a></li> </ul> </div> <div 
class="col-md-2"> <ul class="list-unstyled"> Account <li><a href="https://waset.org/profile">My Account</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Explore <li><a href="https://waset.org/disciplines">Disciplines</a></li> <li><a href="https://waset.org/conferences">Conferences</a></li> <li><a href="https://waset.org/conference-programs">Conference Program</a></li> <li><a href="https://waset.org/committees">Committees</a></li> <li><a href="https://publications.waset.org">Publications</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Research <li><a href="https://publications.waset.org/abstracts">Abstracts</a></li> <li><a href="https://publications.waset.org">Periodicals</a></li> <li><a href="https://publications.waset.org/archive">Archive</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Open Science <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Science-Philosophy.pdf">Open Science Philosophy</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Science-Award.pdf">Open Science Award</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Society-Open-Science-and-Open-Innovation.pdf">Open Innovation</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Postdoctoral-Fellowship-Award.pdf">Postdoctoral Fellowship Award</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Scholarly-Research-Review.pdf">Scholarly Research Review</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Support <li><a href="https://waset.org/page/support">Support</a></li> <li><a href="https://waset.org/profile/messages/create">Contact Us</a></li> <li><a href="https://waset.org/profile/messages/create">Report Abuse</a></li> </ul> </div> </div> </div> </div> </div> <div 
class="container text-center"> <hr style="margin-top:0;margin-bottom:.3rem;"> <a href="https://creativecommons.org/licenses/by/4.0/" target="_blank" class="text-muted small">Creative Commons Attribution 4.0 International License</a> <div id="copy" class="mt-2">&copy; 2024 World Academy of Science, Engineering and Technology</div> </div> </footer> <a href="javascript:" id="return-to-top"><i class="fas fa-arrow-up"></i></a> <div class="modal" id="modal-template"> <div class="modal-dialog"> <div class="modal-content"> <div class="row m-0 mt-1"> <div class="col-md-12"> <button type="button" class="close" data-dismiss="modal" aria-label="Close"><span aria-hidden="true">&times;</span></button> </div> </div> <div class="modal-body"></div> </div> </div> </div> <script src="https://cdn.waset.org/static/plugins/jquery-3.3.1.min.js"></script> <script src="https://cdn.waset.org/static/plugins/bootstrap-4.2.1/js/bootstrap.bundle.min.js"></script> <script src="https://cdn.waset.org/static/js/site.js?v=150220211556"></script> <script> jQuery(document).ready(function() { /*jQuery.get("https://publications.waset.org/xhr/user-menu", function (response) { jQuery('#mainNavMenu').append(response); });*/ jQuery.get({ url: "https://publications.waset.org/xhr/user-menu", cache: false }).then(function(response){ jQuery('#mainNavMenu').append(response); }); }); </script> </body> </html>
