
Search results for: computer graphics

href="https://publications.waset.org/abstracts">Abstracts</a> <a class="dropdown-item" href="https://publications.waset.org">Periodicals</a> <a class="dropdown-item" href="https://publications.waset.org/archive">Archive</a> </div> </li> <li class="nav-item"> <a class="nav-link" href="https://waset.org/page/support" title="Support">Support</a> </li> </ul> </div> </div> </nav> </div> </header> <main> <div class="container mt-4"> <div class="row"> <div class="col-md-9 mx-auto"> <form method="get" action="https://publications.waset.org/abstracts/search"> <div id="custom-search-input"> <div class="input-group"> <i class="fas fa-search"></i> <input type="text" class="search-query" name="q" placeholder="Author, Title, Abstract, Keywords" value="computer graphics"> <input type="submit" class="btn_search" value="Search"> </div> </div> </form> </div> </div> <div class="row mt-3"> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Commenced</strong> in January 2007</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Frequency:</strong> Monthly</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Edition:</strong> International</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Paper Count:</strong> 2457</div> </div> </div> </div> <h1 class="mt-3 mb-3 text-center" style="font-size:1.6rem;">Search results for: computer graphics</h1> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2457</span> Proposal of a Virtual Reality Dynamism Augmentation Method for Sports Spectating</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Hertzog%20Clara">Hertzog Clara</a>, <a href="https://publications.waset.org/abstracts/search?q=Sakurai%20Sho"> Sakurai Sho</a>, <a href="https://publications.waset.org/abstracts/search?q=Hirota%20Koichi"> Hirota Koichi</a>, <a href="https://publications.waset.org/abstracts/search?q=Nojima%20Takuya"> Nojima Takuya</a> </p> <p class="card-text"><strong>Abstract:</strong></p> It is common to see graphics appearing on television while watching a sports game to provide information, but it is less common to see graphics specifically aiming to boost spectators’ dynamism perception. It is even less common to see such graphics designed especially for virtual reality (VR). However, it appears that even with simple dynamic graphics, it would be possible to improve VR sports spectators’ experience. So, in this research, we explain how graphics can be used in VR to improve the dynamism of a broadcasted sports game and we provide a simple example. This example consists in a white halo displayed around the video and blinking according to the game speed. We hope to increase people’s awareness about VR sports spectating and the possibilities this display offers through dynamic graphics. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=broadcasting" title="broadcasting">broadcasting</a>, <a href="https://publications.waset.org/abstracts/search?q=graphics" title=" graphics"> graphics</a>, <a href="https://publications.waset.org/abstracts/search?q=sports%20spectating" title=" sports spectating"> sports spectating</a>, <a href="https://publications.waset.org/abstracts/search?q=virtual%20reality" title=" virtual reality"> virtual reality</a> </p> <a href="https://publications.waset.org/abstracts/154152/proposal-of-a-virtual-reality-dynamism-augmentation-method-for-sports-spectating" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/154152.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">90</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2456</span> Examining the Functional and Practical Aspects of Iranian Painting as a Visual-Identity Language in Iranian Graphics</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Arezoo%20Seifollahi">Arezoo Seifollahi</a> </p> <p class="card-text"><strong>Abstract:</strong></p> One of the topics that is receiving a lot of attention in artistic circles and among Iran today and has been the subject of many conversations is the issue of Iranian graphics. In this research, the functional and practical aspects of Iranian painting as a visual-identity language in Iranian graphics have been investigated by relying on Iranian cultural and social posters in order to gain an understanding of the trend of contemporary graphic art in Iran and to help us reach the identity of graphics. In order to arrive at Iranian graphics, first, the issue of identity and what it is has been examined, and then this category has been addressed in Iran and throughout the history of this country in order to reveal the characteristics of the identity that has come to us today under the name of Iranian identity cognition. In the following, the search for Iranian identity in the art of this land, especially the art of painting, and then the art of contemporary painting and the search for identity in it have been discussed. After that, Iranian identity has been investigated in Iranian graphics. To understand Iranian graphics, after a brief description of its contemporary history, this art is examined at the considered time point. By using the inductive method of examining the posters of each course and taking into account the related cultural and social conditions, we tried to get a general and comprehensive understanding of the graphic features of each course. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=Iranian%20painting" title="Iranian painting">Iranian painting</a>, <a href="https://publications.waset.org/abstracts/search?q=graphic%20visual%20language" title=" graphic visual language"> graphic visual language</a>, <a href="https://publications.waset.org/abstracts/search?q=Iranian%20identity" title=" Iranian identity"> Iranian identity</a>, <a href="https://publications.waset.org/abstracts/search?q=social%20cultural%20poster" title=" social cultural poster"> social cultural poster</a> </p> <a href="https://publications.waset.org/abstracts/185508/examining-the-functional-and-practical-aspects-of-iranian-painting-as-a-visual-identity-language-in-iranian-graphics" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/185508.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">51</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2455</span> Android Graphics System: Study of Dual-Software VSync Synchronization Architecture and Optimization</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Prafulla%20Kumar%20Choubey">Prafulla Kumar Choubey</a>, <a href="https://publications.waset.org/abstracts/search?q=Krishna%20Kishor%20Jha"> Krishna Kishor Jha</a>, <a href="https://publications.waset.org/abstracts/search?q=S.%20B.%20Vaisakh%20Punnekkattu%20Chirayil"> S. B. Vaisakh Punnekkattu Chirayil</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In Graphics-display subsystem, frame buffers are shared between producer i.e. content rendering and consumer i.e. display. If a common buffer is operated by both producer and consumer simultaneously, their processing rates mismatch can cause tearing effect in displayed content. Therefore, Android OS employs triple buffered system, taking in to account an additional composition stage. Three stages-rendering, composition and display refresh, operate synchronously on three different buffers, which is achieved by using vsync pulses. This synchronization, however, brings in to the pipeline an additional latency of up to 26ms. The present study details about the existing synchronization mechanism of android graphics-display pipeline and discusses a new adaptive architecture which reduces the wait time to 5ms-16ms in all the use-cases. The proposed method uses two adaptive software vsyncs (PLL) for achieving the same result. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=Android%20graphics%20system" title="Android graphics system">Android graphics system</a>, <a href="https://publications.waset.org/abstracts/search?q=vertical%20synchronization" title=" vertical synchronization"> vertical synchronization</a>, <a href="https://publications.waset.org/abstracts/search?q=atrace" title=" atrace"> atrace</a>, <a href="https://publications.waset.org/abstracts/search?q=adaptive%20system" title=" adaptive system"> adaptive system</a> </p> <a href="https://publications.waset.org/abstracts/38338/android-graphics-system-study-of-dual-software-vsync-synchronization-architecture-and-optimization" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/38338.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">314</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2454</span> Combining Real Actors with Virtual Sets: The Future of Immersive Virtual Reality Fiction Cinema</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Nefeli%20Dimitriadi">Nefeli Dimitriadi</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This paper aims to present immersive cinema where real actors are filmed and integrated in Virtual Reality environments and 360 cinematic narrative, in comparison to 360 filming of real actors and sets and to fully computer graphics animation movies with 3D avatars. Objectives: This reseach aims to present immersive cinema where real actors are integrated in Virrual Reality environments and 360 cinematic narrative as the future of immersive cinema. Meghdology: A comparative analysis is conducted between real actors filming combined with Virtual Reality sets, to 360 filming of real actors and sets, and to fully computer graphics animation movies with 3D avatars, using as case study Virtual Reality movie Neurosynapses and others. Contribution: This reseach contributes in defining the best practices leading to impactful Immersive cinematic narratives. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=virtual%20reality" title="virtual reality">virtual reality</a>, <a href="https://publications.waset.org/abstracts/search?q=360%20movies" title=" 360 movies"> 360 movies</a>, <a href="https://publications.waset.org/abstracts/search?q=immersive%20cinema" title=" immersive cinema"> immersive cinema</a>, <a href="https://publications.waset.org/abstracts/search?q=directing%20for%20virtual%20reality" title=" directing for virtual reality"> directing for virtual reality</a> </p> <a href="https://publications.waset.org/abstracts/156081/combining-real-actors-with-virtual-sets-the-future-of-immersive-virtual-reality-fiction-cinema" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/156081.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">120</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2453</span> Hearing Aids Maintenance Training for Hearing-Impaired Preschool Children with the Help of Motion Graphic Tools</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=M.%20Mokhtarzadeh">M. Mokhtarzadeh</a>, <a href="https://publications.waset.org/abstracts/search?q=M.%20Taheri%20Qomi"> M. Taheri Qomi</a>, <a href="https://publications.waset.org/abstracts/search?q=M.%20Nikafrooz"> M. Nikafrooz</a>, <a href="https://publications.waset.org/abstracts/search?q=A.%20Atashafrooz"> A. Atashafrooz</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The purpose of the present study was to investigate the effectiveness of using motion graphics as a learning medium on training hearing aids maintenance skills to hearing-impaired children. The statistical population of this study consisted of all children with hearing loss in Ahvaz city, at age 4 to 7 years old. As the sample, 60, whom were selected by multistage random sampling, were randomly assigned to two groups; experimental (30 children) and control (30 children) groups. The research method was experimental and the design was pretest-posttest with the control group. The intervention consisted of a 2-minute motion graphics clip to train hearing aids maintenance skills. Data were collected using a 9-question researcher-made questionnaire. The data were analyzed by using one-way analysis of covariance. Results showed that the training of hearing aids maintenance skills with motion graphics was significantly effective for those children. The results of this study can be used by educators, teachers, professionals, and parents to train children with disabilities or normal students. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=hearing%20aids" title="hearing aids">hearing aids</a>, <a href="https://publications.waset.org/abstracts/search?q=hearing%20aids%20maintenance%20skill" title=" hearing aids maintenance skill"> hearing aids maintenance skill</a>, <a href="https://publications.waset.org/abstracts/search?q=hearing%20impaired%20children" title=" hearing impaired children"> hearing impaired children</a>, <a href="https://publications.waset.org/abstracts/search?q=motion%20graphics" title=" motion graphics"> motion graphics</a> </p> <a href="https://publications.waset.org/abstracts/124635/hearing-aids-maintenance-training-for-hearing-impaired-preschool-children-with-the-help-of-motion-graphic-tools" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/124635.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">158</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2452</span> Classification of Computer Generated Images from Photographic Images Using Convolutional Neural Networks</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Chaitanya%20Chawla">Chaitanya Chawla</a>, <a href="https://publications.waset.org/abstracts/search?q=Divya%20Panwar"> Divya Panwar</a>, <a href="https://publications.waset.org/abstracts/search?q=Gurneesh%20Singh%20Anand"> Gurneesh Singh Anand</a>, <a href="https://publications.waset.org/abstracts/search?q=M.%20P.%20S%20Bhatia"> M. P. S Bhatia</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This paper presents a deep-learning mechanism for classifying computer generated images and photographic images. The proposed method accounts for a convolutional layer capable of automatically learning correlation between neighbouring pixels. In the current form, Convolutional Neural Network (CNN) will learn features based on an image&#39;s content instead of the structural features of the image. The layer is particularly designed to subdue an image&#39;s content and robustly learn the sensor pattern noise features (usually inherited from image processing in a camera) as well as the statistical properties of images. The paper was assessed on latest natural and computer generated images, and it was concluded that it performs better than the current state of the art methods. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=image%20forensics" title="image forensics">image forensics</a>, <a href="https://publications.waset.org/abstracts/search?q=computer%20graphics" title=" computer graphics"> computer graphics</a>, <a href="https://publications.waset.org/abstracts/search?q=classification" title=" classification"> classification</a>, <a href="https://publications.waset.org/abstracts/search?q=deep%20learning" title=" deep learning"> deep learning</a>, <a href="https://publications.waset.org/abstracts/search?q=convolutional%20neural%20networks" title=" convolutional neural networks"> convolutional neural networks</a> </p> <a href="https://publications.waset.org/abstracts/95266/classification-of-computer-generated-images-from-photographic-images-using-convolutional-neural-networks" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/95266.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">336</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2451</span> Closing the Gap: Efficient Voxelization with Equidistant Scanlines and Gap Detection</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=S.%20Delgado">S. Delgado</a>, <a href="https://publications.waset.org/abstracts/search?q=C.%20Cerrada"> C. Cerrada</a>, <a href="https://publications.waset.org/abstracts/search?q=R.%20S.%20G%C3%B3mez"> R. S. Gómez</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This research introduces an approach to voxelizing the surfaces of triangular meshes with efficiency and accuracy. Our method leverages parallel equidistant scan-lines and introduces a Gap Detection technique to address the limitations of existing approaches. We present a comprehensive study showcasing the method's effectiveness, scalability, and versatility in different scenarios. Voxelization is a fundamental process in computer graphics and simulations, playing a pivotal role in applications ranging from scientific visualization to virtual reality. Our algorithm focuses on enhancing the voxelization process, especially for complex models and high resolutions. One of the major challenges in voxelization in the Graphics Processing Unit (GPU) is the high cost of discovering the same voxels multiple times. These repeated voxels incur in costly memory operations with no useful information. Our scan-line-based method ensures that each voxel is detected exactly once when processing the triangle, enhancing performance without compromising the quality of the voxelization. The heart of our approach lies in the use of parallel, equidistant scan-lines to traverse the interiors of triangles. This minimizes redundant memory operations and avoids revisiting the same voxels, resulting in a significant performance boost. Moreover, our method's computational efficiency is complemented by its simplicity and portability. Written as a single compute shader in Graphics Library Shader Language (GLSL), it is highly adaptable to various rendering pipelines and hardware configurations. To validate our method, we conducted extensive experiments on a diverse set of models from the Stanford repository. 
Our results demonstrate not only the algorithm's efficiency but also its ability to produce accurate, 26-tunnel-free voxelizations. The Gap Detection technique successfully identifies and addresses gaps, ensuring consistent and visually pleasing voxelized surfaces. Furthermore, we introduce the Slope Consistency Value metric, which quantifies the alignment of each triangle with its primary axis. This metric provides insight into the impact of triangle orientation on scan-line-based voxelization methods and helps explain how the Gap Detection technique improves results by targeting the specific areas where simple scan-line-based methods may fail. Our research contributes to the field of voxelization by offering a robust and efficient approach that overcomes the limitations of existing methods: by addressing these gaps, the algorithm enhances the visual quality and accuracy of voxelized models, making it valuable for a wide range of applications. In conclusion, "Closing the Gap: Efficient Voxelization with Equidistant Scan-lines and Gap Detection" presents an effective solution to the challenges of voxelization, combining computational efficiency, accuracy, and novel techniques to elevate the quality of voxelized surfaces. With its adaptable nature, this technique could have a positive influence on computer graphics and visualization.
Keywords: voxelization, GPU acceleration, computer graphics, compute shaders
Procedia: https://publications.waset.org/abstracts/171908/closing-the-gap-efficient-voxelization-with-equidistant-scanlines-and-gap-detection | PDF: https://publications.waset.org/abstracts/171908.pdf | Downloads: 73
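
The following simplified CPU sketch illustrates only the scan-line idea; the paper's method is a single GLSL compute shader with a Gap Detection pass, neither of which is reproduced here, and the function name, oversampling factor, and test triangle are assumptions for the example.

```python
# Sweep parallel, equidistant scan-lines across a triangle's interior and
# quantise the sampled points to voxel indices, visiting voxels via samples
# rather than re-testing a whole bounding box. No gap-detection pass here.
import numpy as np

def voxelize_triangle(v0, v1, v2, voxel_size):
    v0, v1, v2 = (np.asarray(v, dtype=float) for v in (v0, v1, v2))
    voxels = set()
    # Enough scan-lines that consecutive lines are well under one voxel apart
    # along the v0->v2 and v1->v2 edges (factor 2 is a crude oversampling).
    n_lines = int(max(np.linalg.norm(v2 - v0), np.linalg.norm(v2 - v1)) / voxel_size) * 2 + 1
    for i in range(n_lines + 1):
        t = i / n_lines
        a = v0 + t * (v2 - v0)          # point on edge v0-v2
        b = v1 + t * (v2 - v1)          # point on edge v1-v2; a-b is one scan-line
        n_samples = int(np.linalg.norm(b - a) / voxel_size) * 2 + 1
        for j in range(n_samples + 1):
            p = a + (j / n_samples) * (b - a)
            voxels.add(tuple(np.floor(p / voxel_size).astype(int)))
    return voxels

tri = ([0.0, 0.0, 0.0], [4.0, 0.0, 0.0], [0.0, 3.0, 2.0])
print(len(voxelize_triangle(*tri, voxel_size=0.5)), "surface voxels")
```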

2450. Computational Tool for Surface Electromyography Analysis; an Easy Way for Non-Engineers
Authors: Fabiano Araujo Soares, Sauro Emerick Salomoni, Joao Paulo Lima da Silva, Igor Luiz Moura, Adson Ferreira da Rocha
Abstract: This paper presents a tool developed on the MATLAB platform to simplify the analysis of surface electromyography (S-EMG) signals in a way that is accessible to users who are not familiar with signal-processing procedures. The tool receives data through fields in its windows and generates results as graphics and Excel tables. The underlying mathematics of each S-EMG estimator is presented, along with the setup window and the resulting graphics. The tool was given to four non-engineer users, and all of them managed to use it appropriately after a five-minute instruction period.
Keywords: S-EMG estimators, electromyography, surface electromyography, ARV, RMS, MDF, MNF, CV
Procedia: https://publications.waset.org/abstracts/33667/computational-tool-for-surface-electromyography-analysis-an-easy-way-for-non-engineers | PDF: https://publications.waset.org/abstracts/33667.pdf | Downloads: 559
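
For readers unfamiliar with the estimators named in the keywords, the sketch below computes the standard textbook definitions of ARV, RMS, MNF, and MDF on a synthetic signal; it is not the authors' MATLAB tool, and the sampling rate and test signal are assumptions (conduction velocity, CV, requires multi-channel data and is omitted).

```python
# Standard amplitude and spectral S-EMG estimators on a synthetic signal.
import numpy as np

def semg_estimators(x, fs):
    x = np.asarray(x, dtype=float)
    arv = np.mean(np.abs(x))                  # average rectified value
    rms = np.sqrt(np.mean(x ** 2))            # root mean square
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    power = np.abs(np.fft.rfft(x)) ** 2       # power spectrum
    mnf = np.sum(freqs * power) / np.sum(power)                     # mean frequency
    cumulative = np.cumsum(power)
    mdf = freqs[np.searchsorted(cumulative, cumulative[-1] / 2.0)]  # median frequency
    return {"ARV": arv, "RMS": rms, "MNF": mnf, "MDF": mdf}

fs = 2000.0                                   # sampling rate in Hz (assumed)
t = np.arange(0, 1.0, 1.0 / fs)
rng = np.random.default_rng(0)
signal = np.sin(2 * np.pi * 80 * t) + 0.5 * rng.standard_normal(t.size)
print(semg_estimators(signal, fs))
```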

2449. Acceleration of Lagrangian and Eulerian Flow Solvers via Graphics Processing Units
Authors: Pooya Niksiar, Ali Ashrafizadeh, Mehrzad Shams, Amir Hossein Madani
Abstract: Many computationally demanding applications in science and engineering need efficient algorithms implemented on high-performance computers. Recently, graphics processing units (GPUs) have drawn much attention compared with traditional CPU-based hardware and have opened new avenues for improvement in scientific computing. One particular application area is computational fluid dynamics (CFD), in which mature CPU-based codes need to be converted to GPU-based algorithms to take advantage of this technology. In this paper, numerical solutions of two classes of discrete fluid-flow models on both CPU and GPU are discussed and compared. The test problems include an Eulerian model of a two-dimensional incompressible laminar flow and a Lagrangian model of a two-phase flow field. The CUDA programming model is used to employ an NVIDIA GPU with 480 cores, and a serial C++ code is run on a single core of an Intel quad-core CPU. Speed-ups of up to two orders of magnitude are observed on the GPU for a certain range of grid resolutions and particle numbers. As expected, the Lagrangian formulation is better suited to parallel computation on the GPU, although the Eulerian formulation also shows a significant speed-up.
Keywords: CFD, Eulerian formulation, graphics processing units, Lagrangian formulation
Procedia: https://publications.waset.org/abstracts/4118/acceleration-of-lagrangian-and-eulerian-flow-solvers-via-graphics-processing-units | PDF: https://publications.waset.org/abstracts/4118.pdf | Downloads: 416
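
As a rough illustration of why such Eulerian solvers parallelise well, the NumPy sketch below (CPU only, not the paper's CUDA/C++ code) writes one Jacobi relaxation sweep of a 2D Laplace problem both as a cell-by-cell loop and as a whole-grid update; because every interior cell depends only on the previous iterate, each cell can be mapped to its own GPU thread.

```python
# One Jacobi relaxation sweep written two ways; both give identical results.
import numpy as np

def jacobi_step_loops(u):
    """Cell-by-cell update, the way a serial CPU code walks the grid."""
    new = u.copy()
    for i in range(1, u.shape[0] - 1):
        for j in range(1, u.shape[1] - 1):
            new[i, j] = 0.25 * (u[i - 1, j] + u[i + 1, j] + u[i, j - 1] + u[i, j + 1])
    return new

def jacobi_step_vectorized(u):
    """Whole-grid update in one shot; each cell is independent of the others."""
    new = u.copy()
    new[1:-1, 1:-1] = 0.25 * (u[:-2, 1:-1] + u[2:, 1:-1] + u[1:-1, :-2] + u[1:-1, 2:])
    return new

u = np.zeros((128, 128))
u[0, :] = 1.0                               # fixed boundary condition
assert np.allclose(jacobi_step_loops(u), jacobi_step_vectorized(u))
```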
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=shaft%20of%20lights" title="shaft of lights">shaft of lights</a>, <a href="https://publications.waset.org/abstracts/search?q=realistic%20images" title=" realistic images"> realistic images</a>, <a href="https://publications.waset.org/abstracts/search?q=image-based" title=" image-based"> image-based</a>, <a href="https://publications.waset.org/abstracts/search?q=and%20geometric-based" title=" and geometric-based"> and geometric-based</a> </p> <a href="https://publications.waset.org/abstracts/46822/a-review-on-light-shafts-rendering-for-indoor-scenes" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/46822.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">279</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2447</span> Autism Disease Detection Using Transfer Learning Techniques: Performance Comparison between Central Processing Unit vs. Graphics Processing Unit Functions for Neural Networks</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Mst%20Shapna%20Akter">Mst Shapna Akter</a>, <a href="https://publications.waset.org/abstracts/search?q=Hossain%20Shahriar"> Hossain Shahriar</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Neural network approaches are machine learning methods used in many domains, such as healthcare and cyber security. Neural networks are mostly known for dealing with image datasets. While training with the images, several fundamental mathematical operations are carried out in the Neural Network. The operation includes a number of algebraic and mathematical functions, including derivative, convolution, and matrix inversion and transposition. Such operations require higher processing power than is typically needed for computer usage. Central Processing Unit (CPU) is not appropriate for a large image size of the dataset as it is built with serial processing. While Graphics Processing Unit (GPU) has parallel processing capabilities and, therefore, has higher speed. This paper uses advanced Neural Network techniques such as VGG16, Resnet50, Densenet, Inceptionv3, Xception, Mobilenet, XGBOOST-VGG16, and our proposed models to compare CPU and GPU resources. A system for classifying autism disease using face images of an autistic and non-autistic child was used to compare performance during testing. We used evaluation matrices such as Accuracy, F1 score, Precision, Recall, and Execution time. It has been observed that GPU runs faster than the CPU in all tests performed. Moreover, the performance of the Neural Network models in terms of accuracy increases on GPU compared to CPU. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=autism%20disease" title="autism disease">autism disease</a>, <a href="https://publications.waset.org/abstracts/search?q=neural%20network" title=" neural network"> neural network</a>, <a href="https://publications.waset.org/abstracts/search?q=CPU" title=" CPU"> CPU</a>, <a href="https://publications.waset.org/abstracts/search?q=GPU" title=" GPU"> GPU</a>, <a href="https://publications.waset.org/abstracts/search?q=transfer%20learning" title=" transfer learning"> transfer learning</a> </p> <a href="https://publications.waset.org/abstracts/160218/autism-disease-detection-using-transfer-learning-techniques-performance-comparison-between-central-processing-unit-vs-graphics-processing-unit-functions-for-neural-networks" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/160218.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">118</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2446</span> Generative Adversarial Network for Bidirectional Mappings between Retinal Fundus Images and Vessel Segmented Images</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Haoqi%20Gao">Haoqi Gao</a>, <a href="https://publications.waset.org/abstracts/search?q=Koichi%20Ogawara"> Koichi Ogawara</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Retinal vascular segmentation of color fundus is the basis of ophthalmic computer-aided diagnosis and large-scale disease screening systems. Early screening of fundus diseases has great value for clinical medical diagnosis. The traditional methods depend on the experience of the doctor, which is time-consuming, labor-intensive, and inefficient. Furthermore, medical images are scarce and fraught with legal concerns regarding patient privacy. In this paper, we propose a new Generative Adversarial Network based on CycleGAN for retinal fundus images. This method can generate not only synthetic fundus images but also generate corresponding segmentation masks, which has certain application value and challenge in computer vision and computer graphics. In the results, we evaluate our proposed method from both quantitative and qualitative. For generated segmented images, our method achieves dice coefficient of 0.81 and PR of 0.89 on DRIVE dataset. For generated synthetic fundus images, we use ”Toy Experiment” to verify the state-of-the-art performance of our method. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=retinal%20vascular%20segmentations" title="retinal vascular segmentations">retinal vascular segmentations</a>, <a href="https://publications.waset.org/abstracts/search?q=generative%20ad-versarial%20network" title=" generative ad-versarial network"> generative ad-versarial network</a>, <a href="https://publications.waset.org/abstracts/search?q=cyclegan" title=" cyclegan"> cyclegan</a>, <a href="https://publications.waset.org/abstracts/search?q=fundus%20images" title=" fundus images"> fundus images</a> </p> <a href="https://publications.waset.org/abstracts/110591/generative-adversarial-network-for-bidirectional-mappings-between-retinal-fundus-images-and-vessel-segmented-images" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/110591.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">144</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2445</span> The Influence of 3D Printing Course on Middle School Students&#039; Spatial Thinking Ability</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Wang%20Xingjuan">Wang Xingjuan</a>, <a href="https://publications.waset.org/abstracts/search?q=Qian%20Dongming"> Qian Dongming</a> </p> <p class="card-text"><strong>Abstract:</strong></p> As a common thinking ability, spatial thinking ability plays an increasingly important role in the information age. The key to cultivating students' spatial thinking ability is to cultivate students' ability to process and transform graphics. The 3D printing course enables students to constantly touch the rotation and movement of objects during the modeling process and to understand spatial graphics from different views. To this end, this article combines the classic PSVT: R test to explore the impact of 3D printing courses on the spatial thinking ability of middle school students. The results of the study found that: (1) Through the study of the 3D printing course, the students' spatial ability test scores have been significantly improved, which indirectly reflects the improvement of the spatial thinking ability level. (2) The student's spatial thinking ability test results are influenced by the parent's occupation. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=3D%20printing" title="3D printing">3D printing</a>, <a href="https://publications.waset.org/abstracts/search?q=middle%20school%20students" title=" middle school students"> middle school students</a>, <a href="https://publications.waset.org/abstracts/search?q=spatial%20thinking%20ability" title=" spatial thinking ability"> spatial thinking ability</a>, <a href="https://publications.waset.org/abstracts/search?q=influence" title=" influence"> influence</a> </p> <a href="https://publications.waset.org/abstracts/109150/the-influence-of-3d-printing-course-on-middle-school-students-spatial-thinking-ability" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/109150.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">190</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2444</span> Role of Web Graphics and Interface in Creating Visitor Trust</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Pramika%20J.%20Muthya">Pramika J. Muthya</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This paper investigates the impact of web graphics and interface design on building visitor trust in websites. A quantitative survey approach was used to examine how aesthetic and usability elements of website design influence user perceptions of trustworthiness. 133 participants aged 18-25 who live in urban Bangalore and engage in online transactions were recruited via convenience sampling. Data was collected through an online survey measuring trust levels based on website design, using validated constructs like the Visual Aesthetic of Websites Inventory (VisAWI). Statistical analysis, including ordinal regression, was conducted to analyze the results. The findings show a statistically significant relationship between web graphics and interface design and the level of trust visitors place in a website. The goodness-of-fit statistics and highly significant model fitting information provide strong evidence for rejecting the null hypothesis of no relationship. Well-designed visual aesthetics like simplicity, diversity, colorfulness, and craftsmanship are key drivers of perceived credibility. Intuitive navigation and usability also increase trust. The results emphasize the strategic importance for companies to invest in appealing graphic design, consistent with existing theoretical frameworks. There are also implications for taking a user-centric approach to web design and acknowledging the reciprocal link between pre-existing user trust and perception of visuals. While generalizable, limitations include possible sampling and self-report biases. Further research can build on these findings to deepen understanding of nuanced cultural and temporal factors influencing online trust. Overall, this study makes a significant contribution by providing empirical evidence that reinforces the crucial impact of thoughtful graphic design in fostering lasting user trust in websites. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=web%20graphics" title="web graphics">web graphics</a>, <a href="https://publications.waset.org/abstracts/search?q=interface%20design" title=" interface design"> interface design</a>, <a href="https://publications.waset.org/abstracts/search?q=visitor%20trust" title=" visitor trust"> visitor trust</a>, <a href="https://publications.waset.org/abstracts/search?q=website%20design" title=" website design"> website design</a>, <a href="https://publications.waset.org/abstracts/search?q=aesthetics" title=" aesthetics"> aesthetics</a>, <a href="https://publications.waset.org/abstracts/search?q=user%20experience" title=" user experience"> user experience</a>, <a href="https://publications.waset.org/abstracts/search?q=online%20trust" title=" online trust"> online trust</a>, <a href="https://publications.waset.org/abstracts/search?q=visual%20design" title=" visual design"> visual design</a>, <a href="https://publications.waset.org/abstracts/search?q=graphic%20design" title=" graphic design"> graphic design</a>, <a href="https://publications.waset.org/abstracts/search?q=user%20perceptions" title=" user perceptions"> user perceptions</a>, <a href="https://publications.waset.org/abstracts/search?q=user%20expectations" title=" user expectations"> user expectations</a> </p> <a href="https://publications.waset.org/abstracts/182260/role-of-web-graphics-and-interface-in-creating-visitor-trust" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/182260.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">51</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2443</span> Password Cracking on Graphics Processing Unit Based Systems</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=N.%20Gopalakrishna%20Kini">N. Gopalakrishna Kini</a>, <a href="https://publications.waset.org/abstracts/search?q=Ranjana%20Paleppady"> Ranjana Paleppady</a>, <a href="https://publications.waset.org/abstracts/search?q=Akshata%20K.%20Naik"> Akshata K. Naik</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Password authentication is one of the widely used methods to achieve authentication for legal users of computers and defense against attackers. There are many different ways to authenticate users of a system and there are many password cracking methods also developed. This paper is mainly to propose how best password cracking can be performed on a CPU-GPGPU based system. The main objective of this work is to project how quickly a password can be cracked with some knowledge about the computer security and password cracking if sufficient security is not incorporated to the system. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=GPGPU" title="GPGPU">GPGPU</a>, <a href="https://publications.waset.org/abstracts/search?q=password%20cracking" title=" password cracking"> password cracking</a>, <a href="https://publications.waset.org/abstracts/search?q=secret%20key" title=" secret key"> secret key</a>, <a href="https://publications.waset.org/abstracts/search?q=user%20authentication" title=" user authentication"> user authentication</a> </p> <a href="https://publications.waset.org/abstracts/40190/password-cracking-on-graphics-processing-unit-based-systems" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/40190.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">290</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2442</span> Use of Computer and Machine Learning in Facial Recognition</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Neha%20Singh">Neha Singh</a>, <a href="https://publications.waset.org/abstracts/search?q=Ananya%20Arora"> Ananya Arora</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Facial expression measurement plays a crucial role in the identification of emotion. Facial expression plays a key role in psychophysiology, neural bases, and emotional disorder, to name a few. The Facial Action Coding System (FACS) has proven to be the most efficient and widely used of the various systems used to describe facial expressions. Coders can manually code facial expressions with FACS and, by viewing video-recorded facial behaviour at a specified frame rate and slow motion, can decompose into action units (AUs). Action units are the most minor visually discriminable facial movements. FACS explicitly differentiates between facial actions and inferences about what the actions mean. Action units are the fundamental unit of FACS methodology. It is regarded as the standard measure for facial behaviour and finds its application in various fields of study beyond emotion science. These include facial neuromuscular disorders, neuroscience, computer vision, computer graphics and animation, and face encoding for digital processing. This paper discusses the conceptual basis for FACS, a numerical listing of discrete facial movements identified by the system, the system's psychometric evaluation, and the software's recommended training requirements. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=facial%20action" title="facial action">facial action</a>, <a href="https://publications.waset.org/abstracts/search?q=action%20units" title=" action units"> action units</a>, <a href="https://publications.waset.org/abstracts/search?q=coding" title=" coding"> coding</a>, <a href="https://publications.waset.org/abstracts/search?q=machine%20learning" title=" machine learning"> machine learning</a> </p> <a href="https://publications.waset.org/abstracts/161142/use-of-computer-and-machine-learning-in-facial-recognition" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/161142.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">106</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2441</span> Digital Media Market, Multimedia, and Computer Graphic Analysis Amidst Fluctuating Global and Local Scale Economy</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Essang%20Anwana%20Onuntuei">Essang Anwana Onuntuei</a>, <a href="https://publications.waset.org/abstracts/search?q=Chinyere%20Blessing%20Azunwoke"> Chinyere Blessing Azunwoke</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The study centred on investigating the influence of multimedia systems and computer graphic design on global and local scale economies. Firstly, the study pinpointed the significant participants and top five global digital media distribution in the digital media market. Then, the study investigated whether a tie or variance existed between the digital media vendor and market shares. Also, the paper probed whether the global and local desktop, mobile, and tablet markets differ while assessing the association between the top five digital media and global market shares. Finally, the study explored the extent of growth, economic gains, major setbacks, and opportunities within the industry amidst global and local scale economic flux. A multiple regression analysis method was employed to analyse the significant influence of the top five global digital media on the total market share, and the Analysis of Variance (ANOVA) was used to analyse the global digital media vendor market share data. The findings were intriguing and significant. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=computer%20graphics" title="computer graphics">computer graphics</a>, <a href="https://publications.waset.org/abstracts/search?q=digital%20media%20market" title=" digital media market"> digital media market</a>, <a href="https://publications.waset.org/abstracts/search?q=global%20market%20share" title=" global market share"> global market share</a>, <a href="https://publications.waset.org/abstracts/search?q=market%20size" title=" market size"> market size</a>, <a href="https://publications.waset.org/abstracts/search?q=media%20vendor" title=" media vendor"> media vendor</a>, <a href="https://publications.waset.org/abstracts/search?q=multimedia" title=" multimedia"> multimedia</a>, <a href="https://publications.waset.org/abstracts/search?q=social%20media" title=" social media"> social media</a>, <a href="https://publications.waset.org/abstracts/search?q=systems%20design" title=" systems design"> systems design</a> </p> <a href="https://publications.waset.org/abstracts/188472/digital-media-market-multimedia-and-computer-graphic-analysis-amidst-fluctuating-global-and-local-scale-economy" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/188472.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">32</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2440</span> Relational Attention Shift on Images Using Bu-Td Architecture and Sequential Structure Revealing</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Alona%20Faktor">Alona Faktor</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In this work, we present a NN-based computational model that can perform attention shifts according to high-level instruction. The instruction specifies the type of attentional shift using explicit geometrical relation. The instruction also can be of cognitive nature, specifying more complex human-human interaction or human-object interaction, or object-object interaction. Applying this approach sequentially allows obtaining a structural description of an image. A novel data-set of interacting humans and objects is constructed using a computer graphics engine. Using this data, we perform systematic research of relational segmentation shifts. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=cognitive%20science" title="cognitive science">cognitive science</a>, <a href="https://publications.waset.org/abstracts/search?q=attentin" title=" attentin"> attentin</a>, <a href="https://publications.waset.org/abstracts/search?q=deep%20learning" title=" deep learning"> deep learning</a>, <a href="https://publications.waset.org/abstracts/search?q=generalization" title=" generalization"> generalization</a> </p> <a href="https://publications.waset.org/abstracts/135787/relational-attention-shift-on-images-using-bu-td-architecture-and-sequential-structure-revealing" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/135787.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">198</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2439</span> Digital Literacy Skills for Geologist in Public Sector</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Angsumalin%20Puntho">Angsumalin Puntho</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Disruptive technology has had a great influence on our everyday lives and the existence of an organization. Geologists in the public sector need to keep up with digital technology and be able to work and collaborate in a more effective manner. The result from SWOT and 7S McKinsey analyses suggest that there are inadequate IT personnel, no individual digital literacy development plan, and a misunderstanding of management policies. The Office of Civil Service Commission develops digital literacy skills that civil servants and government officers should possess in order to work effectively; it consists of nine dimensions, including computer skills, internet skills, cyber security awareness, word processing, spreadsheets, presentation programs, online collaboration, graphics editors and cyber security practices; and six steps of digital literacy development including self-assessment, individual development plan, self-learning, certified test, learning reflection, and practices. Geologists can use digital literacy as a learning tool to develop themselves for better career opportunities. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=disruptive%20technology" title="disruptive technology">disruptive technology</a>, <a href="https://publications.waset.org/abstracts/search?q=digital%20technology" title=" digital technology"> digital technology</a>, <a href="https://publications.waset.org/abstracts/search?q=digital%20literacy" title=" digital literacy"> digital literacy</a>, <a href="https://publications.waset.org/abstracts/search?q=computer%20skills" title=" computer skills"> computer skills</a> </p> <a href="https://publications.waset.org/abstracts/152172/digital-literacy-skills-for-geologist-in-public-sector" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/152172.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">116</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2438</span> Virtual Player for Learning by Observation to Assist Karate Training</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Kazumoto%20Tanaka">Kazumoto Tanaka</a> </p> <p class="card-text"><strong>Abstract:</strong></p> It is well known that sport skill learning is facilitated by video observation of players’ actions in sports. The optimal viewpoint for the observation of actions depends on sport scenes. On the other hand, it is impossible to change viewpoint for the observation in general, because most videos are filmed from fixed points. The study has tackled the problem and focused on karate match as a first step. The study developed a method for observing karate player’s actions from any point of view by using 3D-CG model (i.e. virtual player) obtained from video images, and verified the effectiveness of the method on karate match. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=computer%20graphics" title="computer graphics">computer graphics</a>, <a href="https://publications.waset.org/abstracts/search?q=karate%20training" title=" karate training"> karate training</a>, <a href="https://publications.waset.org/abstracts/search?q=learning%20by%20observation" title=" learning by observation"> learning by observation</a>, <a href="https://publications.waset.org/abstracts/search?q=motion%20capture" title=" motion capture"> motion capture</a>, <a href="https://publications.waset.org/abstracts/search?q=virtual%20player" title=" virtual player"> virtual player</a> </p> <a href="https://publications.waset.org/abstracts/61463/virtual-player-for-learning-by-observation-to-assist-karate-training" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/61463.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">275</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2437</span> Three-Dimensional Computer Graphical Demonstration of Calcified Tissue and Its Clinical Significance</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Itsuo%20Yokoyama">Itsuo Yokoyama</a>, <a href="https://publications.waset.org/abstracts/search?q=Rikako%20Kikuti"> Rikako Kikuti</a>, <a href="https://publications.waset.org/abstracts/search?q=Miti%20Sekikawa"> Miti Sekikawa</a>, <a href="https://publications.waset.org/abstracts/search?q=Tosinori%20Asai"> Tosinori Asai</a>, <a href="https://publications.waset.org/abstracts/search?q=Sarai%20Tsuyoshi"> Sarai Tsuyoshi</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Introduction: Vascular access for hemodialysis therapy is often difficult, even for experienced medical personnel. Ultrasound guided needle placement have been performed occasionally but is not always helpful in certain cases with complicated vascular anatomy. Obtaining precise anatomical knowledge of the vascular structure is important to prevent access-related complications. With augmented reality (AR) device such as AR glasses, the virtual vascular structure is shown superimposed on the actual patient vessels, thus enabling the operator to maneuver catheter placement easily with free both hands. We herein report our method of AR guided vascular access method in dialysis treatment Methods: Three dimensional (3D) object of the arm with arteriovenous fistula is computer graphically created with 3D software from the data obtained by computer tomography, ultrasound echogram, and image scanner. The 3D vascular object thus created is viewed on the screen of the AR digital display device (such as AR glass or iPad). The picture of the vascular anatomical structure becomes visible, which is superimposed over the real patient’s arm, thereby the needle insertion be performed under the guidance of AR visualization with ease. By this method, technical difficulty in catheter placement for dialysis can be lessened and performed safely. Considerations: Virtual reality technology has been applied in various fields and medical use is not an exception. Yet AR devices have not been widely used among medical professions. 
Visualization of the virtual vascular object can be achieved by creating an accurate three-dimensional object with the help of computer graphics techniques. Although our experience is limited, the method can be applied with relative ease, and our accumulating evidence suggests that vascular access with the use of AR is promising. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=abdominal-aorta" title="abdominal-aorta">abdominal-aorta</a>, <a href="https://publications.waset.org/abstracts/search?q=calcification" title=" calcification"> calcification</a>, <a href="https://publications.waset.org/abstracts/search?q=extraskeletal" title=" extraskeletal"> extraskeletal</a>, <a href="https://publications.waset.org/abstracts/search?q=dialysis" title=" dialysis"> dialysis</a>, <a href="https://publications.waset.org/abstracts/search?q=computer%20graphics" title=" computer graphics"> computer graphics</a>, <a href="https://publications.waset.org/abstracts/search?q=3DCG" title=" 3DCG"> 3DCG</a>, <a href="https://publications.waset.org/abstracts/search?q=CT" title=" CT"> CT</a>, <a href="https://publications.waset.org/abstracts/search?q=calcium" title=" calcium"> calcium</a>, <a href="https://publications.waset.org/abstracts/search?q=phosphorus" title=" phosphorus"> phosphorus</a> </p> <a href="https://publications.waset.org/abstracts/152713/three-dimensional-computer-graphical-demonstration-of-calcified-tissue-and-its-clinical-significance" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/152713.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">164</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2436</span> Numerical Study on Parallel Rear-Spoiler on Super Cars</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Anshul%20Ashu">Anshul Ashu</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Computers are applied to vehicle aerodynamics in two ways. One is Computational Fluid Dynamics (CFD) and the other is Computer-Aided Flow Visualization (CAFV). Of the two, CFD is chosen here because it presents the results with computer graphics. The simulation of the flow field around the vehicle is one of the important CFD applications. The flow field can be solved numerically using panel methods, the k-ε method, and direct simulation methods. The spoiler is a tool in vehicle aerodynamics used to minimize unfavorable aerodynamic effects around the vehicle, and the parallel spoiler is a set of two spoilers designed in such a manner that they effectively reduce drag. In this study, the standard k-ε model of simplified versions of the Bugatti Veyron, Audi R8 and Porsche 911 is used to simulate the external flow field. Flow simulation is performed for variable Reynolds numbers. The flow simulation consists of three different levels: first over the model without a rear spoiler, second over the model with a single rear spoiler, and third over the model with the parallel rear-spoiler. The second and third levels have the following parameters: the shape of the spoiler, the angle of attack, and the attachment position. A thorough analysis of the simulation results has been carried out. 
Based on this analysis, a new parallel spoiler is designed. It shows a slight improvement in vehicle aerodynamics, with a decrease in aerodynamic drag and lift, and hence leads to better fuel economy and traction for the model. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=drag" title="drag">drag</a>, <a href="https://publications.waset.org/abstracts/search?q=lift" title=" lift"> lift</a>, <a href="https://publications.waset.org/abstracts/search?q=flow%20simulation" title=" flow simulation"> flow simulation</a>, <a href="https://publications.waset.org/abstracts/search?q=spoiler" title=" spoiler"> spoiler</a> </p> <a href="https://publications.waset.org/abstracts/31545/numerical-study-on-parallel-rear-spoiler-on-super-cars" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/31545.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">500</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2435</span> Development of a Catalogs System for Augmented Reality Applications</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=J.%20Ierache">J. Ierache</a>, <a href="https://publications.waset.org/abstracts/search?q=N.%20A.%20Mangiarua"> N. A. Mangiarua</a>, <a href="https://publications.waset.org/abstracts/search?q=S.%20A.%20Bevacqua"> S. A. Bevacqua</a>, <a href="https://publications.waset.org/abstracts/search?q=N.%20N.%20Verdicchio"> N. N. Verdicchio</a>, <a href="https://publications.waset.org/abstracts/search?q=M.%20E.%20Becerra"> M. E. Becerra</a>, <a href="https://publications.waset.org/abstracts/search?q=D.%20R.%20Sanz"> D. R. Sanz</a>, <a href="https://publications.waset.org/abstracts/search?q=M.%20E.%20Sena"> M. E. Sena</a>, <a href="https://publications.waset.org/abstracts/search?q=F.%20M.%20Ortiz"> F. M. Ortiz</a>, <a href="https://publications.waset.org/abstracts/search?q=N.%20D.%20Duarte"> N. D. Duarte</a>, <a href="https://publications.waset.org/abstracts/search?q=S.%20Igarza"> S. Igarza</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Augmented Reality is a technology that involves the overlay of virtual content, which is context or environment sensitive, on images of the physical world in real time. This paper presents the development of a catalog system that facilitates the creation, publishing, management and exploitation of augmented multimedia contents and Augmented Reality applications, creating a personal space for anyone who wants to attach information to real objects and then edit and share it online with others. These spaces would be built for different domains without the initial need for expert users. Its operation focuses on the context of Web 2.0, or the Social Web, with its various applications, developing content that enriches the real context in which human beings act and permitting the catalogs’ contents to evolve in an emergent way. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=augmented%20reality" title="augmented reality">augmented reality</a>, <a href="https://publications.waset.org/abstracts/search?q=catalog%20system" title=" catalog system"> catalog system</a>, <a href="https://publications.waset.org/abstracts/search?q=computer%20graphics" title=" computer graphics"> computer graphics</a>, <a href="https://publications.waset.org/abstracts/search?q=mobile%20application" title=" mobile application"> mobile application</a> </p> <a href="https://publications.waset.org/abstracts/20787/development-of-a-catalogs-system-for-augmented-reality-applications" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/20787.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">352</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2434</span> Optimizing SCADA/RTU Control System Alarms for Gas Wells</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Mohammed%20Ali%20Faqeeh">Mohammed Ali Faqeeh</a> </p> <p class="card-text"><strong>Abstract:</strong></p> SCADA System Alarms Optimization Process has been introduced recently and applied accordingly in different implemented stages. First, MODBUS communication protocols between RTU/SCADA were improved at the level of I/O points scanning intervals. Then, some of the technical issues related to manufacturing limitations were resolved. Afterward, another approach was followed to take a decision on the configured alarms database. So, a couple of meetings and workshops were held among all system stakeholders, which resulted in an agreement of disabling unnecessary (Diagnostic) alarms. Moreover, a leap forward step was taken to segregate the SCADA Operator Graphics in a way to show only process-related alarms while some other graphics will ensure the availability of field alarms related to maintenance and engineering purposes. This overall system management and optimization have resulted in a huge effective impact on all operations, maintenance, and engineering. It has reduced unneeded open tickets for maintenance crews which led to reduce the driven mileages accordingly. Also, this practice has shown a good impression on the operation reactions and response to the emergency situations as the SCADA operators can be staying much vigilant on the real alarms rather than gets distracted by noisy ones. SCADA System Alarms Optimization process has been executed utilizing all applicable in-house resources among engineering, maintenance, and operations crews. The methodology of the entire enhanced scopes is performed through various stages. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=SCADA" title="SCADA">SCADA</a>, <a href="https://publications.waset.org/abstracts/search?q=RTU%20Communication" title=" RTU Communication"> RTU Communication</a>, <a href="https://publications.waset.org/abstracts/search?q=alarm%20management%20system" title=" alarm management system"> alarm management system</a>, <a href="https://publications.waset.org/abstracts/search?q=SCADA%20alarms" title=" SCADA alarms"> SCADA alarms</a>, <a href="https://publications.waset.org/abstracts/search?q=Modbus" title=" Modbus"> Modbus</a>, <a href="https://publications.waset.org/abstracts/search?q=DNP%20protocol" title=" DNP protocol"> DNP protocol</a> </p> <a href="https://publications.waset.org/abstracts/142519/optimizing-scadartu-control-system-alarms-for-gas-wells" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/142519.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">166</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2433</span> Restoration of Digital Design Using Row and Column Major Parsing Technique from the Old/Used Jacquard Punched Cards</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=R.%20Kumaravelu">R. Kumaravelu</a>, <a href="https://publications.waset.org/abstracts/search?q=S.%20Poornima"> S. Poornima</a>, <a href="https://publications.waset.org/abstracts/search?q=Sunil%20Kumar%20Kashyap"> Sunil Kumar Kashyap</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The optimized and digitalized restoration of the information from the old and used manual jacquard punched card in textile industry is referred to as Jacquard Punch Card (JPC) reader. In this paper, we present a novel design and development of photo electronics based system for reading old and used punched cards and storing its binary information for transforming them into an effective image file format. In our textile industry the jacquard punched cards holes diameters having the sizes of 3mm, 5mm and 5.5mm pitch. Before the adaptation of computing systems in the field of textile industry those punched cards were prepared manually without digital design source, but those punched cards are having rich woven designs. Now, the idea is to retrieve binary information from the jacquard punched cards and store them in digital (Non-Graphics) format before processing it. After processing the digital format (Non-Graphics) it is converted into an effective image file format through either by Row major or Column major parsing technique.To accomplish these activities, an embedded system based device and software integration is developed. As part of the test and trial activity the device was tested and installed for industrial service at Weavers Service Centre, Kanchipuram, Tamilnadu in India. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=file%20system" title="file system">file system</a>, <a href="https://publications.waset.org/abstracts/search?q=SPI.%20UART" title=" SPI. UART"> SPI. 
UART</a>, <a href="https://publications.waset.org/abstracts/search?q=ARM%20controller" title=" ARM controller"> ARM controller</a>, <a href="https://publications.waset.org/abstracts/search?q=jacquard" title=" jacquard"> jacquard</a>, <a href="https://publications.waset.org/abstracts/search?q=punched%20card" title=" punched card"> punched card</a>, <a href="https://publications.waset.org/abstracts/search?q=photo%20LED" title=" photo LED"> photo LED</a>, <a href="https://publications.waset.org/abstracts/search?q=photo%20diode" title=" photo diode"> photo diode</a> </p> <a href="https://publications.waset.org/abstracts/96597/restoration-of-digital-design-using-row-and-column-major-parsing-technique-from-the-oldused-jacquard-punched-cards" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/96597.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">167</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2432</span> Difference between Riding a Bicycle on a Sidewalk or in the Street by Usual Traveling Means</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Ai%20Fujii">Ai Fujii</a>, <a href="https://publications.waset.org/abstracts/search?q=Kan%20Shimazaki"> Kan Shimazaki</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Bicycle users must ride on the street according the law in Japan, but in practice, many bicycle users ride on the sidewalk. Drivers generally feel that bicycles riding in the street are in the way. In contrast, pedestrians generally feel that bicycles riding on the sidewalk are in the way. That seems to make sense. What, then, is the difference between riding a bicycle on the sidewalk or in the street by usual traveling means. We made 3D computer graphics models of pedestrians, a car, and a bicycle at an intersection. The bicycle was positioned to choose between advancing to the sidewalk or the street after a few seconds. We then made a 2D stimulus picture by changing the point of view of the 3DCG model pictures. Attitudes were surveyed using this 2D stimulus picture, and we compared attitudes between three groups, people traveling by car, on foot, or by bicycle. Here we report the survey result. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=bicycle" title="bicycle">bicycle</a>, <a href="https://publications.waset.org/abstracts/search?q=sidewalk" title=" sidewalk"> sidewalk</a>, <a href="https://publications.waset.org/abstracts/search?q=pedestrians" title=" pedestrians"> pedestrians</a>, <a href="https://publications.waset.org/abstracts/search?q=driver" title=" driver"> driver</a>, <a href="https://publications.waset.org/abstracts/search?q=intersection" title=" intersection"> intersection</a>, <a href="https://publications.waset.org/abstracts/search?q=safety" title=" safety"> safety</a> </p> <a href="https://publications.waset.org/abstracts/75886/difference-between-riding-a-bicycle-on-a-sidewalk-or-in-the-street-by-usual-traveling-means" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/75886.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">180</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2431</span> GPU-Accelerated Triangle Mesh Simplification Using Parallel Vertex Removal</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Thomas%20Odaker">Thomas Odaker</a>, <a href="https://publications.waset.org/abstracts/search?q=Dieter%20Kranzlmueller"> Dieter Kranzlmueller</a>, <a href="https://publications.waset.org/abstracts/search?q=Jens%20Volkert"> Jens Volkert</a> </p> <p class="card-text"><strong>Abstract:</strong></p> We present an approach to triangle mesh simplification designed to be executed on the GPU. We use a quadric error metric to calculate an error value for each vertex of the mesh and order all vertices based on this value. This step is followed by the parallel removal of a number of vertices with the lowest calculated error values. To allow for the parallel removal of multiple vertices we use a set of per-vertex boundaries that prevent mesh foldovers even when simplification operations are performed on neighbouring vertices. We execute multiple iterations of the calculation of the vertex errors, ordering of the error values and removal of vertices until either a desired number of vertices remains in the mesh or a minimum error value is reached. This parallel approach is used to speed up the simplification process while maintaining mesh topology and avoiding foldovers at every step of the simplification. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=computer%20graphics" title="computer graphics">computer graphics</a>, <a href="https://publications.waset.org/abstracts/search?q=half%20edge%20collapse" title=" half edge collapse"> half edge collapse</a>, <a href="https://publications.waset.org/abstracts/search?q=mesh%20simplification" title=" mesh simplification"> mesh simplification</a>, <a href="https://publications.waset.org/abstracts/search?q=precomputed%20simplification" title=" precomputed simplification"> precomputed simplification</a>, <a href="https://publications.waset.org/abstracts/search?q=topology%20preserving" title=" topology preserving"> topology preserving</a> </p> <a href="https://publications.waset.org/abstracts/36600/gpu-accelerated-triangle-mesh-simplification-using-parallel-vertex-removal" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/36600.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">367</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2430</span> Computer Fraud from the Perspective of Iran&#039;s Law and International Documents</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Babak%20Pourghahramani">Babak Pourghahramani</a> </p> <p class="card-text"><strong>Abstract:</strong></p> One of the modern crimes against property and ownership in the cyber-space is the computer fraud. Despite being modern, the aforementioned crime has its roots in the principles of religious jurisprudence. In some cases, this crime is compatible with the traditional regulations and that is when the computer is considered as a crime commitment device and also some computer frauds that take place in the context of electronic exchanges are considered as crime based on the E-commerce Law (approved in 2003) but the aforementioned regulations are flawed and until recent years there was no comprehensive law in this regard; yet after some years the Computer Crime Act was approved in 2009/26/5 and partly solved the problem of legal vacuum. The present study intends to investigate the computer fraud according to Iran's Computer Crime Act and by taking into consideration the international documents. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=fraud" title="fraud">fraud</a>, <a href="https://publications.waset.org/abstracts/search?q=cyber%20fraud" title=" cyber fraud"> cyber fraud</a>, <a href="https://publications.waset.org/abstracts/search?q=computer%20fraud" title=" computer fraud"> computer fraud</a>, <a href="https://publications.waset.org/abstracts/search?q=classic%20fraud" title=" classic fraud"> classic fraud</a>, <a href="https://publications.waset.org/abstracts/search?q=computer%20crime" title=" computer crime"> computer crime</a> </p> <a href="https://publications.waset.org/abstracts/72041/computer-fraud-from-the-perspective-of-irans-law-and-international-documents" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/72041.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">332</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2429</span> Metaphorical Perceptions of Middle School Students regarding Computer Games</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Ismail%20Celik">Ismail Celik</a>, <a href="https://publications.waset.org/abstracts/search?q=Ismail%20Sahin"> Ismail Sahin</a>, <a href="https://publications.waset.org/abstracts/search?q=Fetah%20Eren"> Fetah Eren</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The computer, among the most important inventions of the twentieth century, has become an increasingly important component in our everyday lives. Computer games also have become increasingly popular among people day-by-day, owing to their features based on realistic virtual environments, audio and visual features, and the roles they offer players. In the present study, the metaphors students have for computer games are investigated, as well as an effort to fill the gap in the literature. Students were asked to complete the sentence—‘Computer game is like/similar to….because….’— to determine the middle school students’ metaphorical images of the concept for ‘computer game’. The metaphors created by the students were grouped in six categories, based on the source of the metaphor. These categories were ordered as ‘computer game as a means of entertainment’, ‘computer game as a beneficial means’, ‘computer game as a basic need’, ‘computer game as a source of evil’, ‘computer game as a means of withdrawal’, and ‘computer game as a source of addiction’, according to the number of metaphors they included. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=computer%20game" title="computer game">computer game</a>, <a href="https://publications.waset.org/abstracts/search?q=metaphor" title=" metaphor"> metaphor</a>, <a href="https://publications.waset.org/abstracts/search?q=middle%20school%20students" title=" middle school students"> middle school students</a>, <a href="https://publications.waset.org/abstracts/search?q=virtual%20environments" title=" virtual environments"> virtual environments</a> </p> <a href="https://publications.waset.org/abstracts/11784/metaphorical-perceptions-of-middle-school-students-regarding-computer-games" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/11784.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">535</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2428</span> Approximation of Geodesics on Meshes with Implementation in Rhinoceros Software</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Marian%20Sagat">Marian Sagat</a>, <a href="https://publications.waset.org/abstracts/search?q=Mariana%20Remesikova"> Mariana Remesikova</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In civil engineering, there is a problem how to industrially produce tensile membrane structures that are non-developable surfaces. Nondevelopable surfaces can only be developed with a certain error and we want to minimize this error. To that goal, the non-developable surfaces are cut into plates along to the geodesic curves. We propose a numerical algorithm for finding approximations of open geodesics on meshes and surfaces based on geodesic curvature flow. For practical reasons, it is important to automatize the choice of the time step. We propose a method for automatic setting of the time step based on the diagonal dominance criterion for the matrix of the linear system obtained by discretization of our partial differential equation model. Practical experiments show reliability of this method. Because approximation of the model is made by numerical method based on classic derivatives, it is necessary to solve obstacles which occur for meshes with sharp corners. We solve this problem for big family of meshes with sharp corners via special rotations which can be seen as partial unfolding of the mesh. In practical applications, it is required that the approximation of geodesic has its vertices only on the edges of the mesh. This problem is solved by a specially designed pointing tracking algorithm. We also partially solve the problem of finding geodesics on meshes with holes. We implemented the whole algorithm in Rhinoceros (commercial 3D computer graphics and computer-aided design software ). It is done by using C# language as C# assembly library for Grasshopper, which is plugin in Rhinoceros. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=geodesic" title="geodesic">geodesic</a>, <a href="https://publications.waset.org/abstracts/search?q=geodesic%20curvature%20flow" title=" geodesic curvature flow"> geodesic curvature flow</a>, <a href="https://publications.waset.org/abstracts/search?q=mesh" title=" mesh"> mesh</a>, <a href="https://publications.waset.org/abstracts/search?q=Rhinoceros%20software" title=" Rhinoceros software"> Rhinoceros software</a> </p> <a href="https://publications.waset.org/abstracts/93093/approximation-of-geodesics-on-meshes-with-implementation-in-rhinoceros-software" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/93093.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">151</span> </span> </div> </div> <ul class="pagination"> <li class="page-item disabled"><span class="page-link">&lsaquo;</span></li> <li class="page-item active"><span class="page-link">1</span></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=computer%20graphics&amp;page=2">2</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=computer%20graphics&amp;page=3">3</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=computer%20graphics&amp;page=4">4</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=computer%20graphics&amp;page=5">5</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=computer%20graphics&amp;page=6">6</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=computer%20graphics&amp;page=7">7</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=computer%20graphics&amp;page=8">8</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=computer%20graphics&amp;page=9">9</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=computer%20graphics&amp;page=10">10</a></li> <li class="page-item disabled"><span class="page-link">...</span></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=computer%20graphics&amp;page=81">81</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=computer%20graphics&amp;page=82">82</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=computer%20graphics&amp;page=2" rel="next">&rsaquo;</a></li> </ul> </div> </main> <footer> <div id="infolinks" class="pt-3 pb-2"> <div class="container"> <div style="background-color:#f5f5f5;" class="p-3"> <div class="row"> <div class="col-md-2"> <ul class="list-unstyled"> About <li><a href="https://waset.org/page/support">About Us</a></li> <li><a href="https://waset.org/page/support#legal-information">Legal</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/WASET-16th-foundational-anniversary.pdf">WASET celebrates its 16th foundational anniversary</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Account 
<li><a href="https://waset.org/profile">My Account</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Explore <li><a href="https://waset.org/disciplines">Disciplines</a></li> <li><a href="https://waset.org/conferences">Conferences</a></li> <li><a href="https://waset.org/conference-programs">Conference Program</a></li> <li><a href="https://waset.org/committees">Committees</a></li> <li><a href="https://publications.waset.org">Publications</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Research <li><a href="https://publications.waset.org/abstracts">Abstracts</a></li> <li><a href="https://publications.waset.org">Periodicals</a></li> <li><a href="https://publications.waset.org/archive">Archive</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Open Science <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Science-Philosophy.pdf">Open Science Philosophy</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Science-Award.pdf">Open Science Award</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Society-Open-Science-and-Open-Innovation.pdf">Open Innovation</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Postdoctoral-Fellowship-Award.pdf">Postdoctoral Fellowship Award</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Scholarly-Research-Review.pdf">Scholarly Research Review</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Support <li><a href="https://waset.org/page/support">Support</a></li> <li><a href="https://waset.org/profile/messages/create">Contact Us</a></li> <li><a href="https://waset.org/profile/messages/create">Report Abuse</a></li> </ul> </div> </div> </div> </div> </div> <div class="container text-center"> <hr style="margin-top:0;margin-bottom:.3rem;"> <a href="https://creativecommons.org/licenses/by/4.0/" target="_blank" class="text-muted small">Creative Commons Attribution 4.0 International License</a> <div id="copy" class="mt-2">&copy; 2024 World Academy of Science, Engineering and Technology</div> </div> </footer> <a href="javascript:" id="return-to-top"><i class="fas fa-arrow-up"></i></a> <div class="modal" id="modal-template"> <div class="modal-dialog"> <div class="modal-content"> <div class="row m-0 mt-1"> <div class="col-md-12"> <button type="button" class="close" data-dismiss="modal" aria-label="Close"><span aria-hidden="true">&times;</span></button> </div> </div> <div class="modal-body"></div> </div> </div> </div> <script src="https://cdn.waset.org/static/plugins/jquery-3.3.1.min.js"></script> <script src="https://cdn.waset.org/static/plugins/bootstrap-4.2.1/js/bootstrap.bundle.min.js"></script> <script src="https://cdn.waset.org/static/js/site.js?v=150220211556"></script> <script> jQuery(document).ready(function() { /*jQuery.get("https://publications.waset.org/xhr/user-menu", function (response) { jQuery('#mainNavMenu').append(response); });*/ jQuery.get({ url: "https://publications.waset.org/xhr/user-menu", cache: false }).then(function(response){ jQuery('#mainNavMenu').append(response); }); }); </script> </body> </html>
