CINXE.COM

Search results for: navigation information visualization

<!DOCTYPE html> <html lang="en" dir="ltr"> <head> <!-- Google tag (gtag.js) --> <script async src="https://www.googletagmanager.com/gtag/js?id=G-P63WKM1TM1"></script> <script> window.dataLayer = window.dataLayer || []; function gtag(){dataLayer.push(arguments);} gtag('js', new Date()); gtag('config', 'G-P63WKM1TM1'); </script> <!-- Yandex.Metrika counter --> <script type="text/javascript" > (function(m,e,t,r,i,k,a){m[i]=m[i]||function(){(m[i].a=m[i].a||[]).push(arguments)}; m[i].l=1*new Date(); for (var j = 0; j < document.scripts.length; j++) {if (document.scripts[j].src === r) { return; }} k=e.createElement(t),a=e.getElementsByTagName(t)[0],k.async=1,k.src=r,a.parentNode.insertBefore(k,a)}) (window, document, "script", "https://mc.yandex.ru/metrika/tag.js", "ym"); ym(55165297, "init", { clickmap:false, trackLinks:true, accurateTrackBounce:true, webvisor:false }); </script> <noscript><div><img src="https://mc.yandex.ru/watch/55165297" style="position:absolute; left:-9999px;" alt="" /></div></noscript> <!-- /Yandex.Metrika counter --> <!-- Matomo --> <!-- End Matomo Code --> <title>Search results for: navigation information visualization</title> <meta name="description" content="Search results for: navigation information visualization"> <meta name="keywords" content="navigation information visualization"> <meta name="viewport" content="width=device-width, initial-scale=1, minimum-scale=1, maximum-scale=1, user-scalable=no"> <meta charset="utf-8"> <link href="https://cdn.waset.org/favicon.ico" type="image/x-icon" rel="shortcut icon"> <link href="https://cdn.waset.org/static/plugins/bootstrap-4.2.1/css/bootstrap.min.css" rel="stylesheet"> <link href="https://cdn.waset.org/static/plugins/fontawesome/css/all.min.css" rel="stylesheet"> <link href="https://cdn.waset.org/static/css/site.css?v=150220211555" rel="stylesheet"> </head> <body> <header> <div class="container"> <nav class="navbar navbar-expand-lg navbar-light"> <a class="navbar-brand" href="https://waset.org"> <img src="https://cdn.waset.org/static/images/wasetc.png" alt="Open Science Research Excellence" title="Open Science Research Excellence" /> </a> <button class="d-block d-lg-none navbar-toggler ml-auto" type="button" data-toggle="collapse" data-target="#navbarMenu" aria-controls="navbarMenu" aria-expanded="false" aria-label="Toggle navigation"> <span class="navbar-toggler-icon"></span> </button> <div class="w-100"> <div class="d-none d-lg-flex flex-row-reverse"> <form method="get" action="https://waset.org/search" class="form-inline my-2 my-lg-0"> <input class="form-control mr-sm-2" type="search" placeholder="Search Conferences" value="navigation information visualization" name="q" aria-label="Search"> <button class="btn btn-light my-2 my-sm-0" type="submit"><i class="fas fa-search"></i></button> </form> </div> <div class="collapse navbar-collapse mt-1" id="navbarMenu"> <ul class="navbar-nav ml-auto align-items-center" id="mainNavMenu"> <li class="nav-item"> <a class="nav-link" href="https://waset.org/conferences" title="Conferences in 2024/2025/2026">Conferences</a> </li> <li class="nav-item"> <a class="nav-link" href="https://waset.org/disciplines" title="Disciplines">Disciplines</a> </li> <li class="nav-item"> <a class="nav-link" href="https://waset.org/committees" rel="nofollow">Committees</a> </li> <li class="nav-item dropdown"> <a class="nav-link dropdown-toggle" href="#" id="navbarDropdownPublications" role="button" data-toggle="dropdown" aria-haspopup="true" aria-expanded="false"> Publications </a> <div class="dropdown-menu" aria-labelledby="navbarDropdownPublications"> <a class="dropdown-item" href="https://publications.waset.org/abstracts">Abstracts</a> <a class="dropdown-item" href="https://publications.waset.org">Periodicals</a> <a class="dropdown-item" href="https://publications.waset.org/archive">Archive</a> </div> </li> <li class="nav-item"> <a class="nav-link" href="https://waset.org/page/support" title="Support">Support</a> </li> </ul> </div> </div> </nav> </div> </header> <main> <div class="container mt-4"> <div class="row"> <div class="col-md-9 mx-auto"> <form method="get" action="https://publications.waset.org/abstracts/search"> <div id="custom-search-input"> <div class="input-group"> <i class="fas fa-search"></i> <input type="text" class="search-query" name="q" placeholder="Author, Title, Abstract, Keywords" value="navigation information visualization"> <input type="submit" class="btn_search" value="Search"> </div> </div> </form> </div> </div> <div class="row mt-3"> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Commenced</strong> in January 2007</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Frequency:</strong> Monthly</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Edition:</strong> International</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Paper Count:</strong> 11379</div> </div> </div> </div> <h1 class="mt-3 mb-3 text-center" style="font-size:1.6rem;">Search results for: navigation information visualization</h1> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">11379</span> Research on the United Navigation Mechanism of Land, Sea and Air Targets under Multi-Sources Information Fusion</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Rui%20Liu">Rui Liu</a>, <a href="https://publications.waset.org/abstracts/search?q=Klaus%20Greve"> Klaus Greve</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The navigation information is a kind of dynamic geographic information, and the navigation information system is a kind of special geographic information system. At present, there are many researches on the application of centralized management and cross-integration application of basic geographic information. However, the idea of information integration and sharing is not deeply applied into the research of navigation information service. And the imperfection of navigation target coordination and navigation information sharing mechanism under certain navigation tasks has greatly affected the reliability and scientificity of navigation service such as path planning. Considering this, the project intends to study the multi-source information fusion and multi-objective united navigation information interaction mechanism: first of all, investigate the actual needs of navigation users in different areas, and establish the preliminary navigation information classification and importance level model; and then analyze the characteristics of the remote sensing and GIS vector data, and design the fusion algorithm from the aspect of improving the positioning accuracy and extracting the navigation environment data. At last, the project intends to analyze the feature of navigation information of the land, sea and air navigation targets, and design the united navigation data standard and navigation information sharing model under certain navigation tasks, and establish a test navigation system for united navigation simulation experiment. The aim of this study is to explore the theory of united navigation service and optimize the navigation information service model, which will lay the theory and technology foundation for the united navigation of land, sea and air targets. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=information%20fusion" title="information fusion">information fusion</a>, <a href="https://publications.waset.org/abstracts/search?q=united%20navigation" title=" united navigation"> united navigation</a>, <a href="https://publications.waset.org/abstracts/search?q=dynamic%20path%20planning" title=" dynamic path planning"> dynamic path planning</a>, <a href="https://publications.waset.org/abstracts/search?q=navigation%20information%20visualization" title=" navigation information visualization"> navigation information visualization</a> </p> <a href="https://publications.waset.org/abstracts/70612/research-on-the-united-navigation-mechanism-of-land-sea-and-air-targets-under-multi-sources-information-fusion" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/70612.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">288</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">11378</span> Design of an Air and Land Multi-Element Expression Pattern of Navigation Electronic Map for Ground Vehicles under United Navigation Mechanism</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Rui%20Liu">Rui Liu</a>, <a href="https://publications.waset.org/abstracts/search?q=Pengyu%20Cui"> Pengyu Cui</a>, <a href="https://publications.waset.org/abstracts/search?q=Nan%20Jiang"> Nan Jiang</a> </p> <p class="card-text"><strong>Abstract:</strong></p> At present, there is much research on the application of centralized management and cross-integration application of basic geographic information. However, the idea of information integration and sharing between land, sea, and air navigation targets is not deeply applied into the research of navigation information service, especially in the information expression. Targeting at this problem, the paper carries out works about the expression pattern of navigation electronic map for ground vehicles under air and land united navigation mechanism. At first, with the support from multi-source information fusion of GIS vector data, RS data, GPS data, etc., an air and land united information expression pattern is designed aiming at specific navigation task of emergency rescue in the earthquake. And then, the characteristics and specifications of the united expression of air and land navigation information under the constraints of map load are summarized and transferred into expression rules in the rule bank. At last, the related navigation experiment is implemented to evaluate the effect of the expression pattern. The experiment selects evaluation factors of the navigation task accomplishment time and the navigation error rate as the main index, and make comparisons with the traditional single information expression pattern. To sum up, the research improved the theory of navigation electronic map and laid a certain foundation for the design and realization of united navigation system in the aspect of real-time navigation information delivery. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=navigation%20electronic%20map" title="navigation electronic map">navigation electronic map</a>, <a href="https://publications.waset.org/abstracts/search?q=united%20navigation" title=" united navigation"> united navigation</a>, <a href="https://publications.waset.org/abstracts/search?q=multi-element%20expression%20pattern" title=" multi-element expression pattern"> multi-element expression pattern</a>, <a href="https://publications.waset.org/abstracts/search?q=multi-source%20information%20fusion" title=" multi-source information fusion"> multi-source information fusion</a> </p> <a href="https://publications.waset.org/abstracts/79171/design-of-an-air-and-land-multi-element-expression-pattern-of-navigation-electronic-map-for-ground-vehicles-under-united-navigation-mechanism" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/79171.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">199</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">11377</span> Digital Twin Platform for BDS-3 Satellite Navigation Using Digital Twin Intelligent Visualization Technology</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Rundong%20Li">Rundong Li</a>, <a href="https://publications.waset.org/abstracts/search?q=Peng%20Wu"> Peng Wu</a>, <a href="https://publications.waset.org/abstracts/search?q=Junfeng%20Zhang"> Junfeng Zhang</a>, <a href="https://publications.waset.org/abstracts/search?q=Zhipeng%20Ren"> Zhipeng Ren</a>, <a href="https://publications.waset.org/abstracts/search?q=Chen%20Yang"> Chen Yang</a>, <a href="https://publications.waset.org/abstracts/search?q=Jiahui%20Gan"> Jiahui Gan</a>, <a href="https://publications.waset.org/abstracts/search?q=Lu%20Feng"> Lu Feng</a>, <a href="https://publications.waset.org/abstracts/search?q=Haibo%20Tong"> Haibo Tong</a>, <a href="https://publications.waset.org/abstracts/search?q=Xuemei%20Xiao"> Xuemei Xiao</a>, <a href="https://publications.waset.org/abstracts/search?q=Yuying%20Chen"> Yuying Chen</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The research of Beidou-3 satellite navigation is on the rise, but in actual work, it is inevitable that satellite data is insecure, research and development is inefficient, and there is no ability to deal with failures in advance. Digital twin technology has obvious advantages in the simulation of life cycle models of aerospace satellite navigation products. In order to meet the increasing demand, this paper builds a Beidou-3 satellite navigation digital twin platform (BDSDTP). The basic establishment of BDSDTP was completed by establishing a digital twin double, Beidou-3 comprehensive digital twin design, predictive maintenance (PdM) mathematical model, and visual interaction design. Finally, this paper provides a time application case of the platform, which provides a reference for the application of BDSDTP in various fields of navigation and provides obvious help for extending the full cycle life of Beidou-3 satellite navigation. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=BDS-3" title="BDS-3">BDS-3</a>, <a href="https://publications.waset.org/abstracts/search?q=digital%20twin" title=" digital twin"> digital twin</a>, <a href="https://publications.waset.org/abstracts/search?q=visualization" title=" visualization"> visualization</a>, <a href="https://publications.waset.org/abstracts/search?q=PdM" title=" PdM"> PdM</a> </p> <a href="https://publications.waset.org/abstracts/167908/digital-twin-platform-for-bds-3-satellite-navigation-using-digital-twin-intelligent-visualization-technology" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/167908.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">142</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">11376</span> Exploring the Landscape of Information Visualization through a Mark Lombardi Lens</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Alon%20Friedman">Alon Friedman</a>, <a href="https://publications.waset.org/abstracts/search?q=Antonio%20Sanchez%20Chinchon"> Antonio Sanchez Chinchon</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This bibliometric study takes an artistic and storytelling approach to explore the term ”information visualization.” Analyzing over 1008 titles collected from databases that specialize in data visualization research, we examine the titles of these publications to report on the characteristics and development trends in the field. Employing a qualitative methodology, we delve into the titles of these publications, extracting leading terms and exploring the cooccurrence of these terms to gain deeper insights. By systematically analyzing the leading terms and their relationships within the titles, we shed light on the prevailing themes that shape the landscape of ”information visualization” by employing the artist Mark Lombardi’s techniques to visualize our findings. By doing so, this study provides valuable insights into bibliometrics visualization while also opening new avenues for leveraging art and storytelling to enhance data representation. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=bibliometrics%20analysis" title="bibliometrics analysis">bibliometrics analysis</a>, <a href="https://publications.waset.org/abstracts/search?q=Mark%20Lombardi%20design" title=" Mark Lombardi design"> Mark Lombardi design</a>, <a href="https://publications.waset.org/abstracts/search?q=information%20visualization" title=" information visualization"> information visualization</a>, <a href="https://publications.waset.org/abstracts/search?q=qualitative%20methodology" title=" qualitative methodology"> qualitative methodology</a> </p> <a href="https://publications.waset.org/abstracts/171915/exploring-the-landscape-of-information-visualization-through-a-mark-lombardi-lens" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/171915.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">90</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">11375</span> Real-Time Visualization Using GPU-Accelerated Filtering of LiDAR Data</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Sa%C5%A1o%20Pe%C4%8Dnik">Sašo Pečnik</a>, <a href="https://publications.waset.org/abstracts/search?q=Borut%20%C5%BDalik"> Borut Žalik</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This paper presents a real-time visualization technique and filtering of classified LiDAR point clouds. The visualization is capable of displaying filtered information organized in layers by the classification attribute saved within LiDAR data sets. We explain the used data structure and data management, which enables real-time presentation of layered LiDAR data. Real-time visualization is achieved with LOD optimization based on the distance from the observer without loss of quality. The filtering process is done in two steps and is entirely executed on the GPU and implemented using programmable shaders. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=filtering" title="filtering">filtering</a>, <a href="https://publications.waset.org/abstracts/search?q=graphics" title=" graphics"> graphics</a>, <a href="https://publications.waset.org/abstracts/search?q=level-of-details" title=" level-of-details"> level-of-details</a>, <a href="https://publications.waset.org/abstracts/search?q=LiDAR" title=" LiDAR"> LiDAR</a>, <a href="https://publications.waset.org/abstracts/search?q=real-time%20visualization" title=" real-time visualization"> real-time visualization</a> </p> <a href="https://publications.waset.org/abstracts/16857/real-time-visualization-using-gpu-accelerated-filtering-of-lidar-data" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/16857.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">308</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">11374</span> Mechanisms Underlying Comprehension of Visualized Personal Health Information: An Eye Tracking Study</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Da%20Tao">Da Tao</a>, <a href="https://publications.waset.org/abstracts/search?q=Mingfu%20Qin"> Mingfu Qin</a>, <a href="https://publications.waset.org/abstracts/search?q=Wenkai%20Li"> Wenkai Li</a>, <a href="https://publications.waset.org/abstracts/search?q=Tieyan%20Wang"> Tieyan Wang</a> </p> <p class="card-text"><strong>Abstract:</strong></p> While the use of electronic personal health portals has gained increasing popularity in the healthcare industry, users usually experience difficulty in comprehending and correctly responding to personal health information, partly due to inappropriate or poor presentation of the information. The way personal health information is visualized may affect how users perceive and assess their personal health information. This study was conducted to examine the effects of information visualization format and visualization mode on the comprehension and perceptions of personal health information among personal health information users with eye tracking techniques. A two-factor within-subjects experimental design was employed, where participants were instructed to complete a series of personal health information comprehension tasks under varied types of visualization mode (i.e., whether the information visualization is static or dynamic) and three visualization formats (i.e., bar graph, instrument-like graph, and text-only format). Data on a set of measures, including comprehension performance, perceptions, and eye movement indicators, were collected during the task completion in the experiment. Repeated measure analysis of variance analyses (RM-ANOVAs) was used for data analysis. The results showed that while the visualization format yielded no effects on comprehension performance, it significantly affected users’ perceptions (such as perceived ease of use and satisfaction). The two graphic visualizations yielded significantly higher favorable scores on subjective evaluations than that of the text format. While visualization mode showed no effects on users’ perception measures, it significantly affected users' comprehension performance in that dynamic visualization significantly reduced users' information search time. Both visualization format and visualization mode had significant main effects on eye movement behaviors, and their interaction effects were also significant. While the bar graph format and text format had similar time to first fixation across dynamic and static visualizations, instrument-like graph format had a larger time to first fixation for dynamic visualization than for static visualization. The two graphic visualization formats yielded shorter total fixation duration compared with the text-only format, indicating their ability to improve information comprehension efficiency. The results suggest that dynamic visualization can improve efficiency in comprehending important health information, and graphic visualization formats were favored more by users. The findings are helpful in the underlying comprehension mechanism of visualized personal health information and provide important implications for optimal design and visualization of personal health information. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=eye%20tracking" title="eye tracking">eye tracking</a>, <a href="https://publications.waset.org/abstracts/search?q=information%20comprehension" title=" information comprehension"> information comprehension</a>, <a href="https://publications.waset.org/abstracts/search?q=personal%20health%20information" title=" personal health information"> personal health information</a>, <a href="https://publications.waset.org/abstracts/search?q=visualization" title=" visualization"> visualization</a> </p> <a href="https://publications.waset.org/abstracts/166965/mechanisms-underlying-comprehension-of-visualized-personal-health-information-an-eye-tracking-study" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/166965.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">109</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">11373</span> Next-Viz: A Literature Review and Web-Based Visualization Tool Proposal</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Railly%20Hugo">Railly Hugo</a>, <a href="https://publications.waset.org/abstracts/search?q=Igor%20Aguilar-Alonso"> Igor Aguilar-Alonso</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Software visualization is a powerful tool for understanding complex software systems. However, current visualization tools often lack features or are difficult to use, limiting their effectiveness. In this paper, we present next-viz, a proposed web-based visualization tool that addresses these challenges. We provide a literature review of existing software visualization techniques and tools and describe the architecture of next-viz in detail. Our proposed tool incorporates state-of-the-art visualization techniques and is designed to be user-friendly and intuitive. We believe next-viz has the potential to advance the field of software visualization significantly. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=software%20visualization" title="software visualization">software visualization</a>, <a href="https://publications.waset.org/abstracts/search?q=literature%20review" title=" literature review"> literature review</a>, <a href="https://publications.waset.org/abstracts/search?q=tool%20proposal" title=" tool proposal"> tool proposal</a>, <a href="https://publications.waset.org/abstracts/search?q=next-viz" title=" next-viz"> next-viz</a>, <a href="https://publications.waset.org/abstracts/search?q=web-based" title=" web-based"> web-based</a>, <a href="https://publications.waset.org/abstracts/search?q=architecture" title=" architecture"> architecture</a>, <a href="https://publications.waset.org/abstracts/search?q=visualization%20techniques" title=" visualization techniques"> visualization techniques</a>, <a href="https://publications.waset.org/abstracts/search?q=user-friendly" title=" user-friendly"> user-friendly</a>, <a href="https://publications.waset.org/abstracts/search?q=intuitive" title=" intuitive"> intuitive</a> </p> <a href="https://publications.waset.org/abstracts/160508/next-viz-a-literature-review-and-web-based-visualization-tool-proposal" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/160508.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">82</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">11372</span> Screen Method of Distributed Cooperative Navigation Factors for Unmanned Aerial Vehicle Swarm</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Can%20Zhang">Can Zhang</a>, <a href="https://publications.waset.org/abstracts/search?q=Qun%20Li"> Qun Li</a>, <a href="https://publications.waset.org/abstracts/search?q=Yonglin%20Lei"> Yonglin Lei</a>, <a href="https://publications.waset.org/abstracts/search?q=Zhi%20Zhu"> Zhi Zhu</a>, <a href="https://publications.waset.org/abstracts/search?q=Dong%20Guo"> Dong Guo</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Aiming at the problem of factor screen in distributed collaborative navigation of dense UAV swarm, an efficient distributed collaborative navigation factor screen method is proposed. The method considered the balance between computing load and positioning accuracy. The proposed algorithm utilized the factor graph model to implement a distributed collaborative navigation algorithm. The GNSS information of the UAV itself and the ranging information between the UAVs are used as the positioning factors. In this distributed scheme, a local factor graph is established for each UAV. The positioning factors of nodes with good geometric position distribution and small variance are selected to participate in the navigation calculation. To demonstrate and verify the proposed methods, the simulation and experiments in different scenarios are performed in this research. Simulation results show that the proposed scheme achieves a good balance between the computing load and positioning accuracy in the distributed cooperative navigation calculation of UAV swarm. This proposed algorithm has important theoretical and practical value for both industry and academic areas. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=screen%20method" title="screen method">screen method</a>, <a href="https://publications.waset.org/abstracts/search?q=cooperative%20positioning%20system" title=" cooperative positioning system"> cooperative positioning system</a>, <a href="https://publications.waset.org/abstracts/search?q=UAV%20swarm" title=" UAV swarm"> UAV swarm</a>, <a href="https://publications.waset.org/abstracts/search?q=factor%20graph" title=" factor graph"> factor graph</a>, <a href="https://publications.waset.org/abstracts/search?q=cooperative%20navigation" title=" cooperative navigation"> cooperative navigation</a> </p> <a href="https://publications.waset.org/abstracts/166690/screen-method-of-distributed-cooperative-navigation-factors-for-unmanned-aerial-vehicle-swarm" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/166690.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">79</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">11371</span> Virtual 3D Environments for Image-Based Navigation Algorithms</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=V.%20B.%20Bastos">V. B. Bastos</a>, <a href="https://publications.waset.org/abstracts/search?q=M.%20P.%20Lima"> M. P. Lima</a>, <a href="https://publications.waset.org/abstracts/search?q=P.%20R.%20G.%20Kurka"> P. R. G. Kurka</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This paper applies to the creation of virtual 3D environments for the study and development of mobile robot image based navigation algorithms and techniques, which need to operate robustly and efficiently. The test of these algorithms can be performed in a physical way, from conducting experiments on a prototype, or by numerical simulations. Current simulation platforms for robotic applications do not have flexible and updated models for image rendering, being unable to reproduce complex light effects and materials. Thus, it is necessary to create a test platform that integrates sophisticated simulated applications of real environments for navigation, with data and image processing. This work proposes the development of a high-level platform for building 3D model&rsquo;s environments and the test of image-based navigation algorithms for mobile robots. Techniques were used for applying texture and lighting effects in order to accurately represent the generation of rendered images regarding the real world version. The application will integrate image processing scripts, trajectory control, dynamic modeling and simulation techniques for physics representation and picture rendering with the open source 3D creation suite - Blender. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=simulation" title="simulation">simulation</a>, <a href="https://publications.waset.org/abstracts/search?q=visual%20navigation" title=" visual navigation"> visual navigation</a>, <a href="https://publications.waset.org/abstracts/search?q=mobile%20robot" title=" mobile robot"> mobile robot</a>, <a href="https://publications.waset.org/abstracts/search?q=data%20visualization" title=" data visualization"> data visualization</a> </p> <a href="https://publications.waset.org/abstracts/61824/virtual-3d-environments-for-image-based-navigation-algorithms" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/61824.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">255</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">11370</span> Flow Visualization in Biological Complex Geometries for Personalized Medicine</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Carlos%20Escobar-del%20Pozo">Carlos Escobar-del Pozo</a>, <a href="https://publications.waset.org/abstracts/search?q=C%C3%A9sar%20Ahumada-Monroy"> César Ahumada-Monroy</a>, <a href="https://publications.waset.org/abstracts/search?q=Azael%20Garc%C3%ADa-Rebolledo"> Azael García-Rebolledo</a>, <a href="https://publications.waset.org/abstracts/search?q=Alberto%20Brambila-Sol%C3%B3rzano"> Alberto Brambila-Solórzano</a>, <a href="https://publications.waset.org/abstracts/search?q=Gregorio%20Mart%C3%ADnez-S%C3%A1nchez"> Gregorio Martínez-Sánchez</a>, <a href="https://publications.waset.org/abstracts/search?q=Luis%20Ortiz-Rinc%C3%B3n"> Luis Ortiz-Rincón</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Numerical simulations of flow in complex biological structures have gained considerable attention in the last years. However, the major issue is the validation of the results. The present work shows a Particle Image Velocimetry PIV flow visualization technique in complex biological structures, particularly in intracranial aneurysms. A methodology to reconstruct and generate a transparent model has been developed, as well as visualization and particle tracking techniques. The generated transparent models allow visualizing the flow patterns with a regular camera using the visualization techniques. The final goal is to use visualization as a tool to provide more information on the treatment and surgery decisions in aneurysms. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=aneurysms" title="aneurysms">aneurysms</a>, <a href="https://publications.waset.org/abstracts/search?q=PIV" title=" PIV"> PIV</a>, <a href="https://publications.waset.org/abstracts/search?q=flow%20visualization" title=" flow visualization"> flow visualization</a>, <a href="https://publications.waset.org/abstracts/search?q=particle%20tracking" title=" particle tracking"> particle tracking</a> </p> <a href="https://publications.waset.org/abstracts/165909/flow-visualization-in-biological-complex-geometries-for-personalized-medicine" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/165909.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">90</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">11369</span> Optical Flow Localisation and Appearance Mapping (OFLAAM) for Long-Term Navigation</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Daniel%20Pastor">Daniel Pastor</a>, <a href="https://publications.waset.org/abstracts/search?q=Hyo-Sang%20Shin"> Hyo-Sang Shin</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This paper presents a novel method to use optical flow navigation for long-term navigation. Unlike standard SLAM approaches for augmented reality, OFLAAM is designed for Micro Air Vehicles (MAV). It uses an optical flow camera pointing downwards, an IMU and a monocular camera pointing frontwards. That configuration avoids the expensive mapping and tracking of the 3D features. It only maps these features in a vocabulary list by a localization module to tackle the loss of the navigation estimation. That module, based on the well-established algorithm DBoW2, will be also used to close the loop and allow long-term navigation in confined areas. That combination of high-speed optical flow navigation with a low rate localization algorithm allows fully autonomous navigation for MAV, at the same time it reduces the overall computational load. This framework is implemented in ROS (Robot Operating System) and tested attached to a laptop. A representative scenarios is used to analyse the performance of the system. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=vision" title="vision">vision</a>, <a href="https://publications.waset.org/abstracts/search?q=UAV" title=" UAV"> UAV</a>, <a href="https://publications.waset.org/abstracts/search?q=navigation" title=" navigation"> navigation</a>, <a href="https://publications.waset.org/abstracts/search?q=SLAM" title=" SLAM"> SLAM</a> </p> <a href="https://publications.waset.org/abstracts/20509/optical-flow-localisation-and-appearance-mapping-oflaam-for-long-term-navigation" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/20509.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">606</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">11368</span> Development of Modular Shortest Path Navigation System</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Nalinee%20Sophatsathit">Nalinee Sophatsathit</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This paper presents a variation of navigation systems which tallies every node along the shortest path from start to destination nodes. The underlying technique rests on the well-established Dijkstra Algorithm. The ultimate goal is to serve as a user navigation guide that furnishes stop over cost of every node along this shortest path, whereby users can decide whether or not to visit any specific nodes. The output is an implementable module that can be further refined to run on the Internet and smartphone technology. This will benefit large organizations having physical installations spreaded over wide area such as hospitals, universities, etc. The savings on service personnel, let alone lost time and unproductive work, are attributive to innovative navigation system management. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=navigation%20systems" title="navigation systems">navigation systems</a>, <a href="https://publications.waset.org/abstracts/search?q=shortest%20path" title=" shortest path"> shortest path</a>, <a href="https://publications.waset.org/abstracts/search?q=smartphone%20technology" title=" smartphone technology"> smartphone technology</a>, <a href="https://publications.waset.org/abstracts/search?q=user%20navigation%20guide" title=" user navigation guide"> user navigation guide</a> </p> <a href="https://publications.waset.org/abstracts/12201/development-of-modular-shortest-path-navigation-system" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/12201.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">338</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">11367</span> Examination of Readiness of Teachers in the Use of Information-Communication Technologies in the Classroom</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Nikolina%20Ribari%C4%87">Nikolina Ribarić</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This paper compares the readiness of chemistry teachers to use information and communication technologies in chemistry in 2018. and 2021. A survey conducted in 2018 on a sample of teachers showed that most teachers occasionally use visualization and digitization tools in chemistry teaching (65%) but feel that they are not educated enough to use them (56%). Also, most teachers do not have adequate equipment in their schools and are not able to use ICT in teaching or digital tools for visualization and digitization of content (44%). None of the teachers find the use of digitization and visualization tools useless. Furthermore, a survey conducted in 2021 shows that most teachers occasionally use visualization and digitization tools in chemistry teaching (83%). Also, the research shows that some teachers still do not have adequate equipment in their schools and are not able to use ICT in chemistry teaching or digital tools for visualization and digitization of content (14%). Advances in the use of ICT in chemistry teaching are linked to pandemic conditions and the obligation to conduct online teaching. The share of 14% of teachers who still do not have adequate equipment to use digital tools in teaching is worrying. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=chemistry" title="chemistry">chemistry</a>, <a href="https://publications.waset.org/abstracts/search?q=digital%20content" title=" digital content"> digital content</a>, <a href="https://publications.waset.org/abstracts/search?q=e-learning" title=" e-learning"> e-learning</a>, <a href="https://publications.waset.org/abstracts/search?q=ICT" title=" ICT"> ICT</a>, <a href="https://publications.waset.org/abstracts/search?q=visualization" title=" visualization"> visualization</a> </p> <a href="https://publications.waset.org/abstracts/144099/examination-of-readiness-of-teachers-in-the-use-of-information-communication-technologies-in-the-classroom" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/144099.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">155</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">11366</span> Transforming Healthcare with Immersive Visualization: An Analysis of Virtual and Holographic Health Information Platforms</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Hossein%20Miri">Hossein Miri</a>, <a href="https://publications.waset.org/abstracts/search?q=Zhou%20YongQi"> Zhou YongQi</a>, <a href="https://publications.waset.org/abstracts/search?q=Chan%20Bormei-Suy"> Chan Bormei-Suy</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The development of advanced technologies and innovative solutions has opened up exciting new possibilities for revolutionizing healthcare systems. One such emerging concept is the use of virtual and holographic health information platforms that aim to provide interactive and personalized medical information to users. This paper provides a review of notable virtual and holographic health information platforms. It begins by highlighting the need for information visualization and 3D representation in healthcare. It then proceeds to provide background knowledge on information visualization and historical developments in 3D visualization technology. Additional domain knowledge concerning holography, holographic computing, and mixed reality is then introduced, followed by highlighting some of their common applications and use cases. After setting the scene and defining the context, the need and importance of virtual and holographic visualization in medicine are discussed. Subsequently, some of the current research areas and applications of digital holography and holographic technology are explored, alongside the importance and role of virtual and holographic visualization in genetics and genomics. An analysis of the key principles and concepts underlying virtual and holographic health information systems is presented, as well as their potential implications for healthcare are pointed out. The paper concludes by examining the most notable existing mixed-reality applications and systems that help doctors visualize diagnostic and genetic data and assist in patient education and communication. This paper is intended to be a valuable resource for researchers, developers, and healthcare professionals who are interested in the use of virtual and holographic technologies to improve healthcare. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=virtual" title="virtual">virtual</a>, <a href="https://publications.waset.org/abstracts/search?q=holographic" title=" holographic"> holographic</a>, <a href="https://publications.waset.org/abstracts/search?q=health%20information%20platform" title=" health information platform"> health information platform</a>, <a href="https://publications.waset.org/abstracts/search?q=personalized%20interactive%20medical%20information" title=" personalized interactive medical information"> personalized interactive medical information</a> </p> <a href="https://publications.waset.org/abstracts/171144/transforming-healthcare-with-immersive-visualization-an-analysis-of-virtual-and-holographic-health-information-platforms" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/171144.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">89</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">11365</span> Hybrid Control Mode Based on Multi-Sensor Information by Fuzzy Approach for Navigation Task of Autonomous Mobile Robot</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Jonqlan%20Lin">Jonqlan Lin</a>, <a href="https://publications.waset.org/abstracts/search?q=C.%20Y.%20Tasi"> C. Y. Tasi</a>, <a href="https://publications.waset.org/abstracts/search?q=K.%20H.%20Lin"> K. H. Lin</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This paper addresses the issue of the autonomous mobile robot (AMR) navigation task based on the hybrid control modes. The novel hybrid control mode, based on multi-sensors information by using the fuzzy approach, has been presented in this research. The system operates in real time, is robust, enables the robot to operate with imprecise knowledge, and takes into account the physical limitations of the environment in which the robot moves, obtaining satisfactory responses for a large number of different situations. An experiment is simulated and carried out with a pioneer mobile robot. From the experimental results, the effectiveness and usefulness of the proposed AMR obstacle avoidance and navigation scheme are confirmed. The experimental results show the feasibility, and the control system has improved the navigation accuracy. The implementation of the controller is robust, has a low execution time, and allows an easy design and tuning of the fuzzy knowledge base. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=autonomous%20mobile%20robot" title="autonomous mobile robot">autonomous mobile robot</a>, <a href="https://publications.waset.org/abstracts/search?q=obstacle%20avoidance" title=" obstacle avoidance"> obstacle avoidance</a>, <a href="https://publications.waset.org/abstracts/search?q=MEMS" title=" MEMS"> MEMS</a>, <a href="https://publications.waset.org/abstracts/search?q=hybrid%20control%20mode" title=" hybrid control mode"> hybrid control mode</a>, <a href="https://publications.waset.org/abstracts/search?q=navigation%20control" title=" navigation control"> navigation control</a> </p> <a href="https://publications.waset.org/abstracts/26893/hybrid-control-mode-based-on-multi-sensor-information-by-fuzzy-approach-for-navigation-task-of-autonomous-mobile-robot" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/26893.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">466</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">11364</span> Construction Information Visualization System Using nD CAD Model</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Hyeon-seoung%20Kim">Hyeon-seoung Kim</a>, <a href="https://publications.waset.org/abstracts/search?q=Sang-mi%20Park"> Sang-mi Park</a>, <a href="https://publications.waset.org/abstracts/search?q=Sun-ju%20Han"> Sun-ju Han</a>, <a href="https://publications.waset.org/abstracts/search?q=Leen-seok%20Kang"> Leen-seok Kang</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The visualization technology of construction information using 3D and nD modeling can satisfy the visualization needs of each construction project participant. The nD CAD system is a tool that the construction information, such as construction schedule, cost and resource utilization, are simulated by 4D, 5D and 6D object formats based on 3D object. This study developed a methodology and simulation engine for nD CAD system for construction project management. It has improved functions such as built-in schedule generation, cost simulation of changed budget and built-in resource allocation comparing with the current systems. To develop an integrated nD CAD system, this study attempts an integrated method to link 5D and 6D objects based on 4D object. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=building%20information%20modeling" title="building information modeling">building information modeling</a>, <a href="https://publications.waset.org/abstracts/search?q=visual%20simulation" title=" visual simulation"> visual simulation</a>, <a href="https://publications.waset.org/abstracts/search?q=3D%20object" title=" 3D object"> 3D object</a>, <a href="https://publications.waset.org/abstracts/search?q=nD%20CAD%20augmented%20reality" title=" nD CAD augmented reality"> nD CAD augmented reality</a> </p> <a href="https://publications.waset.org/abstracts/43696/construction-information-visualization-system-using-nd-cad-model" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/43696.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">312</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">11363</span> A Short-Baseline Dual-Antenna BDS/MEMS-IMU Integrated Navigation System</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Tijing%20Cai">Tijing Cai</a>, <a href="https://publications.waset.org/abstracts/search?q=Qimeng%20Xu"> Qimeng Xu</a>, <a href="https://publications.waset.org/abstracts/search?q=Daijin%20Zhou"> Daijin Zhou</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This paper puts forward a short-baseline dual-antenna BDS/MEMS-IMU integrated navigation, constructs the carrier phase double difference model of BDS (BeiDou Navigation Satellite System), and presents a 2-position initial orientation method on BDS. The Extended Kalman-filter has been introduced for the integrated navigation system. The differences between MEMS-IMU and BDS position, velocity and carrier phase indications are used as measurements. To show the performance of the short-baseline dual-antenna BDS/MEMS-IMU integrated navigation system, the experiment results show that the position error is less than 1m, the pitch angle error and roll angle error are less than 0.1°, and the heading angle error is about 1°. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=MEMS-IMU%20%28Micro-Electro-Mechanical%20System%20Inertial%20Measurement%20Unit%29" title="MEMS-IMU (Micro-Electro-Mechanical System Inertial Measurement Unit)">MEMS-IMU (Micro-Electro-Mechanical System Inertial Measurement Unit)</a>, <a href="https://publications.waset.org/abstracts/search?q=BDS%20%28BeiDou%20Navigation%20Satellite%20System%29" title=" BDS (BeiDou Navigation Satellite System)"> BDS (BeiDou Navigation Satellite System)</a>, <a href="https://publications.waset.org/abstracts/search?q=dual-antenna" title=" dual-antenna"> dual-antenna</a>, <a href="https://publications.waset.org/abstracts/search?q=integrated%20navigation" title=" integrated navigation"> integrated navigation</a> </p> <a href="https://publications.waset.org/abstracts/97626/a-short-baseline-dual-antenna-bdsmems-imu-integrated-navigation-system" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/97626.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">193</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">11362</span> Effects of Structure on Density-Induced Flow in Coastal and Estuarine Navigation Channel</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Shuo%20Huang">Shuo Huang</a>, <a href="https://publications.waset.org/abstracts/search?q=Huomiao%20Guo"> Huomiao Guo</a>, <a href="https://publications.waset.org/abstracts/search?q=Wenrui%20Huang"> Wenrui Huang</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In navigation channels located in coasts and estuaries as the waterways connecting coastal water to ports or harbors, density-induced flow often exist due to the density-gradient or gravity gradient as the results of mixing between fresh water from coastal rivers and saline water in the coasts. The density-induced flow often carries sediment transport into navigation channels and causes sediment depositions in the channels. As a result, expensive dredging may need to maintain the water depth required for navigation. In our study, we conduct a series of experiments to investigate the characteristics of density-induced flow in the estuarine navigation channels under different density gradients. Empirical equations between density flow and salinity gradient were derived. Effects of coastal structures for regulating navigation channel on density-induced flow have also been investigated. Results will be very helpful for improving the understanding of the characteristics of density-induced flow in estuarine navigation channels. The results will also provide technical support for cost-effective waterway regulation and management to maintain coastal and estuarine navigation channels. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=density%20flow" title="density flow">density flow</a>, <a href="https://publications.waset.org/abstracts/search?q=estuarine" title=" estuarine"> estuarine</a>, <a href="https://publications.waset.org/abstracts/search?q=navigation%20channel" title=" navigation channel"> navigation channel</a>, <a href="https://publications.waset.org/abstracts/search?q=structure" title=" structure"> structure</a> </p> <a href="https://publications.waset.org/abstracts/119059/effects-of-structure-on-density-induced-flow-in-coastal-and-estuarine-navigation-channel" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/119059.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">258</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">11361</span> Dissimilarity-Based Coloring for Symbolic and Multivariate Data Visualization</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=K.%20Umbleja">K. Umbleja</a>, <a href="https://publications.waset.org/abstracts/search?q=M.%20Ichino"> M. Ichino</a>, <a href="https://publications.waset.org/abstracts/search?q=H.%20Yaguchi"> H. Yaguchi</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In this paper, we propose a coloring method for multivariate data visualization by using parallel coordinates based on dissimilarity and tree structure information gathered during hierarchical clustering. The proposed method is an extension for proximity-based coloring that suffers from a few undesired side effects if hierarchical tree structure is not balanced tree. We describe the algorithm by assigning colors based on dissimilarity information, show the application of proposed method on three commonly used datasets, and compare the results with proximity-based coloring. We found our proposed method to be especially beneficial for symbolic data visualization where many individual objects have already been aggregated into a single symbolic object. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=data%20visualization" title="data visualization">data visualization</a>, <a href="https://publications.waset.org/abstracts/search?q=dissimilarity-based%20coloring" title=" dissimilarity-based coloring"> dissimilarity-based coloring</a>, <a href="https://publications.waset.org/abstracts/search?q=proximity-based%20coloring" title=" proximity-based coloring"> proximity-based coloring</a>, <a href="https://publications.waset.org/abstracts/search?q=symbolic%20data" title=" symbolic data"> symbolic data</a> </p> <a href="https://publications.waset.org/abstracts/92191/dissimilarity-based-coloring-for-symbolic-and-multivariate-data-visualization" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/92191.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">170</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">11360</span> Integrated Navigation System Using Simplified Kalman Filter Algorithm </h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Othman%20Maklouf">Othman Maklouf</a>, <a href="https://publications.waset.org/abstracts/search?q=Abdunnaser%20Tresh"> Abdunnaser Tresh</a> </p> <p class="card-text"><strong>Abstract:</strong></p> GPS and inertial navigation system (INS) have complementary qualities that make them ideal use for sensor fusion. The limitations of GPS include occasional high noise content, outages when satellite signals are blocked, interference and low bandwidth. The strengths of GPS include its long-term stability and its capacity to function as a stand-alone navigation system. In contrast, INS is not subject to interference or outages, have high bandwidth and good short-term noise characteristics, but have long-term drift errors and require external information for initialization. A combined system of GPS and INS subsystems can exhibit the robustness, higher bandwidth and better noise characteristics of the inertial system with the long-term stability of GPS. The most common estimation algorithm used in integrated INS/GPS is the Kalman Filter (KF). KF is able to take advantages of these characteristics to provide a common integrated navigation implementation with performance superior to that of either subsystem (GPS or INS). This paper presents a simplified KF algorithm for land vehicle navigation application. In this integration scheme, the GPS derived positions and velocities are used as the update measurements for the INS derived PVA. The KF error state vector in this case includes the navigation parameters as well as the accelerometer and gyroscope error states. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=GPS" title="GPS">GPS</a>, <a href="https://publications.waset.org/abstracts/search?q=INS" title=" INS"> INS</a>, <a href="https://publications.waset.org/abstracts/search?q=Kalman%20filter" title=" Kalman filter"> Kalman filter</a>, <a href="https://publications.waset.org/abstracts/search?q=inertial%20navigation%20system" title=" inertial navigation system"> inertial navigation system</a> </p> <a href="https://publications.waset.org/abstracts/11049/integrated-navigation-system-using-simplified-kalman-filter-algorithm" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/11049.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">471</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">11359</span> Visual Text Analytics Technologies for Real-Time Big Data: Chronological Evolution and Issues</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Siti%20Azrina%20B.%20A.%20Aziz">Siti Azrina B. A. Aziz</a>, <a href="https://publications.waset.org/abstracts/search?q=Siti%20Hafizah%20A.%20Hamid"> Siti Hafizah A. Hamid</a> </p> <p class="card-text"><strong>Abstract:</strong></p> New approaches to analyze and visualize data stream in real-time basis is important in making a prompt decision by the decision maker. Financial market trading and surveillance, large-scale emergency response and crowd control are some example scenarios that require real-time analytic and data visualization. This situation has led to the development of techniques and tools that support humans in analyzing the source data. With the emergence of Big Data and social media, new techniques and tools are required in order to process the streaming data. Today, ranges of tools which implement some of these functionalities are available. In this paper, we present chronological evolution evaluation of technologies for supporting of real-time analytic and visualization of the data stream. Based on the past research papers published from 2002 to 2014, we gathered the general information, main techniques, challenges and open issues. The techniques for streaming text visualization are identified based on Text Visualization Browser in chronological order. This paper aims to review the evolution of streaming text visualization techniques and tools, as well as to discuss the problems and challenges for each of identified tools. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=information%20visualization" title="information visualization">information visualization</a>, <a href="https://publications.waset.org/abstracts/search?q=visual%20analytics" title=" visual analytics"> visual analytics</a>, <a href="https://publications.waset.org/abstracts/search?q=text%20mining" title=" text mining"> text mining</a>, <a href="https://publications.waset.org/abstracts/search?q=visual%20text%20analytics%20tools" title=" visual text analytics tools"> visual text analytics tools</a>, <a href="https://publications.waset.org/abstracts/search?q=big%20data%20visualization" title=" big data visualization"> big data visualization</a> </p> <a href="https://publications.waset.org/abstracts/36745/visual-text-analytics-technologies-for-real-time-big-data-chronological-evolution-and-issues" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/36745.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">399</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">11358</span> Accuracy of Autonomy Navigation of Unmanned Aircraft Systems through Imagery</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Sidney%20A.%20Lima">Sidney A. Lima</a>, <a href="https://publications.waset.org/abstracts/search?q=Hermann%20J.%20H.%20Kux"> Hermann J. H. Kux</a>, <a href="https://publications.waset.org/abstracts/search?q=Elcio%20H.%20Shiguemori"> Elcio H. Shiguemori</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The Unmanned Aircraft Systems (UAS) usually navigate through the Global Navigation Satellite System (GNSS) associated with an Inertial Navigation System (INS). However, GNSS can have its accuracy degraded at any time or even turn off the signal of GNSS. In addition, there is the possibility of malicious interferences, known as jamming. Therefore, the image navigation system can solve the autonomy problem, because if the GNSS is disabled or degraded, the image navigation system would continue to provide coordinate information for the INS, allowing the autonomy of the system. This work aims to evaluate the accuracy of the positioning though photogrammetry concepts. The methodology uses orthophotos and Digital Surface Models (DSM) as a reference to represent the object space and photograph obtained during the flight to represent the image space. For the calculation of the coordinates of the perspective center and camera attitudes, it is necessary to know the coordinates of homologous points in the object space (orthophoto coordinates and DSM altitude) and image space (column and line of the photograph). So if it is possible to automatically identify in real time the homologous points the coordinates and attitudes can be calculated whit their respective accuracies. With the methodology applied in this work, it is possible to verify maximum errors in the order of 0.5 m in the positioning and 0.6&ordm; in the attitude of the camera, so the navigation through the image can reach values equal to or higher than the GNSS receivers without differential correction. Therefore, navigating through the image is a good alternative to enable autonomous navigation. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=autonomy" title="autonomy">autonomy</a>, <a href="https://publications.waset.org/abstracts/search?q=navigation" title=" navigation"> navigation</a>, <a href="https://publications.waset.org/abstracts/search?q=security" title=" security"> security</a>, <a href="https://publications.waset.org/abstracts/search?q=photogrammetry" title=" photogrammetry"> photogrammetry</a>, <a href="https://publications.waset.org/abstracts/search?q=remote%20sensing" title=" remote sensing"> remote sensing</a>, <a href="https://publications.waset.org/abstracts/search?q=spatial%20resection" title=" spatial resection"> spatial resection</a>, <a href="https://publications.waset.org/abstracts/search?q=UAS" title=" UAS"> UAS</a> </p> <a href="https://publications.waset.org/abstracts/91629/accuracy-of-autonomy-navigation-of-unmanned-aircraft-systems-through-imagery" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/91629.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">191</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">11357</span> Enhanced Iceberg Information Dissemination for Public and Autonomous Maritime Use</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Ronald%20Mraz">Ronald Mraz</a>, <a href="https://publications.waset.org/abstracts/search?q=Gary%20C.%20Kessler"> Gary C. Kessler</a>, <a href="https://publications.waset.org/abstracts/search?q=Ethan%20Gold"> Ethan Gold</a>, <a href="https://publications.waset.org/abstracts/search?q=John%20G.%20Cline"> John G. Cline</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The International Ice Patrol (IIP) continually monitors iceberg activity in the North Atlantic by direct observation using ships, aircraft, and satellite imagery. Daily reports detailing navigational boundaries of icebergs have significantly reduced the risk of iceberg contact. What is currently lacking is formatting this data for automatic transmission and display of iceberg navigational boundaries in commercial navigation equipment. This paper describes the methodology and implementation of a system to format iceberg limit information for dissemination through existing radio network communications. This information will then automatically display on commercial navigation equipment. Additionally, this information is reformatted for Google Earth rendering of iceberg track line limits. Having iceberg limit information automatically available in standard navigation equipment will help support full autonomous operation of sailing vessels. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=iceberg" title="iceberg">iceberg</a>, <a href="https://publications.waset.org/abstracts/search?q=iceberg%20risk" title=" iceberg risk"> iceberg risk</a>, <a href="https://publications.waset.org/abstracts/search?q=iceberg%20track%20lines" title=" iceberg track lines"> iceberg track lines</a>, <a href="https://publications.waset.org/abstracts/search?q=AIS%20messaging" title=" AIS messaging"> AIS messaging</a>, <a href="https://publications.waset.org/abstracts/search?q=international%20ice%20patrol" title=" international ice patrol"> international ice patrol</a>, <a href="https://publications.waset.org/abstracts/search?q=North%20American%20ice%20service" title=" North American ice service"> North American ice service</a>, <a href="https://publications.waset.org/abstracts/search?q=google%20earth" title=" google earth"> google earth</a>, <a href="https://publications.waset.org/abstracts/search?q=autonomous%20surface%20vessels" title=" autonomous surface vessels"> autonomous surface vessels</a> </p> <a href="https://publications.waset.org/abstracts/125523/enhanced-iceberg-information-dissemination-for-public-and-autonomous-maritime-use" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/125523.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">137</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">11356</span> Revolutionary Solutions for Modeling and Visualization of Complex Software Systems</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Jay%20Xiong">Jay Xiong</a>, <a href="https://publications.waset.org/abstracts/search?q=Li%20Lin"> Li Lin</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Existing software modeling and visualization approaches using UML are outdated, which are outcomes of reductionism and the superposition principle that the whole of a system is the sum of its parts, so that with them all tasks of software modeling and visualization are performed linearly, partially, and locally. This paper introduces revolutionary solutions for modeling and visualization of complex software systems, which make complex software systems much easy to understand, test, and maintain. The solutions are based on complexity science, offering holistic, automatic, dynamic, virtual, and executable approaches about thousand times more efficient than the traditional ones. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=complex%20systems" title="complex systems">complex systems</a>, <a href="https://publications.waset.org/abstracts/search?q=software%20maintenance" title=" software maintenance"> software maintenance</a>, <a href="https://publications.waset.org/abstracts/search?q=software%20modeling" title=" software modeling"> software modeling</a>, <a href="https://publications.waset.org/abstracts/search?q=software%20visualization" title=" software visualization"> software visualization</a> </p> <a href="https://publications.waset.org/abstracts/41451/revolutionary-solutions-for-modeling-and-visualization-of-complex-software-systems" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/41451.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">401</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">11355</span> Alive Cemeteries with Augmented Reality and Semantic Web Technologies</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Tam%C3%A1s%20Matuszka">Tamás Matuszka</a>, <a href="https://publications.waset.org/abstracts/search?q=Attila%20Kiss"> Attila Kiss</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Due the proliferation of smartphones in everyday use, several different outdoor navigation systems have become available. Since these smartphones are able to connect to the Internet, the users can obtain location-based information during the navigation as well. The users could interactively get to know the specifics of a particular area (for instance, ancient cultural area, Statue Park, cemetery) with the help of thus obtained information. In this paper, we present an Augmented Reality system which uses Semantic Web technologies and is based on the interaction between the user and the smartphone. The system allows navigating through a specific area and provides information and details about the sight an interactive manner. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=augmented%20reality" title="augmented reality">augmented reality</a>, <a href="https://publications.waset.org/abstracts/search?q=semantic%20web" title=" semantic web"> semantic web</a>, <a href="https://publications.waset.org/abstracts/search?q=human%20computer%20interaction" title=" human computer interaction"> human computer interaction</a>, <a href="https://publications.waset.org/abstracts/search?q=mobile%20application" title=" mobile application"> mobile application</a> </p> <a href="https://publications.waset.org/abstracts/5418/alive-cemeteries-with-augmented-reality-and-semantic-web-technologies" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/5418.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">340</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">11354</span> Performance Analysis of Geophysical Database Referenced Navigation: The Combination of Gravity Gradient and Terrain Using Extended Kalman Filter</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Jisun%20Lee">Jisun Lee</a>, <a href="https://publications.waset.org/abstracts/search?q=Jay%20Hyoun%20Kwon"> Jay Hyoun Kwon</a> </p> <p class="card-text"><strong>Abstract:</strong></p> As an alternative way to compensate the INS (inertial navigation system) error in non-GNSS (Global Navigation Satellite System) environment, geophysical database referenced navigation is being studied. In this study, both gravity gradient and terrain data were combined to complement the weakness of sole geophysical data as well as to improve the stability of the positioning. The main process to compensate the INS error using geophysical database was constructed on the basis of the EKF (Extended Kalman Filter). In detail, two type of combination method, centralized and decentralized filter, were applied to check the pros and cons of its algorithm and to find more robust results. The performance of each navigation algorithm was evaluated based on the simulation by supposing that the aircraft flies with precise geophysical DB and sensors above nine different trajectories. Especially, the results were compared to the ones from sole geophysical database referenced navigation to check the improvement due to a combination of the heterogeneous geophysical database. It was found that the overall navigation performance was improved, but not all trajectories generated better navigation result by the combination of gravity gradient with terrain data. Also, it was found that the centralized filter generally showed more stable results. It is because that the way to allocate the weight for the decentralized filter could not be optimized due to the local inconsistency of geophysical data. In the future, switching of geophysical data or combining different navigation algorithm are necessary to obtain more robust navigation results. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=Extended%20Kalman%20Filter" title="Extended Kalman Filter">Extended Kalman Filter</a>, <a href="https://publications.waset.org/abstracts/search?q=geophysical%20database%20referenced%20navigation" title=" geophysical database referenced navigation"> geophysical database referenced navigation</a>, <a href="https://publications.waset.org/abstracts/search?q=gravity%20gradient" title=" gravity gradient"> gravity gradient</a>, <a href="https://publications.waset.org/abstracts/search?q=terrain" title=" terrain "> terrain </a> </p> <a href="https://publications.waset.org/abstracts/67266/performance-analysis-of-geophysical-database-referenced-navigation-the-combination-of-gravity-gradient-and-terrain-using-extended-kalman-filter" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/67266.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">349</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">11353</span> Analysis of Autonomous Orbit Determination for Lagrangian Navigation Constellation with Different Dynamical Models</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Gao%20Youtao">Gao Youtao</a>, <a href="https://publications.waset.org/abstracts/search?q=Zhao%20Tanran"> Zhao Tanran</a>, <a href="https://publications.waset.org/abstracts/search?q=Jin%20Bingyu"> Jin Bingyu</a>, <a href="https://publications.waset.org/abstracts/search?q=Xu%20Bo"> Xu Bo</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Global navigation satellite system(GNSS) can deliver navigation information for spacecraft orbiting on low-Earth orbits and medium Earth orbits. However, the GNSS cannot navigate the spacecraft on high-Earth orbit or deep space probes effectively. With the deep space exploration becoming a hot spot of aerospace, the demand for a deep space satellite navigation system is becoming increasingly prominent. Many researchers discussed the feasibility and performance of a satellite navigation system on periodic orbits around the Earth-Moon libration points which can be called Lagrangian point satellite navigation system. Autonomous orbit determination (AOD) is an important performance for the Lagrangian point satellite navigation system. With this ability, the Lagrangian point satellite navigation system can reduce the dependency on ground stations. AOD also can greatly reduce total system cost and assure mission continuity. As the elliptical restricted three-body problem can describe the Earth-Moon system more accurately than the circular restricted three-body problem, we study the autonomous orbit determination of Lagrangian navigation constellation using only crosslink range based on elliptical restricted three body problem. Extended Kalman filter is used in the autonomous orbit determination. In order to compare the autonomous orbit determination results based on elliptical restricted three-body problem to the results of autonomous orbit determination based on circular restricted three-body problem, we give the autonomous orbit determination position errors of a navigation constellation include four satellites based on the circular restricted three-body problem. The simulation result shows that the Lagrangian navigation constellation can achieve long-term precise autonomous orbit determination using only crosslink range. In addition, the type of the libration point orbit will influence the autonomous orbit determination accuracy. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=extended%20Kalman%20filter" title="extended Kalman filter">extended Kalman filter</a>, <a href="https://publications.waset.org/abstracts/search?q=autonomous%20orbit%20determination" title=" autonomous orbit determination"> autonomous orbit determination</a>, <a href="https://publications.waset.org/abstracts/search?q=quasi-periodic%20orbit" title=" quasi-periodic orbit"> quasi-periodic orbit</a>, <a href="https://publications.waset.org/abstracts/search?q=navigation%20constellation" title=" navigation constellation"> navigation constellation</a> </p> <a href="https://publications.waset.org/abstracts/72040/analysis-of-autonomous-orbit-determination-for-lagrangian-navigation-constellation-with-different-dynamical-models" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/72040.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">282</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">11352</span> Evaluation of UI for 3D Visualization-Based Building Information Applications</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Monisha%20Pattanaik">Monisha Pattanaik</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In scenarios where users have to work with large amounts of hierarchical data structures combined with visualizations (For example, Construction 3d Models, Manufacturing equipment's models, Gantt charts, Building Plans), the data structures have a high density in terms of consisting multiple parent nodes up to 50 levels and their siblings to descendants, therefore convey an immediate feeling of complexity. With customers moving to consumer-grade enterprise software, it is crucial to make sophisticated features made available to touch devices or smaller screen sizes. This paper evaluates the UI component that allows users to scroll through all deep density levels using a slider overlay on top of the hierarchy table, performing several actions to focus on one set of objects at any point in time. This overlay component also solves the problem of excessive horizontal scrolling of the entire table on a fixed pane for a hierarchical table. This component can be customized to navigate through parents, only siblings, or a specific component of the hierarchy only. The evaluation of the UI component was done by End Users of application and Human-Computer Interaction (HCI) experts to test the UI component's usability with statistical results and recommendations to handle complex hierarchical data visualizations. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=building%20information%20modeling" title="building information modeling">building information modeling</a>, <a href="https://publications.waset.org/abstracts/search?q=digital%20twin" title=" digital twin"> digital twin</a>, <a href="https://publications.waset.org/abstracts/search?q=navigation" title=" navigation"> navigation</a>, <a href="https://publications.waset.org/abstracts/search?q=UI%20component" title=" UI component"> UI component</a>, <a href="https://publications.waset.org/abstracts/search?q=user%20interface" title=" user interface"> user interface</a>, <a href="https://publications.waset.org/abstracts/search?q=usability" title=" usability"> usability</a>, <a href="https://publications.waset.org/abstracts/search?q=visualization" title=" visualization"> visualization</a> </p> <a href="https://publications.waset.org/abstracts/128914/evaluation-of-ui-for-3d-visualization-based-building-information-applications" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/128914.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">138</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">11351</span> Compass Bar: A Visualization Technique for Out-of-View-Objects in Head-Mounted Displays</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Alessandro%20Evangelista">Alessandro Evangelista</a>, <a href="https://publications.waset.org/abstracts/search?q=Vito%20M.%20Manghisi"> Vito M. Manghisi</a>, <a href="https://publications.waset.org/abstracts/search?q=Michele%20Gattullo"> Michele Gattullo</a>, <a href="https://publications.waset.org/abstracts/search?q=Enricoandrea%20Laviola"> Enricoandrea Laviola</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In this work, we propose a custom visualization technique for Out-Of-View-Objects in Virtual and Augmented Reality applications using Head Mounted Displays. In the last two decades, Augmented Reality (AR) and Virtual Reality (VR) technologies experienced a remarkable growth of applications for navigation, interaction, and collaboration in different types of environments, real or virtual. Both environments can be potentially very complex, as they can include many virtual objects located in different places. Given the natural limitation of the human Field of View (about 210° horizontal and 150° vertical), humans cannot perceive objects outside this angular range. Moreover, despite recent technological advances in AR e VR Head-Mounted Displays (HMDs), these devices still suffer from a limited Field of View, especially regarding Optical See-Through displays, thus greatly amplifying the challenge of visualizing out-of-view objects. This problem is not negligible when the user needs to be aware of the number and the position of the out-of-view objects in the environment. For instance, during a maintenance operation on a construction site where virtual objects serve to improve the dangers' awareness. Providing such information can enhance the comprehension of the scene, enable fast navigation and focused search, and improve users' safety. In our research, we investigated how to represent out-of-view-objects in HMD User Interfaces (UI). Inspired by commercial video games such as Call of Duty Modern Warfare, we designed a customized Compass. By exploiting the Unity 3D graphics engine, we implemented our custom solution that can be used both in AR and VR environments. The Compass Bar consists of a graduated bar (in degrees) at the top center of the UI. The values of the bar range from -180 (far left) to +180 (far right), the zero is placed in front of the user. Two vertical lines on the bar show the amplitude of the user's field of view. Every virtual object within the scene is represented onto the compass bar as a specific color-coded proxy icon (a circular ring with a colored dot at its center). To provide the user with information about the distance, we implemented a specific algorithm that increases the size of the inner dot as the user approaches the virtual object (i.e., when the user reaches the object, the dot fills the ring). This visualization technique for out-of-view objects has some advantages. It allows users to be quickly aware of the number and the position of the virtual objects in the environment. For instance, if the compass bar displays the proxy icon at about +90, users will immediately know that the virtual object is to their right and so on. Furthermore, by having qualitative information about the distance, users can optimize their speed, thus gaining effectiveness in their work. Given the small size and position of the Compass Bar, our solution also helps lessening the occlusion problem thus increasing user acceptance and engagement. As soon as the lockdown measures will allow, we will carry out user-tests comparing this solution with other state-of-the-art existing ones such as 3D Radar, SidebARs and EyeSee360. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=augmented%20reality" title="augmented reality">augmented reality</a>, <a href="https://publications.waset.org/abstracts/search?q=situation%20awareness" title=" situation awareness"> situation awareness</a>, <a href="https://publications.waset.org/abstracts/search?q=virtual%20reality" title=" virtual reality"> virtual reality</a>, <a href="https://publications.waset.org/abstracts/search?q=visualization%20design" title=" visualization design"> visualization design</a> </p> <a href="https://publications.waset.org/abstracts/128619/compass-bar-a-visualization-technique-for-out-of-view-objects-in-head-mounted-displays" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/128619.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">127</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">11350</span> Tactile Cues and Spatial Navigation in Mice</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Rubaiyea%20Uddin">Rubaiyea Uddin</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The hippocampus, located in the limbic system, is most commonly known for its role in memory and spatial navigation (as cited in Brain Reward and Pathways). It maintains an especially important role in specifically episodic and declarative memory. The hippocampus has also recently been linked to dopamine, the reward pathway’s primary neurotransmitter. Since research has found that dopamine also contributes to memory consolidation and hippocampal plasticity, this neurotransmitter is potentially responsible for contributing to the hippocampus’s role in memory formation. In this experiment we tested to see the effect of tactile cues on spatial navigation for eight different mice. We used a radial arm that had one designated 'reward' arm containing sucrose. The presence or absence of bedding was our tactile cue. We attempted to see if the memory of that cue would enhance the mice’s memory of having received the reward in that arm. The results from our study showed there was no significant response from the use of tactile cues on spatial navigation on our 129 mice. Tactile cues therefore do not influence spatial navigation. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=mice" title="mice">mice</a>, <a href="https://publications.waset.org/abstracts/search?q=radial%20arm%20maze" title=" radial arm maze"> radial arm maze</a>, <a href="https://publications.waset.org/abstracts/search?q=memory" title=" memory"> memory</a>, <a href="https://publications.waset.org/abstracts/search?q=spatial%20navigation" title=" spatial navigation"> spatial navigation</a>, <a href="https://publications.waset.org/abstracts/search?q=tactile%20cues" title=" tactile cues"> tactile cues</a>, <a href="https://publications.waset.org/abstracts/search?q=hippocampus" title=" hippocampus"> hippocampus</a>, <a href="https://publications.waset.org/abstracts/search?q=reward" title=" reward"> reward</a>, <a href="https://publications.waset.org/abstracts/search?q=sensory%20skills" title=" sensory skills"> sensory skills</a>, <a href="https://publications.waset.org/abstracts/search?q=Alzheimer%E2%80%99s" title=" Alzheimer’s"> Alzheimer’s</a>, <a href="https://publications.waset.org/abstracts/search?q=neurodegnerative%20disease" title=" neurodegnerative disease"> neurodegnerative disease</a> </p> <a href="https://publications.waset.org/abstracts/21710/tactile-cues-and-spatial-navigation-in-mice" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/21710.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">649</span> </span> </div> </div> <ul class="pagination"> <li class="page-item disabled"><span class="page-link">&lsaquo;</span></li> <li class="page-item active"><span class="page-link">1</span></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=navigation%20information%20visualization&amp;page=2">2</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=navigation%20information%20visualization&amp;page=3">3</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=navigation%20information%20visualization&amp;page=4">4</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=navigation%20information%20visualization&amp;page=5">5</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=navigation%20information%20visualization&amp;page=6">6</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=navigation%20information%20visualization&amp;page=7">7</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=navigation%20information%20visualization&amp;page=8">8</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=navigation%20information%20visualization&amp;page=9">9</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=navigation%20information%20visualization&amp;page=10">10</a></li> <li class="page-item disabled"><span class="page-link">...</span></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=navigation%20information%20visualization&amp;page=379">379</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=navigation%20information%20visualization&amp;page=380">380</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=navigation%20information%20visualization&amp;page=2" rel="next">&rsaquo;</a></li> </ul> </div> </main> <footer> <div id="infolinks" class="pt-3 pb-2"> <div class="container"> <div style="background-color:#f5f5f5;" class="p-3"> <div class="row"> <div class="col-md-2"> <ul class="list-unstyled"> About <li><a href="https://waset.org/page/support">About Us</a></li> <li><a href="https://waset.org/page/support#legal-information">Legal</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/WASET-16th-foundational-anniversary.pdf">WASET celebrates its 16th foundational anniversary</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Account <li><a href="https://waset.org/profile">My Account</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Explore <li><a href="https://waset.org/disciplines">Disciplines</a></li> <li><a href="https://waset.org/conferences">Conferences</a></li> <li><a href="https://waset.org/conference-programs">Conference Program</a></li> <li><a href="https://waset.org/committees">Committees</a></li> <li><a href="https://publications.waset.org">Publications</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Research <li><a href="https://publications.waset.org/abstracts">Abstracts</a></li> <li><a href="https://publications.waset.org">Periodicals</a></li> <li><a href="https://publications.waset.org/archive">Archive</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Open Science <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Science-Philosophy.pdf">Open Science Philosophy</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Science-Award.pdf">Open Science Award</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Society-Open-Science-and-Open-Innovation.pdf">Open Innovation</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Postdoctoral-Fellowship-Award.pdf">Postdoctoral Fellowship Award</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Scholarly-Research-Review.pdf">Scholarly Research Review</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Support <li><a href="https://waset.org/page/support">Support</a></li> <li><a href="https://waset.org/profile/messages/create">Contact Us</a></li> <li><a href="https://waset.org/profile/messages/create">Report Abuse</a></li> </ul> </div> </div> </div> </div> </div> <div class="container text-center"> <hr style="margin-top:0;margin-bottom:.3rem;"> <a href="https://creativecommons.org/licenses/by/4.0/" target="_blank" class="text-muted small">Creative Commons Attribution 4.0 International License</a> <div id="copy" class="mt-2">&copy; 2024 World Academy of Science, Engineering and Technology</div> </div> </footer> <a href="javascript:" id="return-to-top"><i class="fas fa-arrow-up"></i></a> <div class="modal" id="modal-template"> <div class="modal-dialog"> <div class="modal-content"> <div class="row m-0 mt-1"> <div class="col-md-12"> <button type="button" class="close" data-dismiss="modal" aria-label="Close"><span aria-hidden="true">&times;</span></button> </div> </div> <div class="modal-body"></div> </div> </div> </div> <script src="https://cdn.waset.org/static/plugins/jquery-3.3.1.min.js"></script> <script src="https://cdn.waset.org/static/plugins/bootstrap-4.2.1/js/bootstrap.bundle.min.js"></script> <script src="https://cdn.waset.org/static/js/site.js?v=150220211556"></script> <script> jQuery(document).ready(function() { /*jQuery.get("https://publications.waset.org/xhr/user-menu", function (response) { jQuery('#mainNavMenu').append(response); });*/ jQuery.get({ url: "https://publications.waset.org/xhr/user-menu", cache: false }).then(function(response){ jQuery('#mainNavMenu').append(response); }); }); </script> </body> </html>

Pages: 1 2 3 4 5 6 7 8 9 10