
Search results for: Yu Hongyang

href="https://publications.waset.org/abstracts">Abstracts</a> <a class="dropdown-item" href="https://publications.waset.org">Periodicals</a> <a class="dropdown-item" href="https://publications.waset.org/archive">Archive</a> </div> </li> <li class="nav-item"> <a class="nav-link" href="https://waset.org/page/support" title="Support">Support</a> </li> </ul> </div> </div> </nav> </div> </header> <main> <div class="container mt-4"> <div class="row"> <div class="col-md-9 mx-auto"> <form method="get" action="https://publications.waset.org/abstracts/search"> <div id="custom-search-input"> <div class="input-group"> <i class="fas fa-search"></i> <input type="text" class="search-query" name="q" placeholder="Author, Title, Abstract, Keywords" value="Yu Hongyang"> <input type="submit" class="btn_search" value="Search"> </div> </div> </form> </div> </div> <div class="row mt-3"> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Commenced</strong> in January 2007</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Frequency:</strong> Monthly</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Edition:</strong> International</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Paper Count:</strong> 11</div> </div> </div> </div> <h1 class="mt-3 mb-3 text-center" style="font-size:1.6rem;">Search results for: Yu Hongyang</h1> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">11</span> A Three-modal Authentication Method for Industrial Robots</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Luo%20Jiaoyang">Luo Jiaoyang</a>, <a href="https://publications.waset.org/abstracts/search?q=Yu%20Hongyang"> Yu Hongyang</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In this paper, we explore a method that can be used in the working scene of intelligent industrial robots to confirm the identity information of operators to ensure that the robot executes instructions in a sufficiently safe environment. This approach uses three information modalities, namely visible light, depth, and sound. We explored a variety of fusion modes for the three modalities and finally used the joint feature learning method to improve the performance of the model in the case of noise compared with the single-modal case, making the maximum noise in the experiment. It can also maintain an accuracy rate of more than 90%. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=multimodal" title="multimodal">multimodal</a>, <a href="https://publications.waset.org/abstracts/search?q=kinect" title=" kinect"> kinect</a>, <a href="https://publications.waset.org/abstracts/search?q=machine%20learning" title=" machine learning"> machine learning</a>, <a href="https://publications.waset.org/abstracts/search?q=distance%20image" title=" distance image"> distance image</a> </p> <a href="https://publications.waset.org/abstracts/163879/a-three-modal-authentication-method-for-industrial-robots" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/163879.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">79</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">10</span> An Online 3D Modeling Method Based on a Lossless Compression Algorithm</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Jiankang%20Wang">Jiankang Wang</a>, <a href="https://publications.waset.org/abstracts/search?q=Hongyang%20Yu"> Hongyang Yu</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This paper proposes a portable online 3D modeling method. The method first utilizes a depth camera to collect data and compresses the depth data using a frame-by-frame lossless data compression method. The color image is encoded using the H.264 encoding format. After the cloud obtains the color image and depth image, a 3D modeling method based on bundlefusion is used to complete the 3D modeling. The results of this study indicate that this method has the characteristics of portability, online, and high efficiency and has a wide range of application prospects. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=3D%20reconstruction" title="3D reconstruction">3D reconstruction</a>, <a href="https://publications.waset.org/abstracts/search?q=bundlefusion" title=" bundlefusion"> bundlefusion</a>, <a href="https://publications.waset.org/abstracts/search?q=lossless%20compression" title=" lossless compression"> lossless compression</a>, <a href="https://publications.waset.org/abstracts/search?q=depth%20image" title=" depth image"> depth image</a> </p> <a href="https://publications.waset.org/abstracts/163266/an-online-3d-modeling-method-based-on-a-lossless-compression-algorithm" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/163266.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">82</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">9</span> Defect Localization and Interaction on Surfaces with Projection Mapping and Gesture Recognition</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Qiang%20Wang">Qiang Wang</a>, <a href="https://publications.waset.org/abstracts/search?q=Hongyang%20Yu"> Hongyang Yu</a>, <a href="https://publications.waset.org/abstracts/search?q=MingRong%20Lai"> MingRong Lai</a>, <a href="https://publications.waset.org/abstracts/search?q=Miao%20Luo"> Miao Luo</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This paper presents a method for accurately localizing and interacting with known surface defects by overlaying patterns onto real-world surfaces using a projection system. Given the world coordinates of the defects, we project corresponding patterns onto the surfaces, providing an intuitive visualization of the specific defect locations. To enable users to interact with and retrieve more information about individual defects, we implement a gesture recognition system based on a pruned and optimized version of YOLOv6. This lightweight model achieves an accuracy of 82.8% and is suitable for deployment on low-performance devices. Our approach demonstrates the potential for enhancing defect identification, inspection processes, and user interaction in various applications. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=defect%20localization" title="defect localization">defect localization</a>, <a href="https://publications.waset.org/abstracts/search?q=projection%20mapping" title=" projection mapping"> projection mapping</a>, <a href="https://publications.waset.org/abstracts/search?q=gesture%20recognition" title=" gesture recognition"> gesture recognition</a>, <a href="https://publications.waset.org/abstracts/search?q=YOLOv6" title=" YOLOv6"> YOLOv6</a> </p> <a href="https://publications.waset.org/abstracts/165856/defect-localization-and-interaction-on-surfaces-with-projection-mapping-and-gesture-recognition" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/165856.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">88</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">8</span> Parametric Template-Based 3D Reconstruction of the Human Body</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Jiahe%20Liu">Jiahe Liu</a>, <a href="https://publications.waset.org/abstracts/search?q=Hongyang%20Yu"> Hongyang Yu</a>, <a href="https://publications.waset.org/abstracts/search?q=Feng%20Qian"> Feng Qian</a>, <a href="https://publications.waset.org/abstracts/search?q=Miao%20Luo"> Miao Luo</a>, <a href="https://publications.waset.org/abstracts/search?q=Linhang%20Zhu"> Linhang Zhu</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This study proposed a 3D human body reconstruction method, which integrates multi-view joint information into a set of joints and processes it with a parametric human body template. Firstly, we obtained human body image information captured from multiple perspectives. The multi-view information can avoid self-occlusion and occlusion problems during the reconstruction process. Then, we used the MvP algorithm to integrate multi-view joint information into a set of joints. Next, we used the parametric human body template SMPL-X to obtain more accurate three-dimensional human body reconstruction results. Compared with the traditional single-view parametric human body template reconstruction, this method significantly improved the accuracy and stability of the reconstruction. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=parametric%20human%20body%20templates" title="parametric human body templates">parametric human body templates</a>, <a href="https://publications.waset.org/abstracts/search?q=reconstruction%20of%20the%20human%20body" title=" reconstruction of the human body"> reconstruction of the human body</a>, <a href="https://publications.waset.org/abstracts/search?q=multi-view" title=" multi-view"> multi-view</a>, <a href="https://publications.waset.org/abstracts/search?q=joint" title=" joint"> joint</a> </p> <a href="https://publications.waset.org/abstracts/173775/parametric-template-based-3d-reconstruction-of-the-human-body" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/173775.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">79</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">7</span> Application of Smplify-X Algorithm with Enhanced Gender Classifier in 3D Human Pose Estimation</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Jiahe%20Liu">Jiahe Liu</a>, <a href="https://publications.waset.org/abstracts/search?q=Hongyang%20Yu"> Hongyang Yu</a>, <a href="https://publications.waset.org/abstracts/search?q=Miao%20Luo"> Miao Luo</a>, <a href="https://publications.waset.org/abstracts/search?q=Feng%20Qian"> Feng Qian</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The widespread application of 3D human body reconstruction spans various fields. Smplify-X, an algorithm reliant on single-image input, employs three distinct body parameter templates, necessitating gender classification of individuals within the input image. Researchers employed a ResNet18 network to train a gender classifier within the Smplify-X framework, setting the threshold at 0.9, designating images falling below this threshold as having neutral gender. This model achieved 62.38% accurate predictions and 7.54% incorrect predictions. Our improvement involved refining the MobileNet network, resulting in a raised threshold of 0.97. Consequently, we attained 78.89% accurate predictions and a mere 0.2% incorrect predictions, markedly enhancing prediction precision and enabling more precise 3D human body reconstruction. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=SMPLX" title="SMPLX">SMPLX</a>, <a href="https://publications.waset.org/abstracts/search?q=mobileNet" title=" mobileNet"> mobileNet</a>, <a href="https://publications.waset.org/abstracts/search?q=gender%20classification" title=" gender classification"> gender classification</a>, <a href="https://publications.waset.org/abstracts/search?q=3D%20human%20reconstruction" title=" 3D human reconstruction"> 3D human reconstruction</a> </p> <a href="https://publications.waset.org/abstracts/183520/application-of-smplify-x-algorithm-with-enhanced-gender-classifier-in-3d-human-pose-estimation" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/183520.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">99</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">6</span> Optimal Peer-to-Peer On-Orbit Refueling Mission Planning with Complex Constraints</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Jing%20Yu">Jing Yu</a>, <a href="https://publications.waset.org/abstracts/search?q=Hongyang%20Liu"> Hongyang Liu</a>, <a href="https://publications.waset.org/abstracts/search?q=Dong%20Hao"> Dong Hao</a> </p> <p class="card-text"><strong>Abstract:</strong></p> On-Orbit Refueling is of great significance in extending space crafts&#39; lifetime. The problem of minimum-fuel, time-fixed, Peer-to-Peer On-Orbit Refueling mission planning is addressed here with the particular aim of assigning fuel-insufficient satellites to the fuel-sufficient satellites and optimizing each rendezvous trajectory. Constraints including perturbation, communication link, sun illumination, hold points for different rendezvous phases, and sensor switching are considered. A planning model has established as well as a two-level solution method. The upper level deals with target assignment based on fuel equilibrium criterion, while the lower level solves constrained trajectory optimization using special maneuver strategies. Simulations show that the developed method could effectively resolve the Peer-to-Peer On-Orbit Refueling mission planning problem and deal with complex constraints. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=mission%20planning" title="mission planning">mission planning</a>, <a href="https://publications.waset.org/abstracts/search?q=orbital%20rendezvous" title=" orbital rendezvous"> orbital rendezvous</a>, <a href="https://publications.waset.org/abstracts/search?q=on-orbit%20refueling" title=" on-orbit refueling"> on-orbit refueling</a>, <a href="https://publications.waset.org/abstracts/search?q=space%20mission" title=" space mission"> space mission</a> </p> <a href="https://publications.waset.org/abstracts/82227/optimal-peer-to-peer-on-orbit-refueling-mission-planning-with-complex-constraints" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/82227.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">226</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">5</span> Identity Verification Based on Multimodal Machine Learning on Red Green Blue (RGB) Red Green Blue-Depth (RGB-D) Voice Data</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=LuoJiaoyang">LuoJiaoyang</a>, <a href="https://publications.waset.org/abstracts/search?q=Yu%20Hongyang"> Yu Hongyang</a> </p> <p class="card-text"><strong>Abstract:</strong></p> In this paper, we experimented with a new approach to multimodal identification using RGB, RGB-D and voice data. The multimodal combination of RGB and voice data has been applied in tasks such as emotion recognition and has shown good results and stability, and it is also the same in identity recognition tasks. We believe that the data of different modalities can enhance the effect of the model through mutual reinforcement. We try to increase the three modalities on the basis of the dual modalities and try to improve the effectiveness of the network by increasing the number of modalities. We also implemented the single-modal identification system separately, tested the data of these different modalities under clean and noisy conditions, and compared the performance with the multimodal model. In the process of designing the multimodal model, we tried a variety of different fusion strategies and finally chose the fusion method with the best performance. The experimental results show that the performance of the multimodal system is better than that of the single modality, especially in dealing with noise, and the multimodal system can achieve an average improvement of 5%. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=multimodal" title="multimodal">multimodal</a>, <a href="https://publications.waset.org/abstracts/search?q=three%20modalities" title=" three modalities"> three modalities</a>, <a href="https://publications.waset.org/abstracts/search?q=RGB-D" title=" RGB-D"> RGB-D</a>, <a href="https://publications.waset.org/abstracts/search?q=identity%20verification" title=" identity verification"> identity verification</a> </p> <a href="https://publications.waset.org/abstracts/163265/identity-verification-based-on-multimodal-machine-learning-on-red-green-blue-rgb-red-green-blue-depth-rgb-d-voice-data" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/163265.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">70</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">4</span> RGB-D SLAM Algorithm Based on pixel level Dense Depth Map</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Hao%20Zhang">Hao Zhang</a>, <a href="https://publications.waset.org/abstracts/search?q=Hongyang%20Yu"> Hongyang Yu</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Scale uncertainty is a well-known challenging problem in visual SLAM. Because RGB-D sensor provides depth information, RGB-D SLAM improves this scale uncertainty problem. However, due to the limitation of physical hardware, the depth map output by RGB-D sensor usually contains a large area of missing depth values. These missing depth information affect the accuracy and robustness of RGB-D SLAM. In order to reduce these effects, this paper completes the missing area of the depth map output by RGB-D sensor and then fuses the completed dense depth map into ORB SLAM2. By adding the process of obtaining pixel-level dense depth maps, a better RGB-D visual SLAM algorithm is finally obtained. In the process of obtaining dense depth maps, a deep learning model of indoor scenes is adopted. Experiments are conducted on public datasets and real-world environments of indoor scenes. Experimental results show that the proposed SLAM algorithm has better robustness than ORB SLAM2. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=RGB-D" title="RGB-D">RGB-D</a>, <a href="https://publications.waset.org/abstracts/search?q=SLAM" title=" SLAM"> SLAM</a>, <a href="https://publications.waset.org/abstracts/search?q=dense%20depth" title=" dense depth"> dense depth</a>, <a href="https://publications.waset.org/abstracts/search?q=depth%20map" title=" depth map"> depth map</a> </p> <a href="https://publications.waset.org/abstracts/147802/rgb-d-slam-algorithm-based-on-pixel-level-dense-depth-map" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/147802.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">140</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">3</span> 3D Reconstruction of Human Body Based on Gender Classification</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Jiahe%20Liu">Jiahe Liu</a>, <a href="https://publications.waset.org/abstracts/search?q=Hongyang%20Yu"> Hongyang Yu</a>, <a href="https://publications.waset.org/abstracts/search?q=Feng%20Qian"> Feng Qian</a>, <a href="https://publications.waset.org/abstracts/search?q=Miao%20Luo"> Miao Luo</a> </p> <p class="card-text"><strong>Abstract:</strong></p> SMPL-X was a powerful parametric human body model that included male, neutral, and female models, with significant gender differences between these three models. During the process of 3D human body reconstruction, the correct selection of standard templates was crucial for obtaining accurate results. To address this issue, we developed an efficient gender classification algorithm to automatically select the appropriate template for 3D human body reconstruction. The key to this gender classification algorithm was the precise analysis of human body features. By using the SMPL-X model, the algorithm could detect and identify gender features of the human body, thereby determining which standard template should be used. The accuracy of this algorithm made the 3D reconstruction process more accurate and reliable, as it could adjust model parameters based on individual gender differences. SMPL-X and the related gender classification algorithm have brought important advancements to the field of 3D human body reconstruction. By accurately selecting standard templates, they have improved the accuracy of reconstruction and have broad potential in various application fields. These technologies continue to drive the development of the 3D reconstruction field, providing us with more realistic and accurate human body models. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=gender%20classification" title="gender classification">gender classification</a>, <a href="https://publications.waset.org/abstracts/search?q=joint%20detection" title=" joint detection"> joint detection</a>, <a href="https://publications.waset.org/abstracts/search?q=SMPL-X" title=" SMPL-X"> SMPL-X</a>, <a href="https://publications.waset.org/abstracts/search?q=3D%20reconstruction" title=" 3D reconstruction"> 3D reconstruction</a> </p> <a href="https://publications.waset.org/abstracts/173842/3d-reconstruction-of-human-body-based-on-gender-classification" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/173842.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">70</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">2</span> A Transformer-Based Approach for Multi-Human 3D Pose Estimation Using Color and Depth Images</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Qiang%20Wang">Qiang Wang</a>, <a href="https://publications.waset.org/abstracts/search?q=Hongyang%20Yu"> Hongyang Yu</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Multi-human 3D pose estimation is a challenging task in computer vision, which aims to recover the 3D joint locations of multiple people from multi-view images. In contrast to traditional methods, which typically only use color (RGB) images as input, our approach utilizes both color and depth (D) information contained in RGB-D images. We also employ a transformer-based model as the backbone of our approach, which is able to capture long-range dependencies and has been shown to perform well on various sequence modeling tasks. Our method is trained and tested on the Carnegie Mellon University (CMU) Panoptic dataset, which contains a diverse set of indoor and outdoor scenes with multiple people in varying poses and clothing. We evaluate the performance of our model on the standard 3D pose estimation metrics of mean per-joint position error (MPJPE). Our results show that the transformer-based approach outperforms traditional methods and achieves competitive results on the CMU Panoptic dataset. We also perform an ablation study to understand the impact of different design choices on the overall performance of the model. In summary, our work demonstrates the effectiveness of using a transformer-based approach with RGB-D images for multi-human 3D pose estimation and has potential applications in real-world scenarios such as human-computer interaction, robotics, and augmented reality. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=multi-human%203D%20pose%20estimation" title="multi-human 3D pose estimation">multi-human 3D pose estimation</a>, <a href="https://publications.waset.org/abstracts/search?q=RGB-D%20images" title=" RGB-D images"> RGB-D images</a>, <a href="https://publications.waset.org/abstracts/search?q=transformer" title=" transformer"> transformer</a>, <a href="https://publications.waset.org/abstracts/search?q=3D%20joint%20locations" title=" 3D joint locations"> 3D joint locations</a> </p> <a href="https://publications.waset.org/abstracts/162957/a-transformer-based-approach-for-multi-human-3d-pose-estimation-using-color-and-depth-images" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/162957.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">80</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1</span> Accurate Positioning Method of Indoor Plastering Robot Based on Line Laser</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Guanqiao%20Wang">Guanqiao Wang</a>, <a href="https://publications.waset.org/abstracts/search?q=Hongyang%20Yu"> Hongyang Yu</a> </p> <p class="card-text"><strong>Abstract:</strong></p> There is a lot of repetitive work in the traditional construction industry. These repetitive tasks can significantly improve production efficiency by replacing manual tasks with robots. There- fore, robots appear more and more frequently in the construction industry. Navigation and positioning are very important tasks for construction robots, and the requirements for accuracy of positioning are very high. Traditional indoor robots mainly use radiofrequency or vision methods for positioning. Compared with ordinary robots, the indoor plastering robot needs to be positioned closer to the wall for wall plastering, so the requirements for construction positioning accuracy are higher, and the traditional navigation positioning method has a large error, which will cause the robot to move. Without the exact position, the wall cannot be plastered, or the error of plastering the wall is large. A new positioning method is proposed, which is assisted by line lasers and uses image processing-based positioning to perform more accurate positioning on the traditional positioning work. In actual work, filter, edge detection, Hough transform and other operations are performed on the images captured by the camera. Each time the position of the laser line is found, it is compared with the standard value, and the position of the robot is moved or rotated to complete the positioning work. The experimental results show that the actual positioning error is reduced to less than 0.5 mm by this accurate positioning method. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=indoor%20plastering%20robot" title="indoor plastering robot">indoor plastering robot</a>, <a href="https://publications.waset.org/abstracts/search?q=navigation" title=" navigation"> navigation</a>, <a href="https://publications.waset.org/abstracts/search?q=precise%20positioning" title=" precise positioning"> precise positioning</a>, <a href="https://publications.waset.org/abstracts/search?q=line%20laser" title=" line laser"> line laser</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20processing" title=" image processing"> image processing</a> </p> <a href="https://publications.waset.org/abstracts/147620/accurate-positioning-method-of-indoor-plastering-robot-based-on-line-laser" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/147620.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">148</span> </span> </div> </div> </div> </main> <footer> <div id="infolinks" class="pt-3 pb-2"> <div class="container"> <div style="background-color:#f5f5f5;" class="p-3"> <div class="row"> <div class="col-md-2"> <ul class="list-unstyled"> About <li><a href="https://waset.org/page/support">About Us</a></li> <li><a href="https://waset.org/page/support#legal-information">Legal</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/WASET-16th-foundational-anniversary.pdf">WASET celebrates its 16th foundational anniversary</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Account <li><a href="https://waset.org/profile">My Account</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Explore <li><a href="https://waset.org/disciplines">Disciplines</a></li> <li><a href="https://waset.org/conferences">Conferences</a></li> <li><a href="https://waset.org/conference-programs">Conference Program</a></li> <li><a href="https://waset.org/committees">Committees</a></li> <li><a href="https://publications.waset.org">Publications</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Research <li><a href="https://publications.waset.org/abstracts">Abstracts</a></li> <li><a href="https://publications.waset.org">Periodicals</a></li> <li><a href="https://publications.waset.org/archive">Archive</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Open Science <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Science-Philosophy.pdf">Open Science Philosophy</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Science-Award.pdf">Open Science Award</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Society-Open-Science-and-Open-Innovation.pdf">Open Innovation</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Postdoctoral-Fellowship-Award.pdf">Postdoctoral Fellowship Award</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Scholarly-Research-Review.pdf">Scholarly Research Review</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Support <li><a href="https://waset.org/page/support">Support</a></li> <li><a href="https://waset.org/profile/messages/create">Contact Us</a></li> <li><a 
href="https://waset.org/profile/messages/create">Report Abuse</a></li> </ul> </div> </div> </div> </div> </div> <div class="container text-center"> <hr style="margin-top:0;margin-bottom:.3rem;"> <a href="https://creativecommons.org/licenses/by/4.0/" target="_blank" class="text-muted small">Creative Commons Attribution 4.0 International License</a> <div id="copy" class="mt-2">&copy; 2024 World Academy of Science, Engineering and Technology</div> </div> </footer> <a href="javascript:" id="return-to-top"><i class="fas fa-arrow-up"></i></a> <div class="modal" id="modal-template"> <div class="modal-dialog"> <div class="modal-content"> <div class="row m-0 mt-1"> <div class="col-md-12"> <button type="button" class="close" data-dismiss="modal" aria-label="Close"><span aria-hidden="true">&times;</span></button> </div> </div> <div class="modal-body"></div> </div> </div> </div> <script src="https://cdn.waset.org/static/plugins/jquery-3.3.1.min.js"></script> <script src="https://cdn.waset.org/static/plugins/bootstrap-4.2.1/js/bootstrap.bundle.min.js"></script> <script src="https://cdn.waset.org/static/js/site.js?v=150220211556"></script> <script> jQuery(document).ready(function() { /*jQuery.get("https://publications.waset.org/xhr/user-menu", function (response) { jQuery('#mainNavMenu').append(response); });*/ jQuery.get({ url: "https://publications.waset.org/xhr/user-menu", cache: false }).then(function(response){ jQuery('#mainNavMenu').append(response); }); }); </script> </body> </html>
