<!DOCTYPE html> <html lang="en" dir="ltr"> <head> <!-- Google tag (gtag.js) --> <script async src="https://www.googletagmanager.com/gtag/js?id=G-P63WKM1TM1"></script> <script> window.dataLayer = window.dataLayer || []; function gtag(){dataLayer.push(arguments);} gtag('js', new Date()); gtag('config', 'G-P63WKM1TM1'); </script> <!-- Yandex.Metrika counter --> <script type="text/javascript" > (function(m,e,t,r,i,k,a){m[i]=m[i]||function(){(m[i].a=m[i].a||[]).push(arguments)}; m[i].l=1*new Date(); for (var j = 0; j < document.scripts.length; j++) {if (document.scripts[j].src === r) { return; }} k=e.createElement(t),a=e.getElementsByTagName(t)[0],k.async=1,k.src=r,a.parentNode.insertBefore(k,a)}) (window, document, "script", "https://mc.yandex.ru/metrika/tag.js", "ym"); ym(55165297, "init", { clickmap:false, trackLinks:true, accurateTrackBounce:true, webvisor:false }); </script> <noscript><div><img src="https://mc.yandex.ru/watch/55165297" style="position:absolute; left:-9999px;" alt="" /></div></noscript> <!-- /Yandex.Metrika counter --> <!-- Matomo --> <!-- End Matomo Code --> <title>Search results for: pose estimates</title> <meta name="description" content="Search results for: pose estimates"> <meta name="keywords" content="pose estimates"> <meta name="viewport" content="width=device-width, initial-scale=1, minimum-scale=1, maximum-scale=1, user-scalable=no"> <meta charset="utf-8"> <link href="https://cdn.waset.org/favicon.ico" type="image/x-icon" rel="shortcut icon"> <link href="https://cdn.waset.org/static/plugins/bootstrap-4.2.1/css/bootstrap.min.css" rel="stylesheet"> <link href="https://cdn.waset.org/static/plugins/fontawesome/css/all.min.css" rel="stylesheet"> <link href="https://cdn.waset.org/static/css/site.css?v=150220211555" rel="stylesheet"> </head> <body> <header> <div class="container"> <nav class="navbar navbar-expand-lg navbar-light"> <a class="navbar-brand" href="https://waset.org"> <img src="https://cdn.waset.org/static/images/wasetc.png" alt="Open 
Science Research Excellence" title="Open Science Research Excellence" /> </a> <button class="d-block d-lg-none navbar-toggler ml-auto" type="button" data-toggle="collapse" data-target="#navbarMenu" aria-controls="navbarMenu" aria-expanded="false" aria-label="Toggle navigation"> <span class="navbar-toggler-icon"></span> </button> <div class="w-100"> <div class="d-none d-lg-flex flex-row-reverse"> <form method="get" action="https://waset.org/search" class="form-inline my-2 my-lg-0"> <input class="form-control mr-sm-2" type="search" placeholder="Search Conferences" value="pose estimates" name="q" aria-label="Search"> <button class="btn btn-light my-2 my-sm-0" type="submit"><i class="fas fa-search"></i></button> </form> </div> <div class="collapse navbar-collapse mt-1" id="navbarMenu"> <ul class="navbar-nav ml-auto align-items-center" id="mainNavMenu"> <li class="nav-item"> <a class="nav-link" href="https://waset.org/conferences" title="Conferences in 2024/2025/2026">Conferences</a> </li> <li class="nav-item"> <a class="nav-link" href="https://waset.org/disciplines" title="Disciplines">Disciplines</a> </li> <li class="nav-item"> <a class="nav-link" href="https://waset.org/committees" rel="nofollow">Committees</a> </li> <li class="nav-item dropdown"> <a class="nav-link dropdown-toggle" href="#" id="navbarDropdownPublications" role="button" data-toggle="dropdown" aria-haspopup="true" aria-expanded="false"> Publications </a> <div class="dropdown-menu" aria-labelledby="navbarDropdownPublications"> <a class="dropdown-item" href="https://publications.waset.org/abstracts">Abstracts</a> <a class="dropdown-item" href="https://publications.waset.org">Periodicals</a> <a class="dropdown-item" href="https://publications.waset.org/archive">Archive</a> </div> </li> <li class="nav-item"> <a class="nav-link" href="https://waset.org/page/support" title="Support">Support</a> </li> </ul> </div> </div> </nav> </div> </header> <main> <div class="container mt-4"> <div class="row"> <div 
class="col-md-9 mx-auto"> <form method="get" action="https://publications.waset.org/abstracts/search"> <div id="custom-search-input"> <div class="input-group"> <i class="fas fa-search"></i> <input type="text" class="search-query" name="q" placeholder="Author, Title, Abstract, Keywords" value="pose estimates"> <input type="submit" class="btn_search" value="Search"> </div> </div> </form> </div> </div> <div class="row mt-3"> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Commenced</strong> in January 2007</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Frequency:</strong> Monthly</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Edition:</strong> International</div> </div> </div> <div class="col-sm-3"> <div class="card"> <div class="card-body"><strong>Paper Count:</strong> 1227</div> </div> </div> </div> <h1 class="mt-3 mb-3 text-center" style="font-size:1.6rem;">Search results for: pose estimates</h1> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1227</span> Real Time Multi Person Action Recognition Using Pose Estimates</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Aishrith%20Rao">Aishrith Rao</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Human activity recognition is an important aspect of video analytics, and many approaches have been recommended to enable action recognition. In this approach, the model is used to identify the action of the multiple people in the frame and classify them accordingly. A few approaches use RNNs and 3D CNNs, which are computationally expensive and cannot be trained with the small datasets which are currently available. 
Multi-person action recognition is performed in order to understand the positions and actions of the people present in the video frame. The size of the video frame can be adjusted as a hyper-parameter depending on the available hardware resources. OpenPose is used to compute pose estimates with a CNN that produces heat maps, one of which provides skeleton features, i.e., joint locations. These features are then extracted, and a classification algorithm is applied to classify the action. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=human%20activity%20recognition" title="human activity recognition">human activity recognition</a>, <a href="https://publications.waset.org/abstracts/search?q=computer%20vision" title=" computer vision"> computer vision</a>, <a href="https://publications.waset.org/abstracts/search?q=pose%20estimates" title=" pose estimates"> pose estimates</a>, <a href="https://publications.waset.org/abstracts/search?q=convolutional%20neural%20networks" title=" convolutional neural networks"> convolutional neural networks</a> </p> <a href="https://publications.waset.org/abstracts/127872/real-time-multi-person-action-recognition-using-pose-estimates" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/127872.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">139</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1226</span> Online Pose Estimation and Tracking Approach with Siamese Region Proposal Network</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Cheng%20Fang">Cheng Fang</a>, <a
href="https://publications.waset.org/abstracts/search?q=Lingwei%20Quan"> Lingwei Quan</a>, <a href="https://publications.waset.org/abstracts/search?q=Cunyue%20Lu"> Cunyue Lu</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Human pose estimation and tracking are to accurately identify and locate the positions of human joints in the video. It is a computer vision task which is of great significance for human motion recognition, behavior understanding and scene analysis. There has been remarkable progress on human pose estimation in recent years. However, more researches are needed for human pose tracking especially for online tracking. In this paper, a framework, called PoseSRPN, is proposed for online single-person pose estimation and tracking. We use Siamese network attaching a pose estimation branch to incorporate Single-person Pose Tracking (SPT) and Visual Object Tracking (VOT) into one framework. The pose estimation branch has a simple network structure that replaces the complex upsampling and convolution network structure with deconvolution. By augmenting the loss of fully convolutional Siamese network with the pose estimation task, pose estimation and tracking can be trained in one stage. Once trained, PoseSRPN only relies on a single bounding box initialization and producing human joints location. The experimental results show that while maintaining the good accuracy of pose estimation on COCO and PoseTrack datasets, the proposed method achieves a speed of 59 frame/s, which is superior to other pose tracking frameworks. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=computer%20vision" title="computer vision">computer vision</a>, <a href="https://publications.waset.org/abstracts/search?q=pose%20estimation" title=" pose estimation"> pose estimation</a>, <a href="https://publications.waset.org/abstracts/search?q=pose%20tracking" title=" pose tracking"> pose tracking</a>, <a href="https://publications.waset.org/abstracts/search?q=Siamese%20network" title=" Siamese network"> Siamese network</a> </p> <a href="https://publications.waset.org/abstracts/112839/online-pose-estimation-and-tracking-approach-with-siamese-region-proposal-network" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/112839.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">153</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1225</span> Deep Learning Based 6D Pose Estimation for Bin-Picking Using 3D Point Clouds</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Hesheng%20Wang">Hesheng Wang</a>, <a href="https://publications.waset.org/abstracts/search?q=Haoyu%20Wang"> Haoyu Wang</a>, <a href="https://publications.waset.org/abstracts/search?q=Chungang%20Zhuang"> Chungang Zhuang</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Estimating the 6D pose of objects is a core step for robot bin-picking tasks. The problem is that various objects are usually randomly stacked with heavy occlusion in real applications. In this work, we propose a method to regress 6D poses by predicting three points for each object in the 3D point cloud through deep learning. 
To solve the ambiguity of symmetric pose, we propose a labeling method to help the network converge better. Based on the predicted pose, an iterative method is employed for pose optimization. In real-world experiments, our method outperforms the classical approach in both precision and recall. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=pose%20estimation" title="pose estimation">pose estimation</a>, <a href="https://publications.waset.org/abstracts/search?q=deep%20learning" title=" deep learning"> deep learning</a>, <a href="https://publications.waset.org/abstracts/search?q=point%20cloud" title=" point cloud"> point cloud</a>, <a href="https://publications.waset.org/abstracts/search?q=bin-picking" title=" bin-picking"> bin-picking</a>, <a href="https://publications.waset.org/abstracts/search?q=3D%20computer%20vision" title=" 3D computer vision"> 3D computer vision</a> </p> <a href="https://publications.waset.org/abstracts/132349/deep-learning-based-6d-pose-estimation-for-bin-picking-using-3d-point-clouds" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/132349.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">161</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1224</span> The Effect of Non-Normality on CB-SEM and PLS-SEM Path Estimates</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Z.%20Jannoo">Z. Jannoo</a>, <a href="https://publications.waset.org/abstracts/search?q=B.%20W.%20Yap"> B. W. Yap</a>, <a href="https://publications.waset.org/abstracts/search?q=N.%20Auchoybur"> N. 
Auchoybur</a>, <a href="https://publications.waset.org/abstracts/search?q=M.%20A.%20Lazim"> M. A. Lazim</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The two common approaches to Structural Equation Modeling (SEM) are Covariance-Based SEM (CB-SEM) and Partial Least Squares SEM (PLS-SEM). There is much debate on the performance of CB-SEM and PLS-SEM for small sample sizes and when distributions are non-normal. This study evaluates the performance of CB-SEM and PLS-SEM under normality and non-normality conditions via a simulation. Monte Carlo simulation in the R programming language was employed to generate data based on the theoretical model with one endogenous and four exogenous variables. Each latent variable has three indicators. For normal distributions, CB-SEM estimates were found to be inaccurate for small sample sizes, while PLS-SEM could still produce the path estimates. Meanwhile, for larger sample sizes, CB-SEM estimates have lower variability than PLS-SEM. Under non-normality, CB-SEM path estimates were inaccurate for small sample sizes; however, CB-SEM estimates are more accurate than those of PLS-SEM for sample sizes of 50 and above. The PLS-SEM estimates are not accurate unless the sample size is very large.
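The simulation design in this abstract (one endogenous and four exogenous latent variables, three indicators each, normal vs. non-normal data) can be sketched outside R as well. The generator below is a rough NumPy analogue under assumed loadings and path coefficients; the study's actual population model and non-normal distributions may differ:

```python
import numpy as np

def simulate_sem_data(n, n_exo=4, loading=0.8, path=0.3, normal=True, seed=0):
    """Generate indicator data for 1 endogenous + n_exo exogenous latent variables,
    three indicators per latent variable (assumed loadings/paths, illustrative only)."""
    rng = np.random.default_rng(seed)
    if normal:
        exo = rng.normal(size=(n, n_exo))
    else:
        # skewed (chi-square) latent scores, shifted to mean zero
        exo = rng.chisquare(3, size=(n, n_exo)) - 3.0
    # structural part: endogenous latent driven by the exogenous latents
    eta = exo @ np.full(n_exo, path) + rng.normal(size=n)
    latents = np.column_stack([exo, eta])
    # measurement part: three indicators per latent with a common loading
    k = latents.shape[1]
    indicators = (np.repeat(latents, 3, axis=1) * loading
                  + rng.normal(size=(n, 3 * k)) * np.sqrt(1.0 - loading ** 2))
    return indicators
```

Fitting CB-SEM or PLS-SEM to such data requires a dedicated SEM library; this sketch only reproduces the data-generation step of the Monte Carlo design.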
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=CB-SEM" title="CB-SEM">CB-SEM</a>, <a href="https://publications.waset.org/abstracts/search?q=Monte%20Carlo%20simulation" title=" Monte Carlo simulation"> Monte Carlo simulation</a>, <a href="https://publications.waset.org/abstracts/search?q=normality%20conditions" title=" normality conditions"> normality conditions</a>, <a href="https://publications.waset.org/abstracts/search?q=non-normality" title=" non-normality"> non-normality</a>, <a href="https://publications.waset.org/abstracts/search?q=PLS-SEM" title=" PLS-SEM"> PLS-SEM</a> </p> <a href="https://publications.waset.org/abstracts/2399/the-effect-of-non-normality-on-cb-sem-and-pls-sem-path-estimates" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/2399.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">410</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1223</span> Critical Accounting Estimates and Transparency in Financial Reporting: An Observation Financial Reporting under US GAAP</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Ahmed%20Shaik">Ahmed Shaik</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Estimates are very critical in accounting and Financial Reporting cannot be complete without these estimates. There is a long list of accounting estimates that are required to be made to compute Net Income and to determine the value of assets and liabilities. To name a few, valuation of inventory, depreciation, valuation of goodwill, provision for bad debts and estimated warranties, etc. require the use of different valuation models and forecasts. 
Business entities in the same industry may use different approaches to measure the value of financial items reported in the Income Statement and Balance Sheet. The disclosure notes do not provide enough detail about the approach a business entity uses to arrive at the value of a financial item, and this lack of detail makes it difficult to compare the financial performance of one business entity with another in the same industry. This paper identifies the lack of information about accounting estimates in disclosure notes and the impact of this absence on the comparability of financial data and on financial analysis. It suggests more detailed disclosure while weighing the costs and benefits of such disclosure. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=accounting%20estimates" title="accounting estimates">accounting estimates</a>, <a href="https://publications.waset.org/abstracts/search?q=disclosure%20notes" title=" disclosure notes"> disclosure notes</a>, <a href="https://publications.waset.org/abstracts/search?q=financial%20reporting" title=" financial reporting"> financial reporting</a>, <a href="https://publications.waset.org/abstracts/search?q=transparency" title=" transparency"> transparency</a> </p> <a href="https://publications.waset.org/abstracts/84423/critical-accounting-estimates-and-transparency-in-financial-reporting-an-observation-financial-reporting-under-us-gaap" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/84423.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">200</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge
badge-info">1222</span> Pose Normalization Network for Object Classification</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Bingquan%20Shen">Bingquan Shen</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Convolutional Neural Networks (CNN) have demonstrated their effectiveness in synthesizing 3D views of object instances at various viewpoints. Given the problem where one have limited viewpoints of a particular object for classification, we present a pose normalization architecture to transform the object to existing viewpoints in the training dataset before classification to yield better classification performance. We have demonstrated that this Pose Normalization Network (PNN) can capture the style of the target object and is able to re-render it to a desired viewpoint. Moreover, we have shown that the PNN improves the classification result for the 3D chairs dataset and ShapeNet airplanes dataset when given only images at limited viewpoint, as compared to a CNN baseline. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=convolutional%20neural%20networks" title="convolutional neural networks">convolutional neural networks</a>, <a href="https://publications.waset.org/abstracts/search?q=object%20classification" title=" object classification"> object classification</a>, <a href="https://publications.waset.org/abstracts/search?q=pose%20normalization" title=" pose normalization"> pose normalization</a>, <a href="https://publications.waset.org/abstracts/search?q=viewpoint%20invariant" title=" viewpoint invariant"> viewpoint invariant</a> </p> <a href="https://publications.waset.org/abstracts/56852/pose-normalization-network-for-object-classification" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/56852.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">352</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1221</span> Facial Pose Classification Using Hilbert Space Filling Curve and Multidimensional Scaling</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Mekam%C4%B1%20Hayet">Mekamı Hayet</a>, <a href="https://publications.waset.org/abstracts/search?q=Bounoua%20Nacer"> Bounoua Nacer</a>, <a href="https://publications.waset.org/abstracts/search?q=Benabderrahmane%20Sidahmed"> Benabderrahmane Sidahmed</a>, <a href="https://publications.waset.org/abstracts/search?q=Taleb%20Ahmed"> Taleb Ahmed</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Pose estimation is an important task in computer vision. Though the majority of the existing solutions provide good accuracy results, they are often overly complex and computationally expensive. 
From this perspective, we propose the use of dimensionality reduction techniques to address the problem of facial pose estimation. First, a face image is converted into a one-dimensional time series using the Hilbert space-filling curve; these time series data are then converted to a symbolic representation. A distance matrix is then calculated between the symbolic series of an input learning dataset of images to generate classifiers of frontal vs. profile face pose. The proposed method is evaluated on three public datasets. Experimental results show that our approach achieves a correct classification rate exceeding 97% with the K-NN algorithm. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=machine%20learning" title="machine learning">machine learning</a>, <a href="https://publications.waset.org/abstracts/search?q=pattern%20recognition" title=" pattern recognition"> pattern recognition</a>, <a href="https://publications.waset.org/abstracts/search?q=facial%20pose%20classification" title=" facial pose classification"> facial pose classification</a>, <a href="https://publications.waset.org/abstracts/search?q=time%20series" title=" time series "> time series </a> </p> <a href="https://publications.waset.org/abstracts/33324/facial-pose-classification-using-hilbert-space-filling-curve-and-multidimensional-scaling" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/33324.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">350</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1220</span> A New Criterion Using Pose and Shape of Objects for Collision Risk Estimation</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong>
<a href="https://publications.waset.org/abstracts/search?q=DoHyeung%20Kim">DoHyeung Kim</a>, <a href="https://publications.waset.org/abstracts/search?q=DaeHee%20Seo"> DaeHee Seo</a>, <a href="https://publications.waset.org/abstracts/search?q=ByungDoo%20Kim"> ByungDoo Kim</a>, <a href="https://publications.waset.org/abstracts/search?q=ByungGil%20Lee"> ByungGil Lee</a> </p> <p class="card-text"><strong>Abstract:</strong></p> As many recent researches being implemented in aviation and maritime aspects, strong doubts have been raised concerning the reliability of the estimation of collision risk. It is shown that using position and velocity of objects can lead to imprecise results. In this paper, therefore, a new approach to the estimation of collision risks using pose and shape of objects is proposed. Simulation results are presented validating the accuracy of the new criterion to adapt to collision risk algorithm based on fuzzy logic. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=collision%20risk" title="collision risk">collision risk</a>, <a href="https://publications.waset.org/abstracts/search?q=pose" title=" pose"> pose</a>, <a href="https://publications.waset.org/abstracts/search?q=shape" title=" shape"> shape</a>, <a href="https://publications.waset.org/abstracts/search?q=fuzzy%20logic" title=" fuzzy logic"> fuzzy logic</a> </p> <a href="https://publications.waset.org/abstracts/1474/a-new-criterion-using-pose-and-shape-of-objects-for-collision-risk-estimation" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/1474.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">529</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1219</span> Adversarial 
Disentanglement Using Latent Classifier for Pose-Independent Representation</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Hamed%20Alqahtani">Hamed Alqahtani</a>, <a href="https://publications.waset.org/abstracts/search?q=Manolya%20Kavakli-Thorne"> Manolya Kavakli-Thorne</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Large pose discrepancy is one of the critical challenges in face recognition for video surveillance. Due to the entanglement of pose attributes with identity information, conventional approaches to pose-independent representation fail to provide quality results when recognizing faces with large pose variation. In this paper, we propose a practical approach that disentangles the pose attribute from the identity information and then synthesizes a face using a classifier network in latent space. The proposed approach employs a modified generative adversarial network framework consisting of an encoder-decoder structure embedded with a classifier in manifold space that factorizes the latent encoding. It can be further generalized to other face and non-face attributes in real-life video frames containing faces with significant attribute variations. Experimental results and comparison with the state of the art show that the learned representation synthesizes more compelling perceptual images through a combination of adversarial and classification losses.
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=disentanglement" title="disentanglement">disentanglement</a>, <a href="https://publications.waset.org/abstracts/search?q=face%20detection" title=" face detection"> face detection</a>, <a href="https://publications.waset.org/abstracts/search?q=generative%20adversarial%20networks" title=" generative adversarial networks"> generative adversarial networks</a>, <a href="https://publications.waset.org/abstracts/search?q=video%20surveillance" title=" video surveillance"> video surveillance</a> </p> <a href="https://publications.waset.org/abstracts/108319/adversarial-disentanglement-using-latent-classifier-for-pose-independent-representation" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/108319.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">129</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1218</span> Acceleration-Based Motion Model for Visual Simultaneous Localization and Mapping</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Daohong%20Yang">Daohong Yang</a>, <a href="https://publications.waset.org/abstracts/search?q=Xiang%20Zhang"> Xiang Zhang</a>, <a href="https://publications.waset.org/abstracts/search?q=Lei%20Li"> Lei Li</a>, <a href="https://publications.waset.org/abstracts/search?q=Wanting%20Zhou"> Wanting Zhou</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Visual Simultaneous Localization and Mapping (VSLAM) is a technology that obtains information in the environment for self-positioning and mapping. It is widely used in computer vision, robotics and other fields. 
Many visual SLAM systems, such as ORB-SLAM3, employ a constant-velocity motion model that provides the initial pose of the current frame to improve the speed and accuracy of feature matching. However, in practice the constant-velocity assumption is often violated, which may lead to a large deviation between the obtained initial pose and the true value and to errors in the nonlinear optimization results. Therefore, this paper proposes a motion model based on acceleration, which can be applied to most SLAM systems. To better describe the acceleration of the camera pose, we decouple the pose transformation matrix and compute the rotation and translation separately, representing the rotation matrix as a rotation vector. We assume that, over a short period of time, the changes in angular velocity and in the translation vector remain the same. Based on this assumption, the initial pose of the current frame is estimated. In addition, the error of the constant-velocity model is analyzed theoretically. Finally, we applied the proposed approach to the ORB-SLAM3 system and evaluated it on two sequences of the TUM dataset. The results show that the proposed method gives a more accurate initial pose estimate, and the accuracy of the ORB-SLAM3 system is improved by 6.61% and 6.46% on the two test sequences, respectively.
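Concretely, the decoupled prediction described above can be sketched as follows. Under the stated assumption (frame-to-frame changes in the rotation vector and translation stay roughly constant over a short time), the next initial pose is a simple extrapolation of the last three poses; the names and exact update rule here are our assumptions, and the paper's formulation may differ:

```python
import numpy as np

def rotvec_to_mat(r):
    """Rodrigues' formula: rotation vector -> rotation matrix."""
    theta = np.linalg.norm(r)
    if theta < 1e-12:
        return np.eye(3)
    k = r / theta
    K = np.array([[0.0, -k[2], k[1]],
                  [k[2], 0.0, -k[0]],
                  [-k[1], k[0], 0.0]])
    return np.eye(3) + np.sin(theta) * K + (1.0 - np.cos(theta)) * (K @ K)

def mat_to_rotvec(R):
    """Inverse of Rodrigues' formula (valid away from theta = pi)."""
    theta = np.arccos(np.clip((np.trace(R) - 1.0) / 2.0, -1.0, 1.0))
    if theta < 1e-12:
        return np.zeros(3)
    axis = np.array([R[2, 1] - R[1, 2], R[0, 2] - R[2, 0], R[1, 0] - R[0, 1]])
    return theta * axis / (2.0 * np.sin(theta))

def predict_pose(R0, t0, R1, t1, R2, t2):
    """Extrapolate the next camera pose from the last three, assuming the
    frame-to-frame changes in rotation vector and translation stay constant."""
    r_a = mat_to_rotvec(R0.T @ R1)   # rotation increment, frame 0 -> 1
    r_b = mat_to_rotvec(R1.T @ R2)   # rotation increment, frame 1 -> 2
    dt_a = t1 - t0
    dt_b = t2 - t1
    # constant-acceleration extrapolation: next increment = 2*latest - previous
    # (exact for rotations about a common axis; approximate in general)
    R_pred = R2 @ rotvec_to_mat(2.0 * r_b - r_a)
    t_pred = t2 + 2.0 * dt_b - dt_a
    return R_pred, t_pred
```

With zero acceleration (constant increments) the extrapolation reduces to the usual constant-velocity prediction, which is why it can drop into an existing SLAM front end as a strict generalization.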
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=error%20estimation" title="error estimation">error estimation</a>, <a href="https://publications.waset.org/abstracts/search?q=constant%20acceleration%20motion%20model" title=" constant acceleration motion model"> constant acceleration motion model</a>, <a href="https://publications.waset.org/abstracts/search?q=pose%20estimation" title=" pose estimation"> pose estimation</a>, <a href="https://publications.waset.org/abstracts/search?q=visual%20SLAM" title=" visual SLAM"> visual SLAM</a> </p> <a href="https://publications.waset.org/abstracts/164599/acceleration-based-motion-model-for-visual-simultaneous-localization-and-mapping" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/164599.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">94</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1217</span> On Pooling Different Levels of Data in Estimating Parameters of Continuous Meta-Analysis</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=N.%20R.%20N.%20Idris">N. R. N. Idris</a>, <a href="https://publications.waset.org/abstracts/search?q=S.%20Baharom"> S. Baharom </a> </p> <p class="card-text"><strong>Abstract:</strong></p> A meta-analysis may be performed using aggregate data (AD) or an individual patient data (IPD). In practice, studies may be available at both IPD and AD level. In this situation, both the IPD and AD should be utilised in order to maximize the available information. Statistical advantages of combining the studies from different level have not been fully explored. 
This study aims to quantify the statistical benefits of including available IPD when conducting a conventional summary-level meta-analysis. Simulated meta-analyses were used to assess the influence of the level of data on the overall meta-analysis estimates based on IPD only, AD only, and the combination of IPD and AD (mixed data, MD), under different study scenarios. The percentage relative bias (PRB), root mean square error (RMSE), and coverage probability were used to assess the efficiency of the overall estimates. The results demonstrate that available IPD should always be included in a conventional meta-analysis using summary-level data, as it significantly increases the accuracy of the estimates. On the other hand, if more than 80% of the available data are at the IPD level, including the AD does not produce a significant difference in the accuracy of the estimates. Additionally, combining IPD and AD moderates the bias of the estimated treatment effects, as IPD tends to overestimate the treatment effects while AD tends to produce underestimated effect estimates. These results may provide some guidance in deciding whether a significant benefit is gained by pooling the two levels of data when conducting a meta-analysis. 
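The three performance measures named above are standard and can be sketched directly (an illustrative implementation, not the authors' simulation code):

```python
import numpy as np

def percentage_relative_bias(estimates, true_value):
    """PRB: relative bias of the mean estimate, expressed in percent."""
    return 100.0 * (np.mean(estimates) - true_value) / true_value

def rmse(estimates, true_value):
    """Root mean square error of the estimates around the true value."""
    e = np.asarray(estimates, dtype=float)
    return float(np.sqrt(np.mean((e - true_value) ** 2)))

def coverage_probability(intervals, true_value):
    """Fraction of (lower, upper) confidence intervals containing the truth."""
    hits = [lo <= true_value <= hi for lo, hi in intervals]
    return sum(hits) / len(hits)
```

In a simulation study such as this one, the functions would be applied to the estimates produced by the IPD-only, AD-only, and MD analyses across replications of each scenario.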
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=aggregate%20data" title="aggregate data">aggregate data</a>, <a href="https://publications.waset.org/abstracts/search?q=combined-level%20data" title=" combined-level data"> combined-level data</a>, <a href="https://publications.waset.org/abstracts/search?q=individual%20patient%20data" title=" individual patient data"> individual patient data</a>, <a href="https://publications.waset.org/abstracts/search?q=meta-analysis" title=" meta-analysis"> meta-analysis</a> </p> <a href="https://publications.waset.org/abstracts/8777/on-pooling-different-levels-of-data-in-estimating-parameters-of-continuous-meta-analysis" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/8777.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">375</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1216</span> Robust Inference with a Skew T Distribution</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=M.%20Qamarul%20Islam">M. Qamarul Islam</a>, <a href="https://publications.waset.org/abstracts/search?q=Ergun%20Dogan"> Ergun Dogan</a>, <a href="https://publications.waset.org/abstracts/search?q=Mehmet%20Yazici"> Mehmet Yazici</a> </p> <p class="card-text"><strong>Abstract:</strong></p> There is a growing body of evidence that non-normal data is more prevalent in nature than the normal one. Examples can be quoted from, but not restricted to, the areas of Economics, Finance and Actuarial Science. The non-normality considered here is expressed in terms of fat-tailedness and asymmetry of the relevant distribution. 
This study considers a skew t distribution that can be used to model data exhibiting inherently non-normal behavior. This distribution has tails fatter than those of a normal distribution and also exhibits skewness. Although maximum likelihood estimates can be obtained by iteratively solving the likelihood equations, which are non-linear in form, this can be problematic in terms of convergence and in many other respects. It is therefore preferable to use the method of modified maximum likelihood, in which the estimates are derived by expressing the intractable non-linear likelihood equations in terms of standardized ordered variates and replacing the intractable terms by their linear approximations obtained from the first two terms of a Taylor series expansion about the quantiles of the distribution. These estimates, called modified maximum likelihood estimates, are obtained in closed form. Hence, they are easy to compute and to manipulate analytically. In fact, the modified maximum likelihood estimates are asymptotically equivalent to maximum likelihood estimates. Even in small samples, the modified maximum likelihood estimates are found to be approximately the same as the maximum likelihood estimates obtained iteratively. This study shows that the modified maximum likelihood estimates are not only unbiased but substantially more efficient than the commonly used moment estimates or least square estimates, which are known to be biased and inefficient in such cases. Furthermore, in conventional regression analysis it is assumed that the error terms are normally distributed, and hence the well-known least square method is considered a suitable and preferred method for making the relevant statistical inferences. However, a number of empirical studies have shown that non-normal errors are more prevalent, and even transforming and/or filtering techniques may not produce normally distributed residuals. 
Here, multiple linear regression models with random errors following a non-normal pattern are studied. Through an extensive simulation, it is shown that the modified maximum likelihood estimates of the regression parameters are plausibly robust to the distributional assumptions and to various data anomalies, as compared to the widely used least square estimates. Relevant tests of hypotheses are developed and explored for desirable properties in terms of their size and power. The tests based upon the modified maximum likelihood estimates are found to be substantially more powerful than those based upon the least square estimates. Several examples are provided from the areas of Economics and Finance where such distributions are interpretable in terms of the efficient market hypothesis with respect to asset pricing, portfolio selection, risk measurement, capital allocation, etc. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=least%20square%20estimates" title="least square estimates">least square estimates</a>, <a href="https://publications.waset.org/abstracts/search?q=linear%20regression" title=" linear regression"> linear regression</a>, <a href="https://publications.waset.org/abstracts/search?q=maximum%20likelihood%20estimates" title=" maximum likelihood estimates"> maximum likelihood estimates</a>, <a href="https://publications.waset.org/abstracts/search?q=modified%20maximum%20likelihood%20method" title=" modified maximum likelihood method"> modified maximum likelihood method</a>, <a href="https://publications.waset.org/abstracts/search?q=non-normality" title=" non-normality"> non-normality</a>, <a href="https://publications.waset.org/abstracts/search?q=robustness" title=" robustness"> robustness</a> </p> <a href="https://publications.waset.org/abstracts/35043/robust-inference-with-a-skew-t-distribution" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/35043.pdf" 
target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">397</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1215</span> Estimating Current Suicide Rates Using Google Trends</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Ladislav%20Kristoufek">Ladislav Kristoufek</a>, <a href="https://publications.waset.org/abstracts/search?q=Helen%20Susannah%20Moat"> Helen Susannah Moat</a>, <a href="https://publications.waset.org/abstracts/search?q=Tobias%20Preis"> Tobias Preis</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Data on the number of people who have committed suicide tends to be reported with a substantial time lag of around two years. We examine whether online activity measured by Google searches can help us improve estimates of the number of suicide occurrences in England before official figures are released. Specifically, we analyse how data on the number of Google searches for the terms “depression” and “suicide” relate to the number of suicides between 2004 and 2013. We find that estimates drawing on Google data are significantly better than estimates using previous suicide data alone. We show that a greater number of searches for the term “depression” is related to fewer suicides, whereas a greater number of searches for the term “suicide” is related to more suicides. Data on suicide related search behaviour can be used to improve current estimates of the number of suicide occurrences. 
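The core comparison in the abstract above, whether adding search data improves on a model using previous suicide figures alone, can be sketched with a simple autoregressive baseline versus an augmented regression. This is an illustrative toy with synthetic data, not the study's analysis of the official English series or actual Google Trends data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins for an official count series and a search-volume index
# (illustrative only; the study itself uses official figures and Google data).
searches = rng.uniform(50.0, 100.0, size=40)
counts = np.empty(40)
counts[0] = 500.0
for i in range(1, 40):
    # Current count depends on the previous count and on search activity.
    counts[i] = 0.6 * counts[i - 1] + 3.0 * searches[i] + rng.normal(0.0, 5.0)

def residual_ss(X, y):
    """Fit ordinary least squares and return the residual sum of squares."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return float(resid @ resid)

y = counts[1:]
ones = np.ones_like(y)
# Baseline: lagged counts only. Augmented: lagged counts plus search data.
rss_lag_only = residual_ss(np.column_stack([ones, counts[:-1]]), y)
rss_with_search = residual_ss(np.column_stack([ones, counts[:-1], searches[1:]]), y)
```

When the search series carries real signal, as constructed here, the augmented model fits markedly better; the study's finding is the analogous improvement on real data.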
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=nowcasting" title="nowcasting">nowcasting</a>, <a href="https://publications.waset.org/abstracts/search?q=search%20data" title=" search data"> search data</a>, <a href="https://publications.waset.org/abstracts/search?q=Google%20Trends" title=" Google Trends"> Google Trends</a>, <a href="https://publications.waset.org/abstracts/search?q=official%20statistics" title=" official statistics"> official statistics</a> </p> <a href="https://publications.waset.org/abstracts/59622/estimating-current-suicide-rates-using-google-trends" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/59622.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">357</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1214</span> Spatiotemporal Neural Network for Video-Based Pose Estimation</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Bin%20Ji">Bin Ji</a>, <a href="https://publications.waset.org/abstracts/search?q=Kai%20Xu"> Kai Xu</a>, <a href="https://publications.waset.org/abstracts/search?q=Shunyu%20Yao"> Shunyu Yao</a>, <a href="https://publications.waset.org/abstracts/search?q=Jingjing%20Liu"> Jingjing Liu</a>, <a href="https://publications.waset.org/abstracts/search?q=Ye%20Pan"> Ye Pan</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Human pose estimation is a popular research area in computer vision owing to its important applications in human-machine interfaces. In recent years, 2D human pose estimation based on convolutional neural networks has made great progress. 
However, more and more practical applications require dealing with video-based tasks, so it is natural to consider how to combine spatial and temporal information to achieve a balance between computing cost and accuracy. To address this issue, this study proposes a new spatiotemporal model, namely the Spatiotemporal Net (STNet), to combine temporal and spatial information more rationally. As a result, the predicted keypoint heatmap is potentially more accurate and spatially more precise. While maintaining recognition accuracy, the algorithm processes the spatiotemporal series in a decoupled way, which greatly reduces the computation of the model and thus its resource consumption. This study demonstrates the effectiveness of our network on the Penn Action dataset, and the results indicate superior performance of our network over existing methods. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=convolutional%20long%20short-term%20memory" title="convolutional long short-term memory">convolutional long short-term memory</a>, <a href="https://publications.waset.org/abstracts/search?q=deep%20learning" title=" deep learning"> deep learning</a>, <a href="https://publications.waset.org/abstracts/search?q=human%20pose%20estimation" title=" human pose estimation"> human pose estimation</a>, <a href="https://publications.waset.org/abstracts/search?q=spatiotemporal%20series" title=" spatiotemporal series"> spatiotemporal series</a> </p> <a href="https://publications.waset.org/abstracts/129867/spatiotemporal-neural-network-for-video-based-pose-estimation" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/129867.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">148</span> </span> </div> </div> <div 
class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1213</span> Pose-Dependency of Machine Tool Structures: Appearance, Consequences, and Challenges for Lightweight Large-Scale Machines</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=S.%20Apprich">S. Apprich</a>, <a href="https://publications.waset.org/abstracts/search?q=F.%20Wulle"> F. Wulle</a>, <a href="https://publications.waset.org/abstracts/search?q=A.%20Lechler"> A. Lechler</a>, <a href="https://publications.waset.org/abstracts/search?q=A.%20Pott"> A. Pott</a>, <a href="https://publications.waset.org/abstracts/search?q=A.%20Verl"> A. Verl</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Large-scale machine tools for the manufacturing of large work pieces, e.g. blades, casings or gears for wind turbines, feature pose-dependent dynamic behavior. Small structural damping coefficients lead to long decay times for structural vibrations that have negative impacts on the production process. Typically, these vibrations are handled by increasing the stiffness of the structure by adding mass. That is counterproductive to the needs of sustainable manufacturing as it leads to higher resource consumption both in material and in energy. Recent research activities have led to higher resource efficiency by radical mass reduction that rely on control-integrated active vibration avoidance and damping methods. These control methods depend on information describing the dynamic behavior of the controlled machine tools in order to tune the avoidance or reduction method parameters according to the current state of the machine. The paper presents the appearance, consequences and challenges of the pose-dependent dynamic behavior of lightweight large-scale machine tool structures in production. 
The paper starts with a theoretical introduction to the challenges of lightweight machine tool structures resulting from reduced stiffness. The statement of pose-dependent dynamic behavior is corroborated by the results of an experimental modal analysis of a lightweight test structure. Afterwards, the consequences of the pose-dependent dynamic behavior of lightweight machine tool structures for the use of active control and vibration reduction methods are explained. Based on the state of the art in pose-dependent dynamic machine tool models and a modal investigation of an FE model of the lightweight test structure, the criteria for a pose-dependent model for use in vibration reduction are derived. The paper closes with an outlook on an approach for a general pose-dependent model of the dynamic behavior of large lightweight machine tools, a model that provides the input needed by the aforementioned vibration avoidance and reduction methods to properly tackle machine vibrations. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=dynamic%20behavior" title="dynamic behavior">dynamic behavior</a>, <a href="https://publications.waset.org/abstracts/search?q=lightweight" title=" lightweight"> lightweight</a>, <a href="https://publications.waset.org/abstracts/search?q=machine%20tool" title=" machine tool"> machine tool</a>, <a href="https://publications.waset.org/abstracts/search?q=pose-dependency" title=" pose-dependency"> pose-dependency</a> </p> <a href="https://publications.waset.org/abstracts/31447/pose-dependency-of-machine-tool-structures-appearance-consequences-and-challenges-for-lightweight-large-scale-machines" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/31447.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">459</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1212</span> Intelligent Human Pose Recognition Based on EMG Signal Analysis and Machine 3D Model</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Si%20Chen">Si Chen</a>, <a href="https://publications.waset.org/abstracts/search?q=Quanhong%20Jiang"> Quanhong Jiang</a> </p> <p class="card-text"><strong>Abstract:</strong></p> As posture recognition technology matures, human movement information is widely used today in sports rehabilitation, human-computer interaction, medical health, human posture assessment, and other fields. This project proposes to use dedicated acquisition equipment to collect myoelectric data, reflect muscle posture changes on a degree of freedom through data processing, carry out data-muscle 
three-dimensional model joint adjustment, and realize basic pose recognition. Based on this, bionic aids or medical rehabilitation equipment can be further developed with the help of robotic arms and cutting-edge technology, which has a bright future and unlimited development space. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=pose%20recognition" title="pose recognition">pose recognition</a>, <a href="https://publications.waset.org/abstracts/search?q=3D%20animation" title=" 3D animation"> 3D animation</a>, <a href="https://publications.waset.org/abstracts/search?q=electromyography" title=" electromyography"> electromyography</a>, <a href="https://publications.waset.org/abstracts/search?q=machine%20learning" title=" machine learning"> machine learning</a>, <a href="https://publications.waset.org/abstracts/search?q=bionics" title=" bionics"> bionics</a> </p> <a href="https://publications.waset.org/abstracts/166827/intelligent-human-pose-recognition-based-on-emg-signal-analysis-and-machine-3d-model" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/166827.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">79</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1211</span> A Transformer-Based Approach for Multi-Human 3D Pose Estimation Using Color and Depth Images</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Qiang%20Wang">Qiang Wang</a>, <a href="https://publications.waset.org/abstracts/search?q=Hongyang%20Yu"> Hongyang Yu</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Multi-human 3D pose estimation is a challenging task in computer vision, which aims to 
recover the 3D joint locations of multiple people from multi-view images. In contrast to traditional methods, which typically use only color (RGB) images as input, our approach utilizes both the color and depth (D) information contained in RGB-D images. We also employ a transformer-based model as the backbone of our approach, which is able to capture long-range dependencies and has been shown to perform well on various sequence modeling tasks. Our method is trained and tested on the Carnegie Mellon University (CMU) Panoptic dataset, which contains a diverse set of indoor and outdoor scenes with multiple people in varying poses and clothing. We evaluate the performance of our model on the standard 3D pose estimation metric of mean per-joint position error (MPJPE). Our results show that the transformer-based approach outperforms traditional methods and achieves competitive results on the CMU Panoptic dataset. We also perform an ablation study to understand the impact of different design choices on the overall performance of the model. In summary, our work demonstrates the effectiveness of using a transformer-based approach with RGB-D images for multi-human 3D pose estimation, with potential applications in real-world scenarios such as human-computer interaction, robotics, and augmented reality. 
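The MPJPE metric named in the abstract above has a compact definition: the Euclidean distance between predicted and ground-truth joint positions, averaged over all joints (and here, all people). A minimal sketch:

```python
import numpy as np

def mpjpe(pred, gt):
    """Mean per-joint position error.

    pred, gt: arrays of shape (num_people, num_joints, 3) holding 3D joint
    locations; the error is the per-joint Euclidean distance averaged over
    every joint of every person.
    """
    pred = np.asarray(pred, dtype=float)
    gt = np.asarray(gt, dtype=float)
    return float(np.mean(np.linalg.norm(pred - gt, axis=-1)))
```

For example, a prediction that is uniformly offset from the ground truth by 1 unit along one axis has an MPJPE of exactly 1.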
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=multi-human%203D%20pose%20estimation" title="multi-human 3D pose estimation">multi-human 3D pose estimation</a>, <a href="https://publications.waset.org/abstracts/search?q=RGB-D%20images" title=" RGB-D images"> RGB-D images</a>, <a href="https://publications.waset.org/abstracts/search?q=transformer" title=" transformer"> transformer</a>, <a href="https://publications.waset.org/abstracts/search?q=3D%20joint%20locations" title=" 3D joint locations"> 3D joint locations</a> </p> <a href="https://publications.waset.org/abstracts/162957/a-transformer-based-approach-for-multi-human-3d-pose-estimation-using-color-and-depth-images" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/162957.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">80</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1210</span> A Unified Deep Framework for Joint 3d Pose Estimation and Action Recognition from a Single Color Camera</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Huy%20Hieu%20Pham">Huy Hieu Pham</a>, <a href="https://publications.waset.org/abstracts/search?q=Houssam%20Salmane"> Houssam Salmane</a>, <a href="https://publications.waset.org/abstracts/search?q=Louahdi%20Khoudour"> Louahdi Khoudour</a>, <a href="https://publications.waset.org/abstracts/search?q=Alain%20Crouzil"> Alain Crouzil</a>, <a href="https://publications.waset.org/abstracts/search?q=Pablo%20Zegers"> Pablo Zegers</a>, <a href="https://publications.waset.org/abstracts/search?q=Sergio%20Velastin"> Sergio Velastin</a> </p> <p class="card-text"><strong>Abstract:</strong></p> We 
present a deep learning-based multitask framework for joint 3D human pose estimation and action recognition from color video sequences. Our approach proceeds along two stages. In the first, we run a real-time 2D pose detector to determine the precise pixel location of important key points of the body. A two-stream neural network is then designed and trained to map detected 2D keypoints into 3D poses. In the second, we deploy the Efficient Neural Architecture Search (ENAS) algorithm to find an optimal network architecture that is used for modeling the Spatio-temporal evolution of the estimated 3D poses via an image-based intermediate representation and performing action recognition. Experiments on Human3.6M, Microsoft Research Redmond (MSR) Action3D, and Stony Brook University (SBU) Kinect Interaction datasets verify the effectiveness of the proposed method on the targeted tasks. Moreover, we show that our method requires a low computational budget for training and inference. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=human%20action%20recognition" title="human action recognition">human action recognition</a>, <a href="https://publications.waset.org/abstracts/search?q=pose%20estimation" title=" pose estimation"> pose estimation</a>, <a href="https://publications.waset.org/abstracts/search?q=D-CNN" title=" D-CNN"> D-CNN</a>, <a href="https://publications.waset.org/abstracts/search?q=deep%20learning" title=" deep learning"> deep learning</a> </p> <a href="https://publications.waset.org/abstracts/115449/a-unified-deep-framework-for-joint-3d-pose-estimation-and-action-recognition-from-a-single-color-camera" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/115449.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">145</span> </span> </div> </div> <div 
class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1209</span> Uncertainty Estimation in Neural Networks through Transfer Learning</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Ashish%20James">Ashish James</a>, <a href="https://publications.waset.org/abstracts/search?q=Anusha%20James"> Anusha James</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The impressive predictive performance of deep learning techniques on a wide range of tasks has led to their widespread use. Estimating the confidence of these predictions is paramount for improving the safety and reliability of such systems. However, the uncertainty estimates provided by neural networks (NNs) tend to be overconfident and unreasonable. Ensembles of NNs typically produce good predictions, but their uncertainty estimates tend to be inconsistent. Inspired by these observations, this paper presents a framework that can quantitatively estimate the uncertainties by leveraging advances in transfer learning, through a slight modification to existing training pipelines. This promising algorithm is developed with the intention of deployment in real-world problems that already boast good predictive performance, by reusing the pretrained models. The idea is to capture the behavior of the NNs trained for the base task by augmenting them with uncertainty estimates from a supplementary network. A series of experiments with known and unknown distributions shows that the proposed approach produces well-calibrated uncertainty estimates with high-quality predictions. 
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=uncertainty%20estimation" title="uncertainty estimation">uncertainty estimation</a>, <a href="https://publications.waset.org/abstracts/search?q=neural%20networks" title=" neural networks"> neural networks</a>, <a href="https://publications.waset.org/abstracts/search?q=transfer%20learning" title=" transfer learning"> transfer learning</a>, <a href="https://publications.waset.org/abstracts/search?q=regression" title=" regression"> regression</a> </p> <a href="https://publications.waset.org/abstracts/153501/uncertainty-estimation-in-neural-networks-through-transfer-learning" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/153501.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">135</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1208</span> The Relationship between Human Pose and Intention to Fire a Handgun</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Joshua%20van%20Staden">Joshua van Staden</a>, <a href="https://publications.waset.org/abstracts/search?q=Dane%20Brown"> Dane Brown</a>, <a href="https://publications.waset.org/abstracts/search?q=Karen%20Bradshaw"> Karen Bradshaw</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Gun violence is a significant problem in modern-day society. Early detection of carried handguns through closed-circuit television (CCTV) can aid in preventing potential gun violence. However, CCTV operators have a limited attention span. 
Machine learning approaches to automating the detection of dangerous gun carriers provide a way to aid CCTV operators in identifying these individuals. This study provides insight into the relationship between human key points extracted using human pose estimation (HPE) and a person's intention to fire a weapon. We examine the feature importance of each keypoint and their correlations. We use principal component analysis (PCA) to reduce the feature space and optimize detection. Finally, we run a set of classifiers to determine what form of classifier performs well on this data. We find that the hips, shoulders, and knees tend to be crucial aspects of the human pose when making these predictions, and that the horizontal position plays a larger role than the vertical position. Of the 66 key points, nine principal components could be used to make nonlinear classifications with 86% accuracy. Furthermore, linear classifications could be done with 85% accuracy, showing that there is a degree of linearity in the data. 
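The dimensionality-reduction step described above, projecting a 66-dimensional keypoint feature vector onto a handful of principal components, can be sketched with a plain SVD-based PCA. The data here is a random stand-in, not the study's pose dataset:

```python
import numpy as np

def pca_reduce(X, n_components):
    """Project feature vectors onto their leading principal components.

    X: (num_samples, num_features) matrix, e.g. 66 pose-keypoint features per
    sample; returns the (num_samples, n_components) component scores.
    """
    Xc = X - X.mean(axis=0)                  # centre each feature
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_components].T          # scores on the top components

rng = np.random.default_rng(1)
poses = rng.normal(size=(200, 66))           # stand-in for HPE keypoint features
reduced = pca_reduce(poses, 9)               # nine components, as in the study
```

The reduced matrix would then be fed to the linear and nonlinear classifiers the study compares.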
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=feature%20engineering" title="feature engineering">feature engineering</a>, <a href="https://publications.waset.org/abstracts/search?q=human%20pose" title=" human pose"> human pose</a>, <a href="https://publications.waset.org/abstracts/search?q=machine%20learning" title=" machine learning"> machine learning</a>, <a href="https://publications.waset.org/abstracts/search?q=security" title=" security"> security</a> </p> <a href="https://publications.waset.org/abstracts/155235/the-relationship-between-human-pose-and-intention-to-fire-a-handgun" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/155235.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">93</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1207</span> Light-Weight Network for Real-Time Pose Estimation</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Jianghao%20Hu">Jianghao Hu</a>, <a href="https://publications.waset.org/abstracts/search?q=Hongyu%20Wang"> Hongyu Wang</a> </p> <p class="card-text"><strong>Abstract:</strong></p> An effective and efficient human pose estimation algorithm is essential for real-time pose estimation on mobile devices. This paper proposes a light-weight human keypoint detection algorithm, the Light-Weight Network for Real-Time Pose Estimation (LWPE). LWPE uses a light-weight backbone network and depthwise separable convolutions to reduce parameters and lower latency, and uses the feature pyramid network (FPN) to fuse the high-resolution, semantically weak features with the low-resolution, semantically strong features. 
Meanwhile, with multi-scale prediction, the result predicted from the low-resolution feature map is stacked onto the adjacent higher-resolution feature map to supervise the network at intermediate stages and continuously refine the results. In the last step, the keypoint coordinates predicted at the highest resolution are used as the final output of the network. For the keypoints that are difficult to predict, LWPE adopts an online hard keypoint mining strategy to focus on them. The proposed algorithm achieves excellent performance on the single-person dataset selected from the AI (artificial intelligence) challenge dataset. The algorithm maintains high-precision performance even though the model contains only 3.9M parameters, and it can run at 225 frames per second (FPS) on a generic graphics processing unit (GPU). <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=depthwise%20separable%20convolutions" title="depthwise separable convolutions">depthwise separable convolutions</a>, <a href="https://publications.waset.org/abstracts/search?q=feature%20pyramid%20network" title=" feature pyramid network"> feature pyramid network</a>, <a href="https://publications.waset.org/abstracts/search?q=human%20pose%20estimation" title=" human pose estimation"> human pose estimation</a>, <a href="https://publications.waset.org/abstracts/search?q=light-weight%20backbone" title=" light-weight backbone "> light-weight backbone </a> </p> <a href="https://publications.waset.org/abstracts/112845/light-weight-network-for-real-time-pose-estimation" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/112845.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">154</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 
class="card-header" style="font-size:.9rem"><span class="badge badge-info">1206</span> Phasor Measurement Unit Based on Particle Filtering</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Rithvik%20Reddy%20Adapa">Rithvik Reddy Adapa</a>, <a href="https://publications.waset.org/abstracts/search?q=Xin%20Wang"> Xin Wang</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Phasor Measurement Units (PMUs) are sophisticated measuring devices that determine the amplitude, phase, and frequency of various voltages and currents in a power system. A particle filter is a state-estimation technique that uses Bayesian inference; particle filters are widely used in pose estimation and indoor navigation and are very reliable. This paper studies and compares four different particle filters as PMUs, namely the generic particle filter (GPF), the genetic algorithm particle filter (GAPF), the particle swarm optimization particle filter (PSOPF), and the adaptive particle filter (APF). Two different test signals are used to evaluate the filters' performance in terms of responsiveness and correctness of the estimates. 
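The abstract does not give implementation details, so the following is only a minimal, hypothetical sketch of the generic particle filter (GPF) variant applied to the PMU task: tracking the amplitude, phase, and frequency of a noisy sinusoidal test signal, the three quantities a PMU reports. The signal parameters, noise levels, and random-walk step sizes are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(n=200, fs=1000.0, amp=1.0, freq=50.0, phase=0.3, noise=0.05):
    # Illustrative test signal: a noisy 50 Hz sinusoid sampled at 1 kHz.
    t = np.arange(n) / fs
    return t, amp * np.cos(2 * np.pi * freq * t + phase) + rng.normal(0.0, noise, n)

def particle_filter_pmu(t, z, n_particles=2000, noise=0.05):
    # State of each particle: [amplitude, phase (rad), frequency (Hz)].
    particles = np.column_stack([
        rng.uniform(0.5, 1.5, n_particles),
        rng.uniform(-np.pi, np.pi, n_particles),
        rng.uniform(45.0, 55.0, n_particles),
    ])
    weights = np.full(n_particles, 1.0 / n_particles)
    for ti, zi in zip(t, z):
        # Predict: small random walk on each state component.
        particles += rng.normal(0.0, [0.005, 0.01, 0.05], particles.shape)
        # Update: Bayesian reweighting by the measurement likelihood.
        pred = particles[:, 0] * np.cos(2 * np.pi * particles[:, 2] * ti + particles[:, 1])
        weights *= np.exp(-0.5 * ((zi - pred) / noise) ** 2)
        weights /= weights.sum()
        # Resample when the effective sample size collapses.
        if 1.0 / np.sum(weights ** 2) < n_particles / 2:
            idx = rng.choice(n_particles, n_particles, p=weights)
            particles = particles[idx]
            weights = np.full(n_particles, 1.0 / n_particles)
    return weights @ particles  # posterior-mean estimate of [amp, phase, freq]

t, z = simulate()
amp_hat, phase_hat, freq_hat = particle_filter_pmu(t, z)
```

Resampling only when the effective sample size drops below half the particle count is a common compromise between weight degeneracy and sample impoverishment; the GAPF, PSOPF, and APF variants compared in the paper differ mainly in how they reposition particles at this step.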
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=phasor%20measurement%20unit" title="phasor measurement unit">phasor measurement unit</a>, <a href="https://publications.waset.org/abstracts/search?q=particle%20filter" title=" particle filter"> particle filter</a>, <a href="https://publications.waset.org/abstracts/search?q=genetic%20algorithm" title=" genetic algorithm"> genetic algorithm</a>, <a href="https://publications.waset.org/abstracts/search?q=particle%20swarm%20optimisation" title=" particle swarm optimisation"> particle swarm optimisation</a>, <a href="https://publications.waset.org/abstracts/search?q=state%20estimation" title=" state estimation"> state estimation</a> </p> <a href="https://publications.waset.org/abstracts/194127/phasor-measurement-unit-based-on-particle-filtering" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/194127.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">8</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1205</span> Single-Camera Basketball Tracker through Pose and Semantic Feature Fusion</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Adri%C3%A0%20Arbu%C3%A9s-Sang%C3%BCesa">Adrià Arbués-Sangüesa</a>, <a href="https://publications.waset.org/abstracts/search?q=Coloma%20Ballester"> Coloma Ballester</a>, <a href="https://publications.waset.org/abstracts/search?q=Gloria%20Haro"> Gloria Haro</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Tracking sports players is a highly challenging task, especially in single-feed videos recorded on tight courts, where clutter and occlusions cannot be avoided. 
This paper presents an analysis of several geometric and semantic visual features for detecting and tracking basketball players. An ablation study is carried out and then used to show that a robust tracker can be built from deep learning features alone, without extracting contextual ones such as proximity or color similarity and without applying camera-stabilization techniques. The presented tracker consists of (1) a detection step, which uses a pretrained deep learning model to estimate the players' poses, followed by (2) a tracking step, which leverages pose and semantic information from the output of a convolutional layer in a VGG network. Its performance is analyzed in terms of MOTA over a basketball dataset with more than 10k instances. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=basketball" title="basketball">basketball</a>, <a href="https://publications.waset.org/abstracts/search?q=deep%20learning" title=" deep learning"> deep learning</a>, <a href="https://publications.waset.org/abstracts/search?q=feature%20extraction" title=" feature extraction"> feature extraction</a>, <a href="https://publications.waset.org/abstracts/search?q=single-camera" title=" single-camera"> single-camera</a>, <a href="https://publications.waset.org/abstracts/search?q=tracking" title=" tracking"> tracking</a> </p> <a href="https://publications.waset.org/abstracts/109446/single-camera-basketball-tracker-through-pose-and-semantic-feature-fusion" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/109446.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">138</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1204</span> Indoor Real-Time Positioning and Mapping Based on 
Manhattan Hypothesis Optimization</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Linhang%20Zhu">Linhang Zhu</a>, <a href="https://publications.waset.org/abstracts/search?q=Hongyu%20Zhu"> Hongyu Zhu</a>, <a href="https://publications.waset.org/abstracts/search?q=Jiahe%20Liu"> Jiahe Liu</a> </p> <p class="card-text"><strong>Abstract:</strong></p> This paper investigates a method of indoor real-time positioning and mapping based on the Manhattan-world assumption. In indoor environments, relying solely on feature-matching techniques or other geometric algorithms for sensor pose estimation inevitably results in cumulative errors, posing a significant challenge to indoor positioning. To address this issue, we adopt the Manhattan-world hypothesis to optimize the feature-matching-based camera pose algorithm, which improves the accuracy of camera pose estimation. A special processing step is applied to image frames that conform to the Manhattan-world assumption; when similar frames appear later, they can be used to eliminate drift in the sensor pose estimate, thereby reducing cumulative estimation errors and improving mapping and positioning. Experimental verification shows that our method achieves high-precision real-time positioning in indoor environments and successfully generates maps of them, providing effective technical support for applications such as indoor navigation and robot control. 
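The abstract does not specify how the Manhattan-world constraint enters the pose optimization. As a hypothetical sketch of the underlying idea only: when a frame is known to be aligned with the three orthogonal scene axes, a rotation estimate that has accumulated drift can be projected back onto the nearest exact rotation. The orthogonal Procrustes projection via SVD used below is a standard tool, not necessarily the authors' method, and the noise model is an illustrative assumption.

```python
import numpy as np

def nearest_rotation(mat):
    # Orthogonal Procrustes projection of a drifted 3x3 estimate onto SO(3):
    # the closest true rotation in the Frobenius norm.
    u, _, vt = np.linalg.svd(mat)
    d = np.sign(np.linalg.det(u @ vt))  # guard against a reflection
    return u @ np.diag([1.0, 1.0, d]) @ vt

rng = np.random.default_rng(1)

# Ground-truth Manhattan-aligned orientation: 90 degrees about the z axis.
rot_true = np.array([[0.0, -1.0, 0.0],
                     [1.0,  0.0, 0.0],
                     [0.0,  0.0, 1.0]])

# Drift from frame-to-frame feature matching, modelled as small additive noise.
rot_drifted = rot_true + rng.normal(0.0, 0.03, (3, 3))

# Snap the drifted estimate back to an exact rotation.
rot_fixed = nearest_rotation(rot_drifted)
```

The projected matrix is exactly orthogonal with determinant +1, so downstream mapping steps can compose it safely, whereas the drifted matrix would compound its error over subsequent frames.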
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=Manhattan%20world%20hypothesis" title="Manhattan world hypothesis">Manhattan world hypothesis</a>, <a href="https://publications.waset.org/abstracts/search?q=real-time%20positioning%20and%20mapping" title=" real-time positioning and mapping"> real-time positioning and mapping</a>, <a href="https://publications.waset.org/abstracts/search?q=feature%20matching" title=" feature matching"> feature matching</a>, <a href="https://publications.waset.org/abstracts/search?q=loopback%20detection" title=" loopback detection"> loopback detection</a> </p> <a href="https://publications.waset.org/abstracts/173745/indoor-real-time-positioning-and-mapping-based-on-manhattan-hypothesis-optimization" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/173745.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">61</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1203</span> 6D Posture Estimation of Road Vehicles from Color Images</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Yoshimoto%20Kurihara">Yoshimoto Kurihara</a>, <a href="https://publications.waset.org/abstracts/search?q=Tad%20Gonsalves"> Tad Gonsalves</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Currently, in the field of object posture estimation, research typically estimates the position and angle of an object by storing a 3D model of it in a computer in advance and matching the observation against that model. In this research, however, we have succeeded in creating a module that is much simpler, smaller in scale, and faster in operation. 
Our 6D pose estimation model consists of two different networks – a classification network and a regression network. From a single RGB image, the trained model estimates the class of the object in the image, the coordinates of the object, and its rotation angle in 3D space. In addition, we compared the estimation accuracy for each camera position, i.e., the angle from which the object was captured. The highest accuracy was recorded when the camera position was 75°: the classification accuracy was about 87.3%, and the regression accuracy was about 98.9%. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=6D%20posture%20estimation" title="6D posture estimation">6D posture estimation</a>, <a href="https://publications.waset.org/abstracts/search?q=image%20recognition" title=" image recognition"> image recognition</a>, <a href="https://publications.waset.org/abstracts/search?q=deep%20learning" title=" deep learning"> deep learning</a>, <a href="https://publications.waset.org/abstracts/search?q=AlexNet" title=" AlexNet"> AlexNet</a> </p> <a href="https://publications.waset.org/abstracts/138449/6d-posture-estimation-of-road-vehicles-from-color-images" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/138449.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">155</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1202</span> Cricket Shot Recognition using Conditional Directed Spatial-Temporal Graph Networks</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Tanu%20Aneja">Tanu Aneja</a>, <a href="https://publications.waset.org/abstracts/search?q=Harsha%20Malaviya"> Harsha 
Malaviya</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Capturing pose information in cricket shots poses several challenges, such as low-resolution videos, noisy data, and joint occlusions caused by the nature of the shots. In response to these challenges, we propose a CondDGConv-based framework specifically for cricket shot prediction. By analyzing the spatial-temporal relationships in batsman shot sequences from an annotated 2D cricket dataset, our model achieves a 97% accuracy in predicting shot types. This performance is made possible by conditioning the graph network on batsman 2D poses, allowing for precise prediction of shot outcomes based on pose dynamics. Our approach highlights the potential for enhancing shot prediction in cricket analytics, offering a robust solution for overcoming pose-related challenges in sports analysis. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=action%20recognition" title="action recognition">action recognition</a>, <a href="https://publications.waset.org/abstracts/search?q=cricket.%20sports%20video%20analytics" title=" cricket. sports video analytics"> cricket. 
sports video analytics</a>, <a href="https://publications.waset.org/abstracts/search?q=computer%20vision" title=" computer vision"> computer vision</a>, <a href="https://publications.waset.org/abstracts/search?q=graph%20convolutional%20networks" title=" graph convolutional networks"> graph convolutional networks</a> </p> <a href="https://publications.waset.org/abstracts/192975/cricket-shot-recognition-using-conditional-directed-spatial-temporal-graph-networks" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/192975.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">18</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1201</span> Estimation of Coefficients of Ridge and Principal Components Regressions with Multicollinear Data </h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Rajeshwar%20Singh">Rajeshwar Singh</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The presence of multicollinearity is common when handling several explanatory variables simultaneously, because they may exhibit a linear relationship among themselves. A great problem then arises in understanding the impact of the explanatory variables on the dependent variable, and the method of least squares estimation gives imprecise estimates. In this case, it is advisable to detect its presence before proceeding further. Ridge regression reduces the degree of its occurrence, while principal components regression gives good estimates in this situation. This paper discusses the well-known techniques of ridge and principal components regressions and applies both to obtain estimates of the coefficients. 
In addition, this paper discusses the conflicting claims over the discovery of the method of ridge regression, based on available documents. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=conflicting%20claim%20on%20credit%20of%20discovery%20of%20ridge%20regression" title="conflicting claim on credit of discovery of ridge regression">conflicting claim on credit of discovery of ridge regression</a>, <a href="https://publications.waset.org/abstracts/search?q=multicollinearity" title=" multicollinearity"> multicollinearity</a>, <a href="https://publications.waset.org/abstracts/search?q=principal%20components%20and%20ridge%20regressions" title=" principal components and ridge regressions"> principal components and ridge regressions</a>, <a href="https://publications.waset.org/abstracts/search?q=variance%20inflation%20factor" title=" variance inflation factor"> variance inflation factor</a> </p> <a href="https://publications.waset.org/abstracts/31600/estimation-of-coefficients-of-ridge-and-principal-components-regressions-with-multicollinear-data" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/31600.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">419</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1200</span> Polynomially Adjusted Bivariate Density Estimates Based on the Saddlepoint Approximation</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=S.%20B.%20Provost">S. B. 
Provost</a>, <a href="https://publications.waset.org/abstracts/search?q=Susan%20Sheng"> Susan Sheng</a> </p> <p class="card-text"><strong>Abstract:</strong></p> An alternative bivariate density estimation methodology is introduced in this presentation. The proposed approach involves estimating the density function associated with the marginal distribution of each of the two variables by means of the saddlepoint approximation technique and applying a bivariate polynomial adjustment to the product of these density estimates. Since the saddlepoint approximation is utilized in the context of density estimation, such estimates are determined from empirical cumulant-generating functions. In the univariate case, the saddlepoint density estimate is itself adjusted by a polynomial. Given a set of observations, the coefficients of the polynomial adjustments are obtained from the sample moments. Several illustrative applications of the proposed methodology shall be presented. Since this approach relies essentially on a determinate number of sample moments, it is particularly well suited for modeling massive data sets. 
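As a univariate illustration of the first stage of this methodology only (the bivariate product and the polynomial adjustment are omitted), the sketch below builds a saddlepoint density estimate from the empirical cumulant-generating function of a sample. The simulated standard-normal sample and the Newton solver are illustrative assumptions, not details from the presentation.

```python
import numpy as np

rng = np.random.default_rng(2)
sample = rng.normal(0.0, 1.0, 5000)  # observations for one marginal

def ecgf(s):
    # Empirical cumulant-generating function K(s) = log mean(exp(s * X)).
    return np.log(np.mean(np.exp(s * sample)))

def ecgf_d1(s):
    # K'(s): the exponentially tilted mean of the sample.
    w = np.exp(s * sample)
    return np.sum(w * sample) / np.sum(w)

def ecgf_d2(s):
    # K''(s): the exponentially tilted variance of the sample.
    w = np.exp(s * sample)
    m = np.sum(w * sample) / np.sum(w)
    return np.sum(w * (sample - m) ** 2) / np.sum(w)

def saddlepoint_density(x):
    # Solve the saddlepoint equation K'(s) = x by Newton's method,
    # then evaluate the saddlepoint density approximation at x.
    s = 0.0
    for _ in range(50):
        s -= (ecgf_d1(s) - x) / ecgf_d2(s)
    return np.exp(ecgf(s) - s * x) / np.sqrt(2.0 * np.pi * ecgf_d2(s))

density_at_zero = saddlepoint_density(0.0)
```

Because every quantity above is a function of sample moments through the empirical CGF, the estimate's cost does not grow with evaluation count once the tilted sums are cached, which is consistent with the abstract's point about suitability for massive data sets.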
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=density%20estimation" title="density estimation">density estimation</a>, <a href="https://publications.waset.org/abstracts/search?q=empirical%20cumulant-generating%20function" title=" empirical cumulant-generating function"> empirical cumulant-generating function</a>, <a href="https://publications.waset.org/abstracts/search?q=moments" title=" moments"> moments</a>, <a href="https://publications.waset.org/abstracts/search?q=saddlepoint%20approximation" title=" saddlepoint approximation"> saddlepoint approximation</a> </p> <a href="https://publications.waset.org/abstracts/72664/polynomially-adjusted-bivariate-density-estimates-based-on-the-saddlepoint-approximation" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/72664.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">280</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1199</span> Evaluation of the Impact of Information and Communications Technology (ICT) on the Accuracy of Preliminary Cost Estimates of Building Projects in Nigeria</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Nofiu%20A.%20Musa">Nofiu A. Musa</a>, <a href="https://publications.waset.org/abstracts/search?q=Olubola%20Babalola"> Olubola Babalola</a> </p> <p class="card-text"><strong>Abstract:</strong></p> The study explored the effect of ICT on the accuracy of Preliminary Cost Estimates (PCEs) prepared by quantity surveying consulting firms in Nigeria for building projects, with a view to determining the desirability of the adoption and use of the technological innovation for preliminary estimating. 
Thus, data pertinent to the study were obtained through a questionnaire survey of a sample of one hundred and eight (108) quantity surveying firms selected, through systematic random sampling, from the list of registered firms compiled by the Nigerian Institute of Quantity Surveyors (NIQS), Lagos State Chapter. The data obtained were analyzed with SPSS version 17 using Student's t-tests at the 5% significance level. The results revealed that the mean bias and coefficient of variation of the firms' PCEs are significantly lower in the post-ICT-adoption period than in the pre-ICT-adoption period (p < 0.05 in each case). The paper concluded that the adoption and use of the technological innovation (ICT) has significantly improved the accuracy of the Preliminary Cost Estimates (PCEs) of building projects; hence, it is desirable. <p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=accepted%20tender%20price" title="accepted tender price">accepted tender price</a>, <a href="https://publications.waset.org/abstracts/search?q=accuracy" title=" accuracy"> accuracy</a>, <a href="https://publications.waset.org/abstracts/search?q=bias" title=" bias"> bias</a>, <a href="https://publications.waset.org/abstracts/search?q=building%20projects" title=" building projects"> building projects</a>, <a href="https://publications.waset.org/abstracts/search?q=consistency" title=" consistency"> consistency</a>, <a href="https://publications.waset.org/abstracts/search?q=information%20and%20communications%20technology" title=" information and communications technology"> information and communications technology</a>, <a href="https://publications.waset.org/abstracts/search?q=preliminary%20cost%20estimates" title=" preliminary cost estimates"> preliminary cost estimates</a> </p> <a 
href="https://publications.waset.org/abstracts/4409/evaluation-of-the-impact-of-information-and-communications-technology-ict-on-the-accuracy-of-preliminary-cost-estimates-of-building-projects-in-nigeria" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/4409.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">428</span> </span> </div> </div> <div class="card paper-listing mb-3 mt-3"> <h5 class="card-header" style="font-size:.9rem"><span class="badge badge-info">1198</span> A Semiparametric Approach to Estimate the Mode of Continuous Multivariate Data</h5> <div class="card-body"> <p class="card-text"><strong>Authors:</strong> <a href="https://publications.waset.org/abstracts/search?q=Tiee-Jian%20Wu">Tiee-Jian Wu</a>, <a href="https://publications.waset.org/abstracts/search?q=Chih-Yuan%20Hsu"> Chih-Yuan Hsu</a> </p> <p class="card-text"><strong>Abstract:</strong></p> Mode estimation is an important task because it has applications to data from a wide variety of sources. We propose a semi-parametric approach to estimate the mode of an unknown continuous multivariate density function. Our approach is based on a weighted average of a parametric density estimate using the Box-Cox transform and a non-parametric kernel density estimate. Our semi-parametric mode estimate improves on both the parametric and non-parametric mode estimates. Specifically, our mode estimate resolves the inconsistency of parametric mode estimates (at large sample sizes) and reduces the variability of non-parametric mode estimates (at small sample sizes). The performance of our method at practical sample sizes is demonstrated by simulation examples and two real examples from the fields of climatology and image recognition. 
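As a hypothetical one-dimensional sketch of the proposed combination (the paper's actual estimator is multivariate and derives data-driven weights), the snippet below averages a Box-Cox-based parametric mode estimate with a kernel-density mode estimate. The gamma sample, the log transform (Box-Cox with lambda = 0), and the equal weights are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)
# Skewed sample: Gamma(shape=5, scale=1), whose true mode is (5 - 1) * 1 = 4.
data = rng.gamma(5.0, 1.0, 2000)

# Parametric part: fit a normal on the log scale (Box-Cox with lambda = 0)
# and back-transform; the mode of a lognormal is exp(mu - sigma^2).
log_data = np.log(data)
mu, sigma = log_data.mean(), log_data.std()
mode_parametric = np.exp(mu - sigma ** 2)

# Non-parametric part: the argmax of a Gaussian kernel density estimate.
grid = np.linspace(data.min(), data.max(), 400)
h = 1.06 * data.std() * len(data) ** (-1 / 5)  # Silverman's rule of thumb
kde = np.exp(-0.5 * ((grid[:, None] - data[None, :]) / h) ** 2).mean(axis=1)
mode_nonparametric = grid[kde.argmax()]

# Semi-parametric estimate: a weighted average of the two
# (equal weights here, purely for illustration).
mode_semiparametric = 0.5 * mode_parametric + 0.5 * mode_nonparametric
```

The averaging captures the abstract's trade-off: the parametric term stabilizes the estimate when the sample is small, while the kernel term limits the bias the parametric model would incur as the sample grows.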
<p class="card-text"><strong>Keywords:</strong> <a href="https://publications.waset.org/abstracts/search?q=Box-Cox%20transform" title="Box-Cox transform">Box-Cox transform</a>, <a href="https://publications.waset.org/abstracts/search?q=density%20estimation" title=" density estimation"> density estimation</a>, <a href="https://publications.waset.org/abstracts/search?q=mode%20seeking" title=" mode seeking"> mode seeking</a>, <a href="https://publications.waset.org/abstracts/search?q=semiparametric%20method" title=" semiparametric method"> semiparametric method</a> </p> <a href="https://publications.waset.org/abstracts/53756/a-semiparametric-approach-to-estimate-the-mode-of-continuous-multivariate-data" class="btn btn-primary btn-sm">Procedia</a> <a href="https://publications.waset.org/abstracts/53756.pdf" target="_blank" class="btn btn-primary btn-sm">PDF</a> <span class="bg-info text-light px-1 py-1 float-right rounded"> Downloads <span class="badge badge-light">285</span> </span> </div> </div> <ul class="pagination"> <li class="page-item disabled"><span class="page-link">‹</span></li> <li class="page-item active"><span class="page-link">1</span></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=pose%20estimates&page=2">2</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=pose%20estimates&page=3">3</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=pose%20estimates&page=4">4</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=pose%20estimates&page=5">5</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=pose%20estimates&page=6">6</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=pose%20estimates&page=7">7</a></li> <li 
class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=pose%20estimates&page=8">8</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=pose%20estimates&page=9">9</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=pose%20estimates&page=10">10</a></li> <li class="page-item disabled"><span class="page-link">...</span></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=pose%20estimates&page=40">40</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=pose%20estimates&page=41">41</a></li> <li class="page-item"><a class="page-link" href="https://publications.waset.org/abstracts/search?q=pose%20estimates&page=2" rel="next">›</a></li> </ul> </div> </main> <footer> <div id="infolinks" class="pt-3 pb-2"> <div class="container"> <div style="background-color:#f5f5f5;" class="p-3"> <div class="row"> <div class="col-md-2"> <ul class="list-unstyled"> About <li><a href="https://waset.org/page/support">About Us</a></li> <li><a href="https://waset.org/page/support#legal-information">Legal</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/WASET-16th-foundational-anniversary.pdf">WASET celebrates its 16th foundational anniversary</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Account <li><a href="https://waset.org/profile">My Account</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Explore <li><a href="https://waset.org/disciplines">Disciplines</a></li> <li><a href="https://waset.org/conferences">Conferences</a></li> <li><a href="https://waset.org/conference-programs">Conference Program</a></li> <li><a href="https://waset.org/committees">Committees</a></li> <li><a href="https://publications.waset.org">Publications</a></li> </ul> </div> 
<div class="col-md-2"> <ul class="list-unstyled"> Research <li><a href="https://publications.waset.org/abstracts">Abstracts</a></li> <li><a href="https://publications.waset.org">Periodicals</a></li> <li><a href="https://publications.waset.org/archive">Archive</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Open Science <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Science-Philosophy.pdf">Open Science Philosophy</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Science-Award.pdf">Open Science Award</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Open-Society-Open-Science-and-Open-Innovation.pdf">Open Innovation</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Postdoctoral-Fellowship-Award.pdf">Postdoctoral Fellowship Award</a></li> <li><a target="_blank" rel="nofollow" href="https://publications.waset.org/static/files/Scholarly-Research-Review.pdf">Scholarly Research Review</a></li> </ul> </div> <div class="col-md-2"> <ul class="list-unstyled"> Support <li><a href="https://waset.org/page/support">Support</a></li> <li><a href="https://waset.org/profile/messages/create">Contact Us</a></li> <li><a href="https://waset.org/profile/messages/create">Report Abuse</a></li> </ul> </div> </div> </div> </div> </div> <div class="container text-center"> <hr style="margin-top:0;margin-bottom:.3rem;"> <a href="https://creativecommons.org/licenses/by/4.0/" target="_blank" class="text-muted small">Creative Commons Attribution 4.0 International License</a> <div id="copy" class="mt-2">© 2024 World Academy of Science, Engineering and Technology</div> </div> </footer> <a href="javascript:" id="return-to-top"><i class="fas fa-arrow-up"></i></a> <div class="modal" id="modal-template"> <div class="modal-dialog"> <div class="modal-content"> <div class="row m-0 mt-1"> <div 
class="col-md-12"> <button type="button" class="close" data-dismiss="modal" aria-label="Close"><span aria-hidden="true">×</span></button> </div> </div> <div class="modal-body"></div> </div> </div> </div> <script src="https://cdn.waset.org/static/plugins/jquery-3.3.1.min.js"></script> <script src="https://cdn.waset.org/static/plugins/bootstrap-4.2.1/js/bootstrap.bundle.min.js"></script> <script src="https://cdn.waset.org/static/js/site.js?v=150220211556"></script> <script> jQuery(document).ready(function() { /*jQuery.get("https://publications.waset.org/xhr/user-menu", function (response) { jQuery('#mainNavMenu').append(response); });*/ jQuery.get({ url: "https://publications.waset.org/xhr/user-menu", cache: false }).then(function(response){ jQuery('#mainNavMenu').append(response); }); }); </script> </body> </html>