Non-linear statistical models for the 3D reconstruction of human pose and motion from monocular image sequences | Richard Bowden
Richard Bowden
2000, Image and Vision Computing (Elsevier)
9 pages, 1 file
https://www.academia.edu/575713/Non_linear_statistical_models_for_the_3D_reconstruction_of_human_pose_and_motion_from_monocular_image_sequences
data-client_id="331998490334-rsn3chp12mbkiqhl6e7lu2q0mlbu0f1b" data-doc_id="2966014" data-landing_url="https://www.academia.edu/575713/Non_linear_statistical_models_for_the_3D_reconstruction_of_human_pose_and_motion_from_monocular_image_sequences" data-login_uri="https://www.academia.edu/registrations/google_one_tap" data-moment_callback="onGoogleOneTapEvent" id="g_id_onload"></div><div class="ds-top-related-works--grid-container"><div class="ds-related-content--container ds-top-related-works--container"><h2 class="ds-related-content--heading">Related papers</h2><div class="ds-related-work--container js-wsj-grid-card" data-collection-position="0" data-entity-id="575714" data-sort-order="default"><a class="ds-related-work--title js-wsj-grid-card-title ds2-5-body-md ds2-5-body-link" href="https://www.academia.edu/575714/Reconstructing_3d_pose_and_motion_from_a_single_camera_view">Reconstructing 3d pose and motion from a single camera view</a><div class="ds-related-work--metadata"><a class="js-wsj-grid-card-author ds2-5-body-sm ds2-5-body-link" data-author-id="5692" href="https://surrey.academia.edu/RichardBowden">Richard Bowden</a></div><p class="ds-related-work--metadata ds2-5-body-xs">Proceedings of the British Machine …, 1998</p><div class="ds-related-work--ctas"><button class="ds2-5-text-link ds2-5-text-link--inline js-swp-download-button" data-signup-modal="{"location":"wsj-grid-card-download-pdf-modal","work_title":"Reconstructing 3d pose and motion from a single camera view","attachmentId":2966015,"attachmentType":"pdf","work_url":"https://www.academia.edu/575714/Reconstructing_3d_pose_and_motion_from_a_single_camera_view","alternativeTracking":true}"><span class="material-symbols-outlined" style="font-size: 18px" translate="no">download</span><span class="ds2-5-text-link__content">Download free PDF</span></button><a class="ds2-5-text-link ds2-5-text-link--inline js-wsj-grid-card-view-pdf" 
href="https://www.academia.edu/575714/Reconstructing_3d_pose_and_motion_from_a_single_camera_view"><span class="ds2-5-text-link__content">View PDF</span><span class="material-symbols-outlined" style="font-size: 18px" translate="no">chevron_right</span></a></div></div><div class="ds-related-work--container js-wsj-grid-card" data-collection-position="1" data-entity-id="18155894" data-sort-order="default"><a class="ds-related-work--title js-wsj-grid-card-title ds2-5-body-md ds2-5-body-link" href="https://www.academia.edu/18155894/Temporal_motion_models_for_monocular_and_multiview_3D_human_body_tracking">Temporal motion models for monocular and multiview 3D human body tracking</a><div class="ds-related-work--metadata"><a class="js-wsj-grid-card-author ds2-5-body-sm ds2-5-body-link" data-author-id="38120728" href="https://independent.academia.edu/RaquelUrtasun">Raquel Urtasun</a></div><p class="ds-related-work--metadata ds2-5-body-xs">Computer Vision and Image Understanding, 2006</p><p class="ds-related-work--abstract ds2-5-body-sm">We explore an approach to 3D people tracking with learned motion models and deterministic optimization. The tracking problem is formulated as the minimization of a differentiable criterion whose differential structure is rich enough for optimization to be accomplished via hill-climbing. This avoids the computational expense of Monte Carlo methods, while yielding good results under challenging conditions. To demonstrate the generality of the approach we show that we can learn and track cyclic motions such as walking and running, as well as acyclic motions such as a golf swing. We also show results from both monocular and multi-camera tracking. 
Finally, we provide results with a motion model learned from multiple activities, and show how this model might be used for recognition.</p><div class="ds-related-work--ctas"><button class="ds2-5-text-link ds2-5-text-link--inline js-swp-download-button" data-signup-modal="{&quot;location&quot;:&quot;wsj-grid-card-download-pdf-modal&quot;,&quot;work_title&quot;:&quot;Temporal motion models for monocular and multiview 3D human body tracking&quot;,&quot;attachmentId&quot;:39905105,&quot;attachmentType&quot;:&quot;pdf&quot;,&quot;work_url&quot;:&quot;https://www.academia.edu/18155894/Temporal_motion_models_for_monocular_and_multiview_3D_human_body_tracking&quot;,&quot;alternativeTracking&quot;:true}"><span class="material-symbols-outlined" style="font-size: 18px" translate="no">download</span><span class="ds2-5-text-link__content">Download free PDF</span></button><a class="ds2-5-text-link ds2-5-text-link--inline js-wsj-grid-card-view-pdf" href="https://www.academia.edu/18155894/Temporal_motion_models_for_monocular_and_multiview_3D_human_body_tracking"><span class="ds2-5-text-link__content">View PDF</span><span class="material-symbols-outlined" style="font-size: 18px" translate="no">chevron_right</span></a></div></div><div class="ds-related-work--container js-wsj-grid-card" data-collection-position="2" data-entity-id="7944438" data-sort-order="default"><a class="ds-related-work--title js-wsj-grid-card-title ds2-5-body-md ds2-5-body-link" href="https://www.academia.edu/7944438/Modeling_Human_Bodies_from_Video_Sequences">Modeling Human Bodies from Video Sequences</a><div class="ds-related-work--metadata"><a class="js-wsj-grid-card-author ds2-5-body-sm ds2-5-body-link" data-author-id="15117113" href="https://epfl.academia.edu/RalfPl%C3%A4nkers">Ralf Plänkers</a></div><p class="ds-related-work--metadata ds2-5-body-xs">1999</p><p class="ds-related-work--abstract ds2-5-body-sm">In this paper, we show that, given video sequences of a moving person acquired with a multi-camera system, we can track joint locations during the movement and recover shape information. 
We outline techniques for fitting a simplified model to the noisy 3-D data extracted from the images and a new tracking process based on least squares matching is presented. The recovered shape and motion parameters can be used to either reconstruct the original sequence or to allow other animation models to mimic the subject's actions. Our ultimate goal is to automate the process of building complete and realistic animation models of humans, given a set of video sequences.</p><div class="ds-related-work--ctas"><button class="ds2-5-text-link ds2-5-text-link--inline js-swp-download-button" data-signup-modal="{&quot;location&quot;:&quot;wsj-grid-card-download-pdf-modal&quot;,&quot;work_title&quot;:&quot;Modeling Human Bodies from Video Sequences&quot;,&quot;attachmentId&quot;:48280565,&quot;attachmentType&quot;:&quot;pdf&quot;,&quot;work_url&quot;:&quot;https://www.academia.edu/7944438/Modeling_Human_Bodies_from_Video_Sequences&quot;,&quot;alternativeTracking&quot;:true}"><span class="material-symbols-outlined" style="font-size: 18px" translate="no">download</span><span class="ds2-5-text-link__content">Download free PDF</span></button><a class="ds2-5-text-link ds2-5-text-link--inline js-wsj-grid-card-view-pdf" href="https://www.academia.edu/7944438/Modeling_Human_Bodies_from_Video_Sequences"><span class="ds2-5-text-link__content">View PDF</span><span class="material-symbols-outlined" style="font-size: 18px" translate="no">chevron_right</span></a></div></div><div class="ds-related-work--container js-wsj-grid-card" data-collection-position="3" data-entity-id="13350173" data-sort-order="default"><a class="ds-related-work--title js-wsj-grid-card-title ds2-5-body-md ds2-5-body-link" href="https://www.academia.edu/13350173/Stochastic_tracking_of_3D_human_figures_using_2D_image_motion">Stochastic tracking of 3D human figures using 2D image motion</a><div class="ds-related-work--metadata"><a class="js-wsj-grid-card-author ds2-5-body-sm ds2-5-body-link" data-author-id="32587331" href="https://espol.academia.edu/MichaelBlack">Michael Black</a></div><p 
class="ds-related-work--metadata ds2-5-body-xs">2000</p><p class="ds-related-work--abstract ds2-5-body-sm">A probabilistic method for tracking 3D articulated human figures in monocular image sequences is presented. Within a Bayesian framework, we define a generative model of image appearance, a robust likelihood function based on image graylevel differences, and a prior probability distribution over pose and joint angles that models how humans move. The posterior probability distribution over model parameters is represented using a discrete set of samples and is propagated over time using particle filtering. The approach extends previous work on parameterized optical flow estimation to exploit a complex 3D articulated motion model. It also extends previous work on human motion tracking by including a perspective camera model, by modeling limb self occlusion, and by recovering 3D motion from a monocular sequence. The explicit posterior probability distribution represents ambiguities due to image matching, model singularities, and perspective projection. 
The method relies only on a frame-to-frame assumption of brightness constancy and hence is able to track people under changing viewpoints, in grayscale image sequences, and with complex unknown backgrounds.</p><div class="ds-related-work--ctas"><button class="ds2-5-text-link ds2-5-text-link--inline js-swp-download-button" data-signup-modal="{"location":"wsj-grid-card-download-pdf-modal","work_title":"Stochastic tracking of 3D human figures using 2D image motion","attachmentId":45440181,"attachmentType":"pdf","work_url":"https://www.academia.edu/13350173/Stochastic_tracking_of_3D_human_figures_using_2D_image_motion","alternativeTracking":true}"><span class="material-symbols-outlined" style="font-size: 18px" translate="no">download</span><span class="ds2-5-text-link__content">Download free PDF</span></button><a class="ds2-5-text-link ds2-5-text-link--inline js-wsj-grid-card-view-pdf" href="https://www.academia.edu/13350173/Stochastic_tracking_of_3D_human_figures_using_2D_image_motion"><span class="ds2-5-text-link__content">View PDF</span><span class="material-symbols-outlined" style="font-size: 18px" translate="no">chevron_right</span></a></div></div><div class="ds-related-work--container js-wsj-grid-card" data-collection-position="4" data-entity-id="12775177" data-sort-order="default"><a class="ds-related-work--title js-wsj-grid-card-title ds2-5-body-md ds2-5-body-link" href="https://www.academia.edu/12775177/_title_Modeling_human_bodies_from_video_sequences_title_"><title>Modeling human bodies from video sequences</title></a><div class="ds-related-work--metadata"><a class="js-wsj-grid-card-author ds2-5-body-sm ds2-5-body-link" data-author-id="31831284" href="https://ethz.academia.edu/ArminGruen">Armin Gruen</a></div><p class="ds-related-work--metadata ds2-5-body-xs">Videometrics VI, 1998</p><p class="ds-related-work--abstract ds2-5-body-sm">In this paper, we show that, given video sequences of a moving person acquired with a multi-camera system, we can track joint 
locations during the movement and recover shape information. We outline techniques for fitting a simplified model to the noisy 3-D data extracted from the images and a new tracking process based on least squares matching is presented. The recovered shape and motion parameters can be used to either reconstruct the original sequence or to allow other animation models to mimic the subject's actions. Our ultimate goal is to automate the process of building complete and realistic animation models of humans, given a set of video sequences.</p><div class="ds-related-work--ctas"><button class="ds2-5-text-link ds2-5-text-link--inline js-swp-download-button" data-signup-modal="{&quot;location&quot;:&quot;wsj-grid-card-download-pdf-modal&quot;,&quot;work_title&quot;:&quot;\u003ctitle\u003eModeling human bodies from video sequences\u003c/title\u003e&quot;,&quot;attachmentId&quot;:45943639,&quot;attachmentType&quot;:&quot;pdf&quot;,&quot;work_url&quot;:&quot;https://www.academia.edu/12775177/_title_Modeling_human_bodies_from_video_sequences_title_&quot;,&quot;alternativeTracking&quot;:true}"><span class="material-symbols-outlined" style="font-size: 18px" translate="no">download</span><span class="ds2-5-text-link__content">Download free PDF</span></button><a class="ds2-5-text-link ds2-5-text-link--inline js-wsj-grid-card-view-pdf" href="https://www.academia.edu/12775177/_title_Modeling_human_bodies_from_video_sequences_title_"><span class="ds2-5-text-link__content">View PDF</span><span class="material-symbols-outlined" style="font-size: 18px" translate="no">chevron_right</span></a></div></div><div class="ds-related-work--container js-wsj-grid-card" data-collection-position="5" data-entity-id="7944433" data-sort-order="default"><a class="ds-related-work--title js-wsj-grid-card-title ds2-5-body-md ds2-5-body-link" href="https://www.academia.edu/7944433/HUMAN_BODY_MODELING_AND_MOTION_ANALYSIS_FROM_VIDEO_SEQUENCES">HUMAN BODY MODELING AND MOTION ANALYSIS FROM VIDEO SEQUENCES</a><div class="ds-related-work--metadata"><a class="js-wsj-grid-card-author ds2-5-body-sm ds2-5-body-link" 
data-author-id="15117113" href="https://epfl.academia.edu/RalfPl%C3%A4nkers">Ralf Plänkers</a></div><p class="ds-related-work--metadata ds2-5-body-xs">1998</p><p class="ds-related-work--abstract ds2-5-body-sm">We present a comprehensive concept to fit animation models to a variety of different data derived from multi-image video sequences. Our goal is to record dynamically the body surface of a human in motion and to model it for animation purposes. This includes setting up and calibrating a system of three CCD-cameras, extracting image silhouettes, tracking individual key body points in 3-D, and generating surface data by stereo or multi-image matching. All these observations are brought together under a joint least squares estimation system, from which the body model parameters are derived. This represents a first report concerning our concept. The presented data stems from individual tests and is highly incomplete. However, these results support strongly the chosen concept and will lead to further developments and refinements.</p><div class="ds-related-work--ctas"><button class="ds2-5-text-link ds2-5-text-link--inline js-swp-download-button" data-signup-modal="{"location":"wsj-grid-card-download-pdf-modal","work_title":"HUMAN BODY MODELING AND MOTION ANALYSIS FROM VIDEO SEQUENCES","attachmentId":48280612,"attachmentType":"pdf","work_url":"https://www.academia.edu/7944433/HUMAN_BODY_MODELING_AND_MOTION_ANALYSIS_FROM_VIDEO_SEQUENCES","alternativeTracking":true}"><span class="material-symbols-outlined" style="font-size: 18px" translate="no">download</span><span class="ds2-5-text-link__content">Download free PDF</span></button><a class="ds2-5-text-link ds2-5-text-link--inline js-wsj-grid-card-view-pdf" href="https://www.academia.edu/7944433/HUMAN_BODY_MODELING_AND_MOTION_ANALYSIS_FROM_VIDEO_SEQUENCES"><span class="ds2-5-text-link__content">View PDF</span><span class="material-symbols-outlined" style="font-size: 18px" 
translate="no">chevron_right</span></a></div></div><div class="ds-related-work--container js-wsj-grid-card" data-collection-position="6" data-entity-id="7944426" data-sort-order="default"><a class="ds-related-work--title js-wsj-grid-card-title ds2-5-body-md ds2-5-body-link" href="https://www.academia.edu/7944426/Tracking_and_Modeling_People_in_Video_Sequences">Tracking and Modeling People in Video Sequences</a><div class="ds-related-work--metadata"><a class="js-wsj-grid-card-author ds2-5-body-sm ds2-5-body-link" data-author-id="15117113" href="https://epfl.academia.edu/RalfPl%C3%A4nkers">Ralf Plänkers</a></div><p class="ds-related-work--metadata ds2-5-body-xs">Computer Vision and Image Understanding, 2001</p><p class="ds-related-work--abstract ds2-5-body-sm">Tracking and modeling people from video sequences has become an increasingly important research topic, with applications including animation, surveillance and sports medicine. In this paper, we propose a model based 3-D approach to recovering both body shape and motion. It takes advantage of a sophisticated animation model to achieve both robustness and realism. Stereo sequences of people in motion serve as input to our system. From these, we extract a 2½-D description of the scene and, optionally, silhouette edges. We propose an integrated framework to fit the model and to track the person's motion. The environment does not have to be engineered. We recover not only the motion but also a full animation model closely resembling the subject. 
We present results of our system on real sequences and we show the generic model adjusting to the person and following various kinds of motion.</p><div class="ds-related-work--ctas"><button class="ds2-5-text-link ds2-5-text-link--inline js-swp-download-button" data-signup-modal="{"location":"wsj-grid-card-download-pdf-modal","work_title":"Tracking and Modeling People in Video Sequences","attachmentId":48280558,"attachmentType":"pdf","work_url":"https://www.academia.edu/7944426/Tracking_and_Modeling_People_in_Video_Sequences","alternativeTracking":true}"><span class="material-symbols-outlined" style="font-size: 18px" translate="no">download</span><span class="ds2-5-text-link__content">Download free PDF</span></button><a class="ds2-5-text-link ds2-5-text-link--inline js-wsj-grid-card-view-pdf" href="https://www.academia.edu/7944426/Tracking_and_Modeling_People_in_Video_Sequences"><span class="ds2-5-text-link__content">View PDF</span><span class="material-symbols-outlined" style="font-size: 18px" translate="no">chevron_right</span></a></div></div><div class="ds-related-work--container js-wsj-grid-card" data-collection-position="7" data-entity-id="49722683" data-sort-order="default"><a class="ds-related-work--title js-wsj-grid-card-title ds2-5-body-md ds2-5-body-link" href="https://www.academia.edu/49722683/Human_body_modelling_and_tracking_using_volumetric_representation_Selected_recent_studies_and_possibilities_for_extensions">Human body modelling and tracking using volumetric representation: Selected recent studies and possibilities for extensions</a><div class="ds-related-work--metadata"><a class="js-wsj-grid-card-author ds2-5-body-sm ds2-5-body-link" data-author-id="162436479" href="https://cnn.academia.edu/MohanTrivedi">Mohan Trivedi</a></div><p class="ds-related-work--metadata ds2-5-body-xs">2008 Second ACM/IEEE International Conference on Distributed Smart Cameras, 2008</p><p class="ds-related-work--abstract ds2-5-body-sm">Articulated human body modeling and 
tracking from vision data is an attractive research area with many potential applications. There has been a tremendous amount of related research work in this area. Therefore, having a comprehensive insight into high quality existing works and awareness of the research frontier in the area is essential for follow-up research studies. With that objective, this paper provides a review of the subarea of model based methods for human body modeling and tracking using volumetric (voxel) data. We will focus on analyzing and comparing some recent techniques, especially those from the past two years, in order to highlight trends in the domain as well as to point out limitations of the current state of the art. Based on this analysis, we will discuss our idea of combining Laplacian Eigenspace (LE) based voxel segmentation [20] with the Kinematically Constrained Gaussian Mixture Model (KC-GMM) method [3] to build a more powerful human body pose estimation system, as well as discuss other possibilities for future work.</p><div class="ds-related-work--ctas"><button class="ds2-5-text-link ds2-5-text-link--inline js-swp-download-button" data-signup-modal="{&quot;location&quot;:&quot;wsj-grid-card-download-pdf-modal&quot;,&quot;work_title&quot;:&quot;Human body modelling and tracking using volumetric representation: Selected recent studies and possibilities for extensions&quot;,&quot;attachmentId&quot;:67986098,&quot;attachmentType&quot;:&quot;pdf&quot;,&quot;work_url&quot;:&quot;https://www.academia.edu/49722683/Human_body_modelling_and_tracking_using_volumetric_representation_Selected_recent_studies_and_possibilities_for_extensions&quot;,&quot;alternativeTracking&quot;:true}"><span class="material-symbols-outlined" style="font-size: 18px" translate="no">download</span><span class="ds2-5-text-link__content">Download free PDF</span></button><a class="ds2-5-text-link ds2-5-text-link--inline js-wsj-grid-card-view-pdf" href="https://www.academia.edu/49722683/Human_body_modelling_and_tracking_using_volumetric_representation_Selected_recent_studies_and_possibilities_for_extensions"><span 
class="ds2-5-text-link__content">View PDF</span><span class="material-symbols-outlined" style="font-size: 18px" translate="no">chevron_right</span></a></div></div><div class="ds-related-work--container js-wsj-grid-card" data-collection-position="8" data-entity-id="25120370" data-sort-order="default"><a class="ds-related-work--title js-wsj-grid-card-title ds2-5-body-md ds2-5-body-link" href="https://www.academia.edu/25120370/Tracking_of_the_Articulated_Upper_Body_on_Multi_View_Stereo_Image_Sequences">Tracking of the Articulated Upper Body on Multi-View Stereo Image Sequences</a><div class="ds-related-work--metadata"><a class="js-wsj-grid-card-author ds2-5-body-sm ds2-5-body-link" data-author-id="48335347" href="https://independent.academia.edu/KNickel2">Kai Nickel</a></div><p class="ds-related-work--metadata ds2-5-body-xs">2006 IEEE Computer Society Conference on Computer Vision and Pattern Recognition - Volume 1 (CVPR'06), 2000</p><p class="ds-related-work--abstract ds2-5-body-sm">We propose a novel method for tracking an articulated model in a 3D-point cloud. The tracking problem is formulated as the registration of two point sets, one of them parameterised by the model's state vector and the other acquired from a 3D-sensor system. Finding the correct parameter vector is posed as a linear estimation problem, which is solved by means of a scaled unscented Kalman filter. Our method draws on concepts from the widely used iterative closest point registration algorithm (ICP), basing the measurement model on point correspondences established between the synthesised model point cloud and the measured 3D-data. We apply the algorithm to kinematically track a model of the human upper body on a point cloud obtained through stereo image processing from one or more stereo cameras. We determine torso position and orientation as well as joint angles of shoulders and elbows. The algorithm has been successfully tested on thousands of frames of real image data. 
Challenging sequences of several minutes' length were tracked correctly. Complete processing time remains below one second per frame.</p><div class="ds-related-work--ctas"><button class="ds2-5-text-link ds2-5-text-link--inline js-swp-download-button" data-signup-modal="{&quot;location&quot;:&quot;wsj-grid-card-download-pdf-modal&quot;,&quot;work_title&quot;:&quot;Tracking of the Articulated Upper Body on Multi-View Stereo Image Sequences&quot;,&quot;attachmentId&quot;:45440982,&quot;attachmentType&quot;:&quot;pdf&quot;,&quot;work_url&quot;:&quot;https://www.academia.edu/25120370/Tracking_of_the_Articulated_Upper_Body_on_Multi_View_Stereo_Image_Sequences&quot;,&quot;alternativeTracking&quot;:true}"><span class="material-symbols-outlined" style="font-size: 18px" translate="no">download</span><span class="ds2-5-text-link__content">Download free PDF</span></button><a class="ds2-5-text-link ds2-5-text-link--inline js-wsj-grid-card-view-pdf" href="https://www.academia.edu/25120370/Tracking_of_the_Articulated_Upper_Body_on_Multi_View_Stereo_Image_Sequences"><span class="ds2-5-text-link__content">View PDF</span><span class="material-symbols-outlined" style="font-size: 18px" translate="no">chevron_right</span></a></div></div><div class="ds-related-work--container js-wsj-grid-card" data-collection-position="9" data-entity-id="32236829" data-sort-order="default"><a class="ds-related-work--title js-wsj-grid-card-title ds2-5-body-md ds2-5-body-link" href="https://www.academia.edu/32236829/Model_based_human_gait_tracking_3D_reconstruction_and_recognition_in_uncalibrated_monocular_video">Model-based human gait tracking, 3D reconstruction and recognition in uncalibrated monocular video</a><div class="ds-related-work--metadata"><a class="js-wsj-grid-card-author ds2-5-body-sm ds2-5-body-link" data-author-id="62482908" href="https://independent.academia.edu/FarzadZargari">Farzad Zargari</a></div><p class="ds-related-work--metadata ds2-5-body-xs">The Imaging Science Journal, 2012</p><p class="ds-related-work--abstract ds2-5-body-sm">Automatic analysis of human motion includes 
initialisation, tracking, pose recovery and activity recognition. In this paper, a computing framework is developed to automatically analyse human motions through uncalibrated monocular video sequences. A model-based kinematics approach is proposed for human gait tracking. Based on the tracking results, 3D human poses and gait features are recovered and extracted. The recognition performance is evaluated by using different classifiers. The proposed method is advantageous in its capability of recognising human subjects walking non-parallel to the image plane.</p><div class="ds-related-work--ctas"><button class="ds2-5-text-link ds2-5-text-link--inline js-swp-download-button" data-signup-modal="{"location":"wsj-grid-card-download-pdf-modal","work_title":"Model-based human gait tracking, 3D reconstruction and recognition in uncalibrated monocular video","attachmentId":52460672,"attachmentType":"pdf","work_url":"https://www.academia.edu/32236829/Model_based_human_gait_tracking_3D_reconstruction_and_recognition_in_uncalibrated_monocular_video","alternativeTracking":true}"><span class="material-symbols-outlined" style="font-size: 18px" translate="no">download</span><span class="ds2-5-text-link__content">Download free PDF</span></button><a class="ds2-5-text-link ds2-5-text-link--inline js-wsj-grid-card-view-pdf" href="https://www.academia.edu/32236829/Model_based_human_gait_tracking_3D_reconstruction_and_recognition_in_uncalibrated_monocular_video"><span class="ds2-5-text-link__content">View PDF</span><span class="material-symbols-outlined" style="font-size: 18px" translate="no">chevron_right</span></a></div></div></div></div><div class="ds-sticky-ctas--wrapper js-loswp-sticky-ctas hidden"><div class="ds-sticky-ctas--grid-container"><div class="ds-sticky-ctas--container"><button class="ds2-5-button js-swp-download-button" data-signup-modal="{"location":"continue-reading-button--sticky-ctas","attachmentId":2966014,"attachmentType":"pdf","workUrl":null}">See full 
PDF</button><button class="ds2-5-button ds2-5-button--secondary js-swp-download-button" data-signup-modal="{"location":"download-pdf-button--sticky-ctas","attachmentId":2966014,"attachmentType":"pdf","workUrl":null}"><span class="material-symbols-outlined" style="font-size: 20px" translate="no">download</span>Download PDF</button></div></div></div><div class="ds-below-fold--grid-container"><div class="ds-work--container js-loswp-embedded-document"><div class="attachment_preview" data-attachment="Attachment_2966014" style="display: none"><div class="js-scribd-document-container"><div class="scribd--document-loading js-scribd-document-loader" style="display: block;"><img alt="Loading..." src="//a.academia-assets.com/images/loaders/paper-load.gif" /><p>Loading Preview</p></div></div><div style="text-align: center;"><div class="scribd--no-preview-alert js-preview-unavailable"><p>Sorry, preview is currently unavailable. You can download the paper by clicking the button above.</p></div></div></div></div><div class="ds-sidebar--container js-work-sidebar"><div class="ds-related-content--container"><h2 class="ds-related-content--heading">Related papers</h2><div class="ds-related-work--container js-related-work-sidebar-card" data-collection-position="0" data-entity-id="69190894" data-sort-order="default"><a class="ds-related-work--title js-related-work-grid-card-title ds2-5-body-md ds2-5-body-link" href="https://www.academia.edu/69190894/A_hierarchical_model_of_dynamics_for_tracking_people_with_a_single_video_camera">A hierarchical model of dynamics for tracking people with a single video camera</a><div class="ds-related-work--metadata"><a class="js-related-work-grid-card-author ds2-5-body-sm ds2-5-body-link" data-author-id="67890799" href="https://independent.academia.edu/DavidMarshall70">David Marshall</a></div><p class="ds-related-work--metadata ds2-5-body-xs">2000</p><div class="ds-related-work--ctas"><button class="ds2-5-text-link ds2-5-text-link--inline 
js-swp-download-button" data-signup-modal="{"location":"wsj-grid-card-download-pdf-modal","work_title":"A hierarchical model of dynamics for tracking people with a single video camera","attachmentId":79380391,"attachmentType":"pdf","work_url":"https://www.academia.edu/69190894/A_hierarchical_model_of_dynamics_for_tracking_people_with_a_single_video_camera","alternativeTracking":true}"><span class="material-symbols-outlined" style="font-size: 18px" translate="no">download</span><span class="ds2-5-text-link__content">Download free PDF</span></button><a class="ds2-5-text-link ds2-5-text-link--inline js-related-work-grid-card-view-pdf" href="https://www.academia.edu/69190894/A_hierarchical_model_of_dynamics_for_tracking_people_with_a_single_video_camera"><span class="ds2-5-text-link__content">View PDF</span><span class="material-symbols-outlined" style="font-size: 18px" translate="no">chevron_right</span></a></div></div><div class="ds-related-work--container js-related-work-sidebar-card" data-collection-position="1" data-entity-id="18927944" data-sort-order="default"><a class="ds-related-work--title js-related-work-grid-card-title ds2-5-body-md ds2-5-body-link" href="https://www.academia.edu/18927944/Tracking_people_in_three_dimensions_using_a_hierarchical_model_of_dynamics">Tracking people in three dimensions using a hierarchical model of dynamics</a><div class="ds-related-work--metadata"><a class="js-related-work-grid-card-author ds2-5-body-sm ds2-5-body-link" data-author-id="39476646" href="https://independent.academia.edu/PeterHall32">Peter Hall</a><span>, </span><a class="js-related-work-grid-card-author ds2-5-body-sm ds2-5-body-link" data-author-id="39055643" href="https://independent.academia.edu/YuliaHicks">Yulia Hicks</a></div><p class="ds-related-work--metadata ds2-5-body-xs">Image and Vision Computing, 2002</p><div class="ds-related-work--ctas"><button class="ds2-5-text-link ds2-5-text-link--inline js-swp-download-button" 
data-signup-modal="{"location":"wsj-grid-card-download-pdf-modal","work_title":"Tracking people in three dimensions using a hierarchical model of dynamics","attachmentId":42190843,"attachmentType":"pdf","work_url":"https://www.academia.edu/18927944/Tracking_people_in_three_dimensions_using_a_hierarchical_model_of_dynamics","alternativeTracking":true}"><span class="material-symbols-outlined" style="font-size: 18px" translate="no">download</span><span class="ds2-5-text-link__content">Download free PDF</span></button><a class="ds2-5-text-link ds2-5-text-link--inline js-related-work-grid-card-view-pdf" href="https://www.academia.edu/18927944/Tracking_people_in_three_dimensions_using_a_hierarchical_model_of_dynamics"><span class="ds2-5-text-link__content">View PDF</span><span class="material-symbols-outlined" style="font-size: 18px" translate="no">chevron_right</span></a></div></div><div class="ds-related-work--container js-related-work-sidebar-card" data-collection-position="2" data-entity-id="6816761" data-sort-order="default"><a class="ds-related-work--title js-related-work-grid-card-title ds2-5-body-md ds2-5-body-link" href="https://www.academia.edu/6816761/Recovering_3D_Human_Pose_from_Monocular_Images">Recovering 3D Human Pose from Monocular Images</a><div class="ds-related-work--metadata"><a class="js-related-work-grid-card-author ds2-5-body-sm ds2-5-body-link" data-author-id="11325663" href="https://independent.academia.edu/AnkurAgarwal14">Ankur Agarwal</a></div><p class="ds-related-work--metadata ds2-5-body-xs">IEEE Transactions on Pattern Analysis and Machine Intelligence, 2006</p><div class="ds-related-work--ctas"><button class="ds2-5-text-link ds2-5-text-link--inline js-swp-download-button" data-signup-modal="{"location":"wsj-grid-card-download-pdf-modal","work_title":"Recovering 3D Human Pose from Monocular 
Images","attachmentId":33518566,"attachmentType":"pdf","work_url":"https://www.academia.edu/6816761/Recovering_3D_Human_Pose_from_Monocular_Images","alternativeTracking":true}"><span class="material-symbols-outlined" style="font-size: 18px" translate="no">download</span><span class="ds2-5-text-link__content">Download free PDF</span></button><a class="ds2-5-text-link ds2-5-text-link--inline js-related-work-grid-card-view-pdf" href="https://www.academia.edu/6816761/Recovering_3D_Human_Pose_from_Monocular_Images"><span class="ds2-5-text-link__content">View PDF</span><span class="material-symbols-outlined" style="font-size: 18px" translate="no">chevron_right</span></a></div></div></div></div></div></div> </body> </html>