Recovering Partial Shape from Perspective Matrix Using Simple Zoom-Lens Camera Model
okada yoshihiro

Abstract

In 3D shape recovery using the perspective matrix, the calibration pattern had to be contained in the input images, which made it difficult to use zoomed images for 3D shape recovery. In this research, we use a simple zoom-lens camera model and realize camera calibration even when the calibration pattern is not contained in the input images. Moreover, the validity of the approach is shown by performing 3D shape recovery from images captured while zooming.

Keywords: Computer Science, Artificial Intelligence, Computer Vision, Camera Calibration, Zoom
Publication: IEEJ Transactions on Electronics, Information and Systems, 2004 (in Japanese)
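The perspective matrix the abstract refers to is the 3x4 projection matrix mapping homogeneous 3D world points to image points. A standard way to estimate it from 3D-to-2D correspondences is the direct linear transform (DLT); the sketch below illustrates that generic step only, not the paper's own zoom-lens calibration procedure, and assumes noise-free, non-degenerate correspondences. Function names are illustrative.

```python
import numpy as np

def estimate_perspective_matrix(world_pts, image_pts):
    """Direct linear transform (DLT): estimate the 3x4 perspective
    (projection) matrix P from >= 6 world-to-image correspondences.
    Each correspondence (X, Y, Z) <-> (u, v) yields two linear
    equations in the 12 entries of P."""
    A = []
    for (X, Y, Z), (u, v) in zip(world_pts, image_pts):
        A.append([X, Y, Z, 1, 0, 0, 0, 0, -u * X, -u * Y, -u * Z, -u])
        A.append([0, 0, 0, 0, X, Y, Z, 1, -v * X, -v * Y, -v * Z, -v])
    # P (up to scale) is the right singular vector of A with the
    # smallest singular value.
    _, _, Vt = np.linalg.svd(np.asarray(A))
    return Vt[-1].reshape(3, 4)

def project(P, X):
    """Project a 3D point with P and dehomogenize."""
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]
```

Under a simple zoom-lens model of the kind the abstract names, zooming is commonly modelled as a change of focal length in the intrinsic matrix K while the extrinsics stay fixed, so the matrix recovered above changes in a constrained, low-dimensional way between zoom settings.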
Related papers

- Use of View-Parameters in Free-Viewpoint Image Synthesis to Configure a Multi-View Acquisition System with Lens Array. Takeshi Naemura, Eizō Jōhō Media Gakkaishi, 2006.
- Spatial Domain Definition of Focus Measurement Method for Light Field Rendering and Its Application for Images Captured with Unstructured Array of Cameras. Takeshi Naemura, Eizō Jōhō Media Gakkaishi, 2005.
- On the Volumetric Reconstruction of Linear Objects from Multiple Visual Scenes. Satoshi Saga, IPSJ SIG Technical Report: Computer Vision and Image Media (CVIM), 2013.
data-sort-order="default"><a class="ds-related-work--title js-wsj-grid-card-title ds2-5-body-md ds2-5-body-link" href="https://www.academia.edu/117778889/Extraction_and_Viewing_Parameter_Control_of_Objects_in_a_3D_TV_System">Extraction and Viewing Parameter Control of Objects in a 3D TV System</a><div class="ds-related-work--metadata"><a class="js-wsj-grid-card-author ds2-5-body-sm ds2-5-body-link" data-author-id="32442920" href="https://u-tokyo.academia.edu/TakeshiNaemura">Takeshi Naemura</a></div><p class="ds-related-work--metadata ds2-5-body-xs">IEICE Technical Report; IEICE Tech. Rep., 2009</p><div class="ds-related-work--ctas"><button class="ds2-5-text-link ds2-5-text-link--inline js-swp-download-button" data-signup-modal="{&quot;location&quot;:&quot;wsj-grid-card-download-pdf-modal&quot;,&quot;work_title&quot;:&quot;Extraction and Viewing Parameter Control of Objects in a 3D TV System&quot;,&quot;attachmentId&quot;:113552890,&quot;attachmentType&quot;:&quot;pdf&quot;,&quot;work_url&quot;:&quot;https://www.academia.edu/117778889/Extraction_and_Viewing_Parameter_Control_of_Objects_in_a_3D_TV_System&quot;,&quot;alternativeTracking&quot;:true}"><span class="material-symbols-outlined" style="font-size: 18px" translate="no">download</span><span class="ds2-5-text-link__content">Download free PDF</span></button><a class="ds2-5-text-link ds2-5-text-link--inline js-wsj-grid-card-view-pdf" href="https://www.academia.edu/117778889/Extraction_and_Viewing_Parameter_Control_of_Objects_in_a_3D_TV_System"><span class="ds2-5-text-link__content">View PDF</span><span class="material-symbols-outlined" style="font-size: 18px" translate="no">chevron_right</span></a></div></div><div class="ds-related-work--container js-wsj-grid-card" data-collection-position="4" data-entity-id="118724393" data-sort-order="default"><a class="ds-related-work--title js-wsj-grid-card-title ds2-5-body-md ds2-5-body-link" 
href="https://www.academia.edu/118724393/Simultaneous_Color_Image_and_Depth_Map_Acquisition_with_a_Single_Camera_using_a_Color_Filtered_Aperture">Simultaneous Color Image and Depth Map Acquisition with a Single Camera using a Color-Filtered Aperture</a><div class="ds-related-work--metadata"><a class="js-wsj-grid-card-author ds2-5-body-sm ds2-5-body-link" data-author-id="148846853" href="https://independent.academia.edu/yusukemoriuchi">yusuke moriuchi</a></div><p class="ds-related-work--metadata ds2-5-body-xs">The Journal of The Institute of Image Information and Television Engineers, 2017</p><div class="ds-related-work--ctas"><button class="ds2-5-text-link ds2-5-text-link--inline js-swp-download-button" data-signup-modal="{&quot;location&quot;:&quot;wsj-grid-card-download-pdf-modal&quot;,&quot;work_title&quot;:&quot;Simultaneous Color Image and Depth Map Acquisition with a Single Camera using a Color-Filtered Aperture&quot;,&quot;attachmentId&quot;:114284028,&quot;attachmentType&quot;:&quot;pdf&quot;,&quot;work_url&quot;:&quot;https://www.academia.edu/118724393/Simultaneous_Color_Image_and_Depth_Map_Acquisition_with_a_Single_Camera_using_a_Color_Filtered_Aperture&quot;,&quot;alternativeTracking&quot;:true}"><span class="material-symbols-outlined" style="font-size: 18px" translate="no">download</span><span class="ds2-5-text-link__content">Download free PDF</span></button><a class="ds2-5-text-link ds2-5-text-link--inline js-wsj-grid-card-view-pdf" href="https://www.academia.edu/118724393/Simultaneous_Color_Image_and_Depth_Map_Acquisition_with_a_Single_Camera_using_a_Color_Filtered_Aperture"><span class="ds2-5-text-link__content">View PDF</span><span class="material-symbols-outlined" style="font-size: 18px" translate="no">chevron_right</span></a></div></div><div class="ds-related-work--container js-wsj-grid-card" data-collection-position="5" data-entity-id="113109815" data-sort-order="default"><a class="ds-related-work--title js-wsj-grid-card-title ds2-5-body-md 
ds2-5-body-link" href="https://www.academia.edu/113109815/Development_of_an_Equation_for_Predicting_Body_Surface_Area_Based_on_Three_Dimensional_Photonic_Image_Scanning">Development of an Equation for Predicting Body Surface Area Based on Three-Dimensional Photonic Image Scanning</a><div class="ds-related-work--metadata"><a class="js-wsj-grid-card-author ds2-5-body-sm ds2-5-body-link" data-author-id="184977936" href="https://independent.academia.edu/YasuoKawakami">Yasuo Kawakami</a></div><p class="ds-related-work--metadata ds2-5-body-xs">Japanese Journal of Physical Fitness and Sports Medicine, 2009</p><div class="ds-related-work--ctas"><button class="ds2-5-text-link ds2-5-text-link--inline js-swp-download-button" data-signup-modal="{&quot;location&quot;:&quot;wsj-grid-card-download-pdf-modal&quot;,&quot;work_title&quot;:&quot;Development of an Equation for Predicting Body Surface Area Based on Three-Dimensional Photonic Image Scanning&quot;,&quot;attachmentId&quot;:110156424,&quot;attachmentType&quot;:&quot;pdf&quot;,&quot;work_url&quot;:&quot;https://www.academia.edu/113109815/Development_of_an_Equation_for_Predicting_Body_Surface_Area_Based_on_Three_Dimensional_Photonic_Image_Scanning&quot;,&quot;alternativeTracking&quot;:true}"><span class="material-symbols-outlined" style="font-size: 18px" translate="no">download</span><span class="ds2-5-text-link__content">Download free PDF</span></button><a class="ds2-5-text-link ds2-5-text-link--inline js-wsj-grid-card-view-pdf" href="https://www.academia.edu/113109815/Development_of_an_Equation_for_Predicting_Body_Surface_Area_Based_on_Three_Dimensional_Photonic_Image_Scanning"><span class="ds2-5-text-link__content">View PDF</span><span class="material-symbols-outlined" style="font-size: 18px" translate="no">chevron_right</span></a></div></div><div class="ds-related-work--container js-wsj-grid-card" data-collection-position="6" data-entity-id="78896053" data-sort-order="default"><a class="ds-related-work--title 
js-wsj-grid-card-title ds2-5-body-md ds2-5-body-link" href="https://www.academia.edu/78896053/Scale_and_Rotaion_Invariant_Pattern_Detection_Using_Phase_Information">Scale and Rotaion Invariant Pattern Detection Using Phase Information</a><div class="ds-related-work--metadata"><a class="js-wsj-grid-card-author ds2-5-body-sm ds2-5-body-link" data-author-id="22941992" href="https://independent.academia.edu/SigeruOmatu">Sigeru Omatu</a></div><p class="ds-related-work--metadata ds2-5-body-xs">Ieej Transactions on Electronics Information and Systems, 2004</p><div class="ds-related-work--ctas"><button class="ds2-5-text-link ds2-5-text-link--inline js-swp-download-button" data-signup-modal="{&quot;location&quot;:&quot;wsj-grid-card-download-pdf-modal&quot;,&quot;work_title&quot;:&quot;Scale and Rotaion Invariant Pattern Detection Using Phase Information&quot;,&quot;attachmentId&quot;:85781087,&quot;attachmentType&quot;:&quot;pdf&quot;,&quot;work_url&quot;:&quot;https://www.academia.edu/78896053/Scale_and_Rotaion_Invariant_Pattern_Detection_Using_Phase_Information&quot;,&quot;alternativeTracking&quot;:true}"><span class="material-symbols-outlined" style="font-size: 18px" translate="no">download</span><span class="ds2-5-text-link__content">Download free PDF</span></button><a class="ds2-5-text-link ds2-5-text-link--inline js-wsj-grid-card-view-pdf" href="https://www.academia.edu/78896053/Scale_and_Rotaion_Invariant_Pattern_Detection_Using_Phase_Information"><span class="ds2-5-text-link__content">View PDF</span><span class="material-symbols-outlined" style="font-size: 18px" translate="no">chevron_right</span></a></div></div><div class="ds-related-work--container js-wsj-grid-card" data-collection-position="7" data-entity-id="117779026" data-sort-order="default"><a class="ds-related-work--title js-wsj-grid-card-title ds2-5-body-md ds2-5-body-link" 
href="https://www.academia.edu/117779026/Real_Time_Free_Viewpoint_Image_Synthesis_Using_Multi_View_Images_and_on_the_Fly_Estimation_of_View_Dependent_Depth_Map">Real-Time Free-Viewpoint Image Synthesis Using Multi-View Images and on-the-Fly Estimation of View-Dependent Depth Map</a><div class="ds-related-work--metadata"><a class="js-wsj-grid-card-author ds2-5-body-sm ds2-5-body-link" data-author-id="32442920" href="https://u-tokyo.academia.edu/TakeshiNaemura">Takeshi Naemura</a></div><p class="ds-related-work--metadata ds2-5-body-xs">Eizō Jōhō Media Gakkaishi, 2006</p><div class="ds-related-work--ctas"><button class="ds2-5-text-link ds2-5-text-link--inline js-swp-download-button" data-signup-modal="{&quot;location&quot;:&quot;wsj-grid-card-download-pdf-modal&quot;,&quot;work_title&quot;:&quot;Real-Time Free-Viewpoint Image Synthesis Using Multi-View Images and on-the-Fly Estimation of View-Dependent Depth Map&quot;,&quot;attachmentId&quot;:113552981,&quot;attachmentType&quot;:&quot;pdf&quot;,&quot;work_url&quot;:&quot;https://www.academia.edu/117779026/Real_Time_Free_Viewpoint_Image_Synthesis_Using_Multi_View_Images_and_on_the_Fly_Estimation_of_View_Dependent_Depth_Map&quot;,&quot;alternativeTracking&quot;:true}"><span class="material-symbols-outlined" style="font-size: 18px" translate="no">download</span><span class="ds2-5-text-link__content">Download free PDF</span></button><a class="ds2-5-text-link ds2-5-text-link--inline js-wsj-grid-card-view-pdf" href="https://www.academia.edu/117779026/Real_Time_Free_Viewpoint_Image_Synthesis_Using_Multi_View_Images_and_on_the_Fly_Estimation_of_View_Dependent_Depth_Map"><span class="ds2-5-text-link__content">View PDF</span><span class="material-symbols-outlined" style="font-size: 18px" translate="no">chevron_right</span></a></div></div><div class="ds-related-work--container js-wsj-grid-card" data-collection-position="8" data-entity-id="52228567" data-sort-order="default"><a class="ds-related-work--title js-wsj-grid-card-title 
ds2-5-body-md ds2-5-body-link" href="https://www.academia.edu/52228567/Studies_on_Range_Image_Segmentation_using_Curvature_Signs">Studies on Range Image Segmentation using Curvature Signs</a><div class="ds-related-work--metadata"><a class="js-wsj-grid-card-author ds2-5-body-sm ds2-5-body-link" data-author-id="126789624" href="https://independent.academia.edu/HitoshiWakizako">Hitoshi Wakizako</a></div><p class="ds-related-work--metadata ds2-5-body-xs">Journal of the Robotics Society of Japan</p><div class="ds-related-work--ctas"><button class="ds2-5-text-link ds2-5-text-link--inline js-swp-download-button" data-signup-modal="{&quot;location&quot;:&quot;wsj-grid-card-download-pdf-modal&quot;,&quot;work_title&quot;:&quot;Studies on Range Image Segmentation using Curvature Signs&quot;,&quot;attachmentId&quot;:69590922,&quot;attachmentType&quot;:&quot;pdf&quot;,&quot;work_url&quot;:&quot;https://www.academia.edu/52228567/Studies_on_Range_Image_Segmentation_using_Curvature_Signs&quot;,&quot;alternativeTracking&quot;:true}"><span class="material-symbols-outlined" style="font-size: 18px" translate="no">download</span><span class="ds2-5-text-link__content">Download free PDF</span></button><a class="ds2-5-text-link ds2-5-text-link--inline js-wsj-grid-card-view-pdf" href="https://www.academia.edu/52228567/Studies_on_Range_Image_Segmentation_using_Curvature_Signs"><span class="ds2-5-text-link__content">View PDF</span><span class="material-symbols-outlined" style="font-size: 18px" translate="no">chevron_right</span></a></div></div><div class="ds-related-work--container js-wsj-grid-card" data-collection-position="9" data-entity-id="117778933" data-sort-order="default"><a class="ds-related-work--title js-wsj-grid-card-title ds2-5-body-md ds2-5-body-link" href="https://www.academia.edu/117778933/Special_Issue_Image_Technology_of_Next_Generation_Self_Similarity_Modeling_for_Interpolation_and_Data_Compression_of_a_Multi_View_3_D_Image">Special Issue Image Technology of Next 
Generation. Self-Similarity Modeling for Interpolation and Data Compression of a Multi-View 3-D Image</a><div class="ds-related-work--metadata"><a class="js-wsj-grid-card-author ds2-5-body-sm ds2-5-body-link" data-author-id="32442920" href="https://u-tokyo.academia.edu/TakeshiNaemura">Takeshi Naemura</a></div><p class="ds-related-work--metadata ds2-5-body-xs">The Journal of the Institute of Television Engineers of Japan, 1994</p><div class="ds-related-work--ctas"><button class="ds2-5-text-link ds2-5-text-link--inline js-swp-download-button" data-signup-modal="{&quot;location&quot;:&quot;wsj-grid-card-download-pdf-modal&quot;,&quot;work_title&quot;:&quot;Special Issue Image Technology of Next Generation. Self-Similarity Modeling for Interpolation and Data Compression of a Multi-View 3-D Image&quot;,&quot;attachmentId&quot;:113552918,&quot;attachmentType&quot;:&quot;pdf&quot;,&quot;work_url&quot;:&quot;https://www.academia.edu/117778933/Special_Issue_Image_Technology_of_Next_Generation_Self_Similarity_Modeling_for_Interpolation_and_Data_Compression_of_a_Multi_View_3_D_Image&quot;,&quot;alternativeTracking&quot;:true}"><span class="material-symbols-outlined" style="font-size: 18px" translate="no">download</span><span class="ds2-5-text-link__content">Download free PDF</span></button><a class="ds2-5-text-link ds2-5-text-link--inline js-wsj-grid-card-view-pdf" href="https://www.academia.edu/117778933/Special_Issue_Image_Technology_of_Next_Generation_Self_Similarity_Modeling_for_Interpolation_and_Data_Compression_of_a_Multi_View_3_D_Image"><span class="ds2-5-text-link__content">View PDF</span><span class="material-symbols-outlined" style="font-size: 18px" translate="no">chevron_right</span></a></div></div></div></div><div class="ds-sticky-ctas--wrapper js-loswp-sticky-ctas hidden"><div class="ds-sticky-ctas--grid-container"><div class="ds-sticky-ctas--container"><button class="ds2-5-button js-swp-download-button" 
data-signup-modal="{&quot;location&quot;:&quot;continue-reading-button--sticky-ctas&quot;,&quot;attachmentId&quot;:89311509,&quot;attachmentType&quot;:&quot;pdf&quot;,&quot;workUrl&quot;:null}">See full PDF</button><button class="ds2-5-button ds2-5-button--secondary js-swp-download-button" data-signup-modal="{&quot;location&quot;:&quot;download-pdf-button--sticky-ctas&quot;,&quot;attachmentId&quot;:89311509,&quot;attachmentType&quot;:&quot;pdf&quot;,&quot;workUrl&quot;:null}"><span class="material-symbols-outlined" style="font-size: 20px" translate="no">download</span>Download PDF</button></div></div></div><div class="ds-below-fold--grid-container"><div class="ds-work--container js-loswp-embedded-document"><div class="attachment_preview" data-attachment="Attachment_89311509" style="display: none"><div class="js-scribd-document-container"><div class="scribd--document-loading js-scribd-document-loader" style="display: block;"><img alt="Loading..." src="//a.academia-assets.com/images/loaders/paper-load.gif" /><p>Loading Preview</p></div></div><div style="text-align: center;"><div class="scribd--no-preview-alert js-preview-unavailable"><p>Sorry, preview is currently unavailable. 
You can download the paper by clicking the button above.</p></div></div></div></div><div class="ds-sidebar--container js-work-sidebar"><div class="ds-related-content--container"><h2 class="ds-related-content--heading">Related papers</h2><div class="ds-related-work--container js-related-work-sidebar-card" data-collection-position="0" data-entity-id="85155860" data-sort-order="default"><a class="ds-related-work--title js-related-work-grid-card-title ds2-5-body-md ds2-5-body-link" href="https://www.academia.edu/85155860/Application_of_Fisheye_View_to_a_curvilinear_focus">Application of Fisheye View to a curvilinear focus</a><div class="ds-related-work--metadata"><a class="js-related-work-grid-card-author ds2-5-body-sm ds2-5-body-link" data-author-id="4196738" href="https://tamagawa.academia.edu/HidekazuShiozawa">Hidekazu Shiozawa</a></div><p class="ds-related-work--metadata ds2-5-body-xs">2002</p><div class="ds-related-work--ctas"><button class="ds2-5-text-link ds2-5-text-link--inline js-swp-download-button" data-signup-modal="{&quot;location&quot;:&quot;wsj-grid-card-download-pdf-modal&quot;,&quot;work_title&quot;:&quot;Application of Fisheye View to a curvilinear focus&quot;,&quot;attachmentId&quot;:89941713,&quot;attachmentType&quot;:&quot;pdf&quot;,&quot;work_url&quot;:&quot;https://www.academia.edu/85155860/Application_of_Fisheye_View_to_a_curvilinear_focus&quot;,&quot;alternativeTracking&quot;:true}"><span class="material-symbols-outlined" style="font-size: 18px" translate="no">download</span><span class="ds2-5-text-link__content">Download free PDF</span></button><a class="ds2-5-text-link ds2-5-text-link--inline js-related-work-grid-card-view-pdf" href="https://www.academia.edu/85155860/Application_of_Fisheye_View_to_a_curvilinear_focus"><span class="ds2-5-text-link__content">View PDF</span><span class="material-symbols-outlined" style="font-size: 18px" translate="no">chevron_right</span></a></div></div><div class="ds-related-work--container 
js-related-work-sidebar-card" data-collection-position="1" data-entity-id="93361344" data-sort-order="default"><a class="ds-related-work--title js-related-work-grid-card-title ds2-5-body-md ds2-5-body-link" href="https://www.academia.edu/93361344/Reconstruction_of_Scene_from_Multiple_Sketches">Reconstruction of Scene from Multiple Sketches</a><div class="ds-related-work--metadata"><a class="js-related-work-grid-card-author ds2-5-body-sm ds2-5-body-link" data-author-id="211518613" href="https://independent.academia.edu/YasuyukiSumi">Yasuyuki Sumi</a></div><p class="ds-related-work--metadata ds2-5-body-xs">Proceedings of the 29th Annual Symposium on User Interface Software and Technology, 2016</p><div class="ds-related-work--ctas"><button class="ds2-5-text-link ds2-5-text-link--inline js-swp-download-button" data-signup-modal="{&quot;location&quot;:&quot;wsj-grid-card-download-pdf-modal&quot;,&quot;work_title&quot;:&quot;Reconstruction of Scene from Multiple Sketches&quot;,&quot;attachmentId&quot;:96119688,&quot;attachmentType&quot;:&quot;pdf&quot;,&quot;work_url&quot;:&quot;https://www.academia.edu/93361344/Reconstruction_of_Scene_from_Multiple_Sketches&quot;,&quot;alternativeTracking&quot;:true}"><span class="material-symbols-outlined" style="font-size: 18px" translate="no">download</span><span class="ds2-5-text-link__content">Download free PDF</span></button><a class="ds2-5-text-link ds2-5-text-link--inline js-related-work-grid-card-view-pdf" href="https://www.academia.edu/93361344/Reconstruction_of_Scene_from_Multiple_Sketches"><span class="ds2-5-text-link__content">View PDF</span><span class="material-symbols-outlined" style="font-size: 18px" translate="no">chevron_right</span></a></div></div><div class="ds-related-work--container js-related-work-sidebar-card" data-collection-position="2" data-entity-id="108133180" data-sort-order="default"><a class="ds-related-work--title js-related-work-grid-card-title ds2-5-body-md ds2-5-body-link" 
href="https://www.academia.edu/108133180/Robust_Picture_Matching_Using_Optimum_Selection_of_Partial_Templates">Robust Picture Matching Using Optimum Selection of Partial Templates</a><div class="ds-related-work--metadata"><a class="js-related-work-grid-card-author ds2-5-body-sm ds2-5-body-link" data-author-id="59598963" href="https://independent.academia.edu/HaruhisaOkuda">Haruhisa Okuda</a></div><p class="ds-related-work--metadata ds2-5-body-xs">IEEJ Transactions on Electronics, Information and Systems, 2004</p><div class="ds-related-work--ctas"><button class="ds2-5-text-link ds2-5-text-link--inline js-swp-download-button" data-signup-modal="{&quot;location&quot;:&quot;wsj-grid-card-download-pdf-modal&quot;,&quot;work_title&quot;:&quot;Robust Picture Matching Using Optimum Selection of Partial Templates&quot;,&quot;attachmentId&quot;:106598098,&quot;attachmentType&quot;:&quot;pdf&quot;,&quot;work_url&quot;:&quot;https://www.academia.edu/108133180/Robust_Picture_Matching_Using_Optimum_Selection_of_Partial_Templates&quot;,&quot;alternativeTracking&quot;:true}"><span class="material-symbols-outlined" style="font-size: 18px" translate="no">download</span><span class="ds2-5-text-link__content">Download free PDF</span></button><a class="ds2-5-text-link ds2-5-text-link--inline js-related-work-grid-card-view-pdf" href="https://www.academia.edu/108133180/Robust_Picture_Matching_Using_Optimum_Selection_of_Partial_Templates"><span class="ds2-5-text-link__content">View PDF</span><span class="material-symbols-outlined" style="font-size: 18px" translate="no">chevron_right</span></a></div></div><div class="ds-related-work--container js-related-work-sidebar-card" data-collection-position="3" data-entity-id="100430363" data-sort-order="default"><a class="ds-related-work--title js-related-work-grid-card-title ds2-5-body-md ds2-5-body-link" href="https://www.academia.edu/100430363/Shape_Rectification_of_Building_Contour_for_Automatic_Generation_of_3D_Building_Models">Shape 
Rectification of Building Contour for Automatic Generation of 3D Building Models</a><div class="ds-related-work--metadata"><a class="js-related-work-grid-card-author ds2-5-body-sm ds2-5-body-link" data-author-id="184640410" href="https://independent.academia.edu/SugiharaKenichi">Kenichi Sugihara</a></div><p class="ds-related-work--metadata ds2-5-body-xs">Journal of Japan Society of Civil Engineers, Ser. F3 (Civil Engineering Informatics), 2016</p><div class="ds-related-work--ctas"><button class="ds2-5-text-link ds2-5-text-link--inline js-swp-download-button" data-signup-modal="{&quot;location&quot;:&quot;wsj-grid-card-download-pdf-modal&quot;,&quot;work_title&quot;:&quot;Shape Rectification of Building Contour for Automatic Generation of 3D Building Models&quot;,&quot;attachmentId&quot;:101256680,&quot;attachmentType&quot;:&quot;pdf&quot;,&quot;work_url&quot;:&quot;https://www.academia.edu/100430363/Shape_Rectification_of_Building_Contour_for_Automatic_Generation_of_3D_Building_Models&quot;,&quot;alternativeTracking&quot;:true}"><span class="material-symbols-outlined" style="font-size: 18px" translate="no">download</span><span class="ds2-5-text-link__content">Download free PDF</span></button><a class="ds2-5-text-link ds2-5-text-link--inline js-related-work-grid-card-view-pdf" href="https://www.academia.edu/100430363/Shape_Rectification_of_Building_Contour_for_Automatic_Generation_of_3D_Building_Models"><span class="ds2-5-text-link__content">View PDF</span><span class="material-symbols-outlined" style="font-size: 18px" translate="no">chevron_right</span></a></div></div><div class="ds-related-work--container js-related-work-sidebar-card" data-collection-position="4" data-entity-id="75352565" data-sort-order="default"><a class="ds-related-work--title js-related-work-grid-card-title ds2-5-body-md ds2-5-body-link" 
href="https://www.academia.edu/75352565/Information_Retrieval_and_Visualization_of_Massive_Database_using_Dimensional_Reduction_based_on_Vector_Space_Model">Information Retrieval and Visualization of Massive Database using Dimensional Reduction based on Vector Space Model</a><div class="ds-related-work--metadata"><a class="js-related-work-grid-card-author ds2-5-body-sm ds2-5-body-link" data-author-id="4945594" href="https://independent.academia.edu/MeiKobayashi">Mei Kobayashi</a></div><p class="ds-related-work--metadata ds2-5-body-xs">2002</p><div class="ds-related-work--ctas"><button class="ds2-5-text-link ds2-5-text-link--inline js-swp-download-button" data-signup-modal="{&quot;location&quot;:&quot;wsj-grid-card-download-pdf-modal&quot;,&quot;work_title&quot;:&quot;Information Retrieval and Visualization of Massive Database using Dimensional Reduction based on Vector Space Model&quot;,&quot;attachmentId&quot;:83154498,&quot;attachmentType&quot;:&quot;pdf&quot;,&quot;work_url&quot;:&quot;https://www.academia.edu/75352565/Information_Retrieval_and_Visualization_of_Massive_Database_using_Dimensional_Reduction_based_on_Vector_Space_Model&quot;,&quot;alternativeTracking&quot;:true}"><span class="material-symbols-outlined" style="font-size: 18px" translate="no">download</span><span class="ds2-5-text-link__content">Download free PDF</span></button><a class="ds2-5-text-link ds2-5-text-link--inline js-related-work-grid-card-view-pdf" href="https://www.academia.edu/75352565/Information_Retrieval_and_Visualization_of_Massive_Database_using_Dimensional_Reduction_based_on_Vector_Space_Model"><span class="ds2-5-text-link__content">View PDF</span><span class="material-symbols-outlined" style="font-size: 18px" translate="no">chevron_right</span></a></div></div><div class="ds-related-work--container js-related-work-sidebar-card" data-collection-position="5" data-entity-id="117395859" data-sort-order="default"><a class="ds-related-work--title js-related-work-grid-card-title 
ds2-5-body-md ds2-5-body-link" href="https://www.academia.edu/117395859/Effect_of_Singular_Value_Decomposition_and_Weighting_by_Singular_Value_of_Document_Term_Matrix_for_Large_scale_Data_Perspective_and_Targeted_Data_Extraction">Effect of Singular Value Decomposition and Weighting by Singular Value of Document-Term Matrix, for Large-scale Data Perspective and Targeted Data Extraction</a><div class="ds-related-work--metadata"><a class="js-related-work-grid-card-author ds2-5-body-sm ds2-5-body-link" data-author-id="293889218" href="https://independent.academia.edu/TakeshiKobayakawa">Takeshi Kobayakawa</a></div><p class="ds-related-work--metadata ds2-5-body-xs">Journal of Natural Language Processing, 2013</p><div class="ds-related-work--ctas"><button class="ds2-5-text-link ds2-5-text-link--inline js-swp-download-button" data-signup-modal="{&quot;location&quot;:&quot;wsj-grid-card-download-pdf-modal&quot;,&quot;work_title&quot;:&quot;Effect of Singular Value Decomposition and Weighting by Singular Value of Document-Term Matrix, for Large-scale Data Perspective and Targeted Data Extraction&quot;,&quot;attachmentId&quot;:113263748,&quot;attachmentType&quot;:&quot;pdf&quot;,&quot;work_url&quot;:&quot;https://www.academia.edu/117395859/Effect_of_Singular_Value_Decomposition_and_Weighting_by_Singular_Value_of_Document_Term_Matrix_for_Large_scale_Data_Perspective_and_Targeted_Data_Extraction&quot;,&quot;alternativeTracking&quot;:true}"><span class="material-symbols-outlined" style="font-size: 18px" translate="no">download</span><span class="ds2-5-text-link__content">Download free PDF</span></button><a class="ds2-5-text-link ds2-5-text-link--inline js-related-work-grid-card-view-pdf" href="https://www.academia.edu/117395859/Effect_of_Singular_Value_Decomposition_and_Weighting_by_Singular_Value_of_Document_Term_Matrix_for_Large_scale_Data_Perspective_and_Targeted_Data_Extraction"><span class="ds2-5-text-link__content">View PDF</span><span class="material-symbols-outlined" 
style="font-size: 18px" translate="no">chevron_right</span></a></div></div></div></div></div></div> </body> </html>
