Use of View-Parameters in Free-Viewpoint Image Synthesis to Configure a Multi-View Acquisition System with Lens Array
Takeshi Naemura (The University of Tokyo)
Eizō Jōhō Media Gakkaishi, 2006. Publisher: The Institute of Image Information and Television Engineers.
Language: Japanese
Keywords: Computer Science; Artificial Intelligence; Computer Vision; Rendering (Computer Graphics); Image Analysis (Mathematics); Image Synthesis; Limit (Mathematics)
URL: https://www.academia.edu/117779061/Use_of_View_Parameters_in_Free_Viewpoint_Image_Synthesis_to_Configure_a_Multi_View_Acquisition_System_with_Lens_Array
6 pages · 1 file · Free PDF available for download.
Related papers

- Real-Time Free-Viewpoint Image Synthesis Using Multi-View Images and on-the-Fly Estimation of View-Dependent Depth Map. Takeshi Naemura. Eizō Jōhō Media Gakkaishi, 2006.
- Ray-Space Coding Based on Free-Viewpoint Image Synthesis. Takeshi Naemura. Eizō Jōhō Media Gakkaishi, 2006.
- Spatial Domain Definition of Focus Measurement Method for Light Field Rendering and Its Application for Images Captured with Unstructured Array of Cameras. Takeshi Naemura. Eizō Jōhō Media Gakkaishi, 2005.
- Recovering Partial Shape from Perspective Matrix Using Simple Zoom-Lens Camera Model. okada yoshihiro. Ieej Transactions on Electronics, Information and
Systems, 2004</p><p class="ds-related-work--abstract ds2-5-body-sm">In 3D shape recovery using the perspective matrix, the calibration pattern had to appear in the input images, which made it difficult to use zoomed images for 3D shape recovery. In this research, we adopt a simple zoom-lens camera model and achieve camera calibration even when the calibration pattern is not contained in the input images. The validity of the approach is demonstrated by performing 3D shape recovery on images captured with zoom.</p><div class="ds-related-work--ctas"><button class="ds2-5-text-link ds2-5-text-link--inline js-swp-download-button" data-signup-modal="{"location":"wsj-grid-card-download-pdf-modal","work_title":"Recovering Partial Shape from Perspective Matrix Using Simple Zoom-Lens Camera Model","attachmentId":89311509,"attachmentType":"pdf","work_url":"https://www.academia.edu/84208092/Recovering_Partial_Shape_from_Perspective_Matrix_Using_Simple_Zoom_Lens_Camera_Model","alternativeTracking":true}"><span class="material-symbols-outlined" style="font-size: 18px" translate="no">download</span><span class="ds2-5-text-link__content">Download free PDF</span></button><a class="ds2-5-text-link ds2-5-text-link--inline js-wsj-grid-card-view-pdf" href="https://www.academia.edu/84208092/Recovering_Partial_Shape_from_Perspective_Matrix_Using_Simple_Zoom_Lens_Camera_Model"><span class="ds2-5-text-link__content">View PDF</span><span class="material-symbols-outlined" style="font-size: 18px" translate="no">chevron_right</span></a></div></div><div class="ds-related-work--container js-wsj-grid-card" data-collection-position="4" data-entity-id="117778889" data-sort-order="default"><a class="ds-related-work--title js-wsj-grid-card-title ds2-5-body-md ds2-5-body-link" href="https://www.academia.edu/117778889/Extraction_and_Viewing_Parameter_Control_of_Objects_in_a_3D_TV_System">Extraction and Viewing Parameter Control of Objects in a 3D TV 
System</a><div class="ds-related-work--metadata"><a class="js-wsj-grid-card-author ds2-5-body-sm ds2-5-body-link" data-author-id="32442920" href="https://u-tokyo.academia.edu/TakeshiNaemura">Takeshi Naemura</a></div><p class="ds-related-work--metadata ds2-5-body-xs">IEICE Technical Report; IEICE Tech. Rep., 2009</p><div class="ds-related-work--ctas"><button class="ds2-5-text-link ds2-5-text-link--inline js-swp-download-button" data-signup-modal="{"location":"wsj-grid-card-download-pdf-modal","work_title":"Extraction and Viewing Parameter Control of Objects in a 3D TV System","attachmentId":113552890,"attachmentType":"pdf","work_url":"https://www.academia.edu/117778889/Extraction_and_Viewing_Parameter_Control_of_Objects_in_a_3D_TV_System","alternativeTracking":true}"><span class="material-symbols-outlined" style="font-size: 18px" translate="no">download</span><span class="ds2-5-text-link__content">Download free PDF</span></button><a class="ds2-5-text-link ds2-5-text-link--inline js-wsj-grid-card-view-pdf" href="https://www.academia.edu/117778889/Extraction_and_Viewing_Parameter_Control_of_Objects_in_a_3D_TV_System"><span class="ds2-5-text-link__content">View PDF</span><span class="material-symbols-outlined" style="font-size: 18px" translate="no">chevron_right</span></a></div></div><div class="ds-related-work--container js-wsj-grid-card" data-collection-position="5" data-entity-id="118724393" data-sort-order="default"><a class="ds-related-work--title js-wsj-grid-card-title ds2-5-body-md ds2-5-body-link" href="https://www.academia.edu/118724393/Simultaneous_Color_Image_and_Depth_Map_Acquisition_with_a_Single_Camera_using_a_Color_Filtered_Aperture">Simultaneous Color Image and Depth Map Acquisition with a Single Camera using a Color-Filtered Aperture</a><div class="ds-related-work--metadata"><a class="js-wsj-grid-card-author ds2-5-body-sm ds2-5-body-link" data-author-id="148846853" href="https://independent.academia.edu/yusukemoriuchi">yusuke moriuchi</a></div><p 
class="ds-related-work--metadata ds2-5-body-xs">The Journal of The Institute of Image Information and Television Engineers, 2017</p><div class="ds-related-work--ctas"><button class="ds2-5-text-link ds2-5-text-link--inline js-swp-download-button" data-signup-modal="{"location":"wsj-grid-card-download-pdf-modal","work_title":"Simultaneous Color Image and Depth Map Acquisition with a Single Camera using a Color-Filtered Aperture","attachmentId":114284028,"attachmentType":"pdf","work_url":"https://www.academia.edu/118724393/Simultaneous_Color_Image_and_Depth_Map_Acquisition_with_a_Single_Camera_using_a_Color_Filtered_Aperture","alternativeTracking":true}"><span class="material-symbols-outlined" style="font-size: 18px" translate="no">download</span><span class="ds2-5-text-link__content">Download free PDF</span></button><a class="ds2-5-text-link ds2-5-text-link--inline js-wsj-grid-card-view-pdf" href="https://www.academia.edu/118724393/Simultaneous_Color_Image_and_Depth_Map_Acquisition_with_a_Single_Camera_using_a_Color_Filtered_Aperture"><span class="ds2-5-text-link__content">View PDF</span><span class="material-symbols-outlined" style="font-size: 18px" translate="no">chevron_right</span></a></div></div><div class="ds-related-work--container js-wsj-grid-card" data-collection-position="6" data-entity-id="115798207" data-sort-order="default"><a class="ds-related-work--title js-wsj-grid-card-title ds2-5-body-md ds2-5-body-link" href="https://www.academia.edu/115798207/On_the_Volumetric_Reconstruction_of_Linear_Objects_from_Multiple_Visual_Scenes">On the Volumetric Reconstruction of Linear Objects from Multiple Visual Scenes</a><div class="ds-related-work--metadata"><a class="js-wsj-grid-card-author ds2-5-body-sm ds2-5-body-link" data-author-id="244236186" href="https://independent.academia.edu/SagaSatoshi">Satoshi Saga</a></div><p class="ds-related-work--metadata ds2-5-body-xs">研究報告コンピュータビジョンとイメージメディア(CVIM), 2013</p><div class="ds-related-work--ctas"><button 
class="ds2-5-text-link ds2-5-text-link--inline js-swp-download-button" data-signup-modal="{"location":"wsj-grid-card-download-pdf-modal","work_title":"On the Volumetric Reconstruction of Linear Objects from Multiple Visual Scenes","attachmentId":112104472,"attachmentType":"pdf","work_url":"https://www.academia.edu/115798207/On_the_Volumetric_Reconstruction_of_Linear_Objects_from_Multiple_Visual_Scenes","alternativeTracking":true}"><span class="material-symbols-outlined" style="font-size: 18px" translate="no">download</span><span class="ds2-5-text-link__content">Download free PDF</span></button><a class="ds2-5-text-link ds2-5-text-link--inline js-wsj-grid-card-view-pdf" href="https://www.academia.edu/115798207/On_the_Volumetric_Reconstruction_of_Linear_Objects_from_Multiple_Visual_Scenes"><span class="ds2-5-text-link__content">View PDF</span><span class="material-symbols-outlined" style="font-size: 18px" translate="no">chevron_right</span></a></div></div><div class="ds-related-work--container js-wsj-grid-card" data-collection-position="7" data-entity-id="85631955" data-sort-order="default"><a class="ds-related-work--title js-wsj-grid-card-title ds2-5-body-md ds2-5-body-link" href="https://www.academia.edu/85631955/Background_Image_Generation_by_Analyzing_Long_Term_Images_for_Outdoor_Fixed_Camera">Background Image Generation by Analyzing Long-Term Images for Outdoor Fixed Camera</a><div class="ds-related-work--metadata"><a class="js-wsj-grid-card-author ds2-5-body-sm ds2-5-body-link" data-author-id="948400" href="https://jp.academia.edu/YasutomoKawanishi">Yasutomo Kawanishi</a></div><p class="ds-related-work--metadata ds2-5-body-xs">2011</p><p class="ds-related-work--abstract ds2-5-body-sm">a) E-mail: kawanishi@mm.media.kyoto-u.ac.jp ることは難しいため,これまでに観測した過去の観測 画像をもとにそのシーンで起きている変動を再現する 様々な手法が提案されている. 
A representative class of methods for estimating the background from previously observed images applies statistical approaches to the most recent observations. The simplest methods [5]–[7] estimate the background by computing the per-pixel temporal mean or median over multiple observed images and generate a background image from that estimate. Shimai et al. [8] proposed a method that adaptively estimates the background using M-estimation, a technique known from robust statistics. Alternatively, some approaches single out the variations occurring in the scene during the most recent observation period, model these variations explicitly, and estimate their parameters. Yoshimura et al. [9] proposed a method that estimates the background using a variable-brightness background model, which parameterizes the brightness component of the color information, and sequentially estimates the sunlight-component parameters with a Kalman filter. Liu et al. [10] proposed a method that models the background and foreground variations in an image as an ECD (Effect Components Description) and generates a background image by estimating short-term variations of background objects via Mean Shift. In these methods, when only a small number of images is used for processing, if a change occurs that did not occur within that period…</p><div class="ds-related-work--ctas"><button class="ds2-5-text-link ds2-5-text-link--inline js-swp-download-button" data-signup-modal="{"location":"wsj-grid-card-download-pdf-modal","work_title":"Background Image Generation by Analyzing Long-Term Images for Outdoor Fixed Camera","attachmentId":90270493,"attachmentType":"pdf","work_url":"https://www.academia.edu/85631955/Background_Image_Generation_by_Analyzing_Long_Term_Images_for_Outdoor_Fixed_Camera","alternativeTracking":true}"><span class="material-symbols-outlined" style="font-size: 18px" translate="no">download</span><span class="ds2-5-text-link__content">Download free PDF</span></button><a class="ds2-5-text-link ds2-5-text-link--inline js-wsj-grid-card-view-pdf" href="https://www.academia.edu/85631955/Background_Image_Generation_by_Analyzing_Long_Term_Images_for_Outdoor_Fixed_Camera"><span class="ds2-5-text-link__content">View PDF</span><span class="material-symbols-outlined" style="font-size: 18px" translate="no">chevron_right</span></a></div></div><div class="ds-related-work--container js-wsj-grid-card" data-collection-position="8" data-entity-id="85155860" data-sort-order="default"><a class="ds-related-work--title js-wsj-grid-card-title ds2-5-body-md ds2-5-body-link" href="https://www.academia.edu/85155860/Application_of_Fisheye_View_to_a_curvilinear_focus">Application of Fisheye View to a curvilinear focus</a><div class="ds-related-work--metadata"><a 
class="js-wsj-grid-card-author ds2-5-body-sm ds2-5-body-link" data-author-id="4196738" href="https://tamagawa.academia.edu/HidekazuShiozawa">Hidekazu Shiozawa</a></div><p class="ds-related-work--metadata ds2-5-body-xs">2002</p><div class="ds-related-work--ctas"><button class="ds2-5-text-link ds2-5-text-link--inline js-swp-download-button" data-signup-modal="{"location":"wsj-grid-card-download-pdf-modal","work_title":"Application of Fisheye View to a curvilinear focus","attachmentId":89941713,"attachmentType":"pdf","work_url":"https://www.academia.edu/85155860/Application_of_Fisheye_View_to_a_curvilinear_focus","alternativeTracking":true}"><span class="material-symbols-outlined" style="font-size: 18px" translate="no">download</span><span class="ds2-5-text-link__content">Download free PDF</span></button><a class="ds2-5-text-link ds2-5-text-link--inline js-wsj-grid-card-view-pdf" href="https://www.academia.edu/85155860/Application_of_Fisheye_View_to_a_curvilinear_focus"><span class="ds2-5-text-link__content">View PDF</span><span class="material-symbols-outlined" style="font-size: 18px" translate="no">chevron_right</span></a></div></div><div class="ds-related-work--container js-wsj-grid-card" data-collection-position="9" data-entity-id="117778916" data-sort-order="default"><a class="ds-related-work--title js-wsj-grid-card-title ds2-5-body-md ds2-5-body-link" href="https://www.academia.edu/117778916/Three_Dimensional_Image_Information_Media_Setting_Representation_of_Natural_Panorama_Scenes_and_Virtual_View_Generation">Three-Dimensional Image Information Media. 
Setting Representation of Natural Panorama Scenes and Virtual View Generation</a><div class="ds-related-work--metadata"><a class="js-wsj-grid-card-author ds2-5-body-sm ds2-5-body-link" data-author-id="32442920" href="https://u-tokyo.academia.edu/TakeshiNaemura">Takeshi Naemura</a></div><p class="ds-related-work--metadata ds2-5-body-xs">Eizō Jōhō Media Gakkaishi, 2002</p><div class="ds-related-work--ctas"><button class="ds2-5-text-link ds2-5-text-link--inline js-swp-download-button" data-signup-modal="{"location":"wsj-grid-card-download-pdf-modal","work_title":"Three-Dimensional Image Information Media. Setting Representation of Natural Panorama Scenes and Virtual View Generation","attachmentId":113552905,"attachmentType":"pdf","work_url":"https://www.academia.edu/117778916/Three_Dimensional_Image_Information_Media_Setting_Representation_of_Natural_Panorama_Scenes_and_Virtual_View_Generation","alternativeTracking":true}"><span class="material-symbols-outlined" style="font-size: 18px" translate="no">download</span><span class="ds2-5-text-link__content">Download free PDF</span></button><a class="ds2-5-text-link ds2-5-text-link--inline js-wsj-grid-card-view-pdf" href="https://www.academia.edu/117778916/Three_Dimensional_Image_Information_Media_Setting_Representation_of_Natural_Panorama_Scenes_and_Virtual_View_Generation"><span class="ds2-5-text-link__content">View PDF</span><span class="material-symbols-outlined" style="font-size: 18px" translate="no">chevron_right</span></a></div></div></div></div><div class="ds-sticky-ctas--wrapper js-loswp-sticky-ctas hidden"><div class="ds-sticky-ctas--grid-container"><div class="ds-sticky-ctas--container"><button class="ds2-5-button js-swp-download-button" data-signup-modal="{"location":"continue-reading-button--sticky-ctas","attachmentId":113552987,"attachmentType":"pdf","workUrl":null}">See full PDF</button><button class="ds2-5-button ds2-5-button--secondary js-swp-download-button" 
data-signup-modal="{"location":"download-pdf-button--sticky-ctas","attachmentId":113552987,"attachmentType":"pdf","workUrl":null}"><span class="material-symbols-outlined" style="font-size: 20px" translate="no">download</span>Download PDF</button></div></div></div><div class="ds-below-fold--grid-container"><div class="ds-work--container js-loswp-embedded-document"><div class="attachment_preview" data-attachment="Attachment_113552987" style="display: none"><div class="js-scribd-document-container"><div class="scribd--document-loading js-scribd-document-loader" style="display: block;"><img alt="Loading..." src="//a.academia-assets.com/images/loaders/paper-load.gif" /><p>Loading Preview</p></div></div><div style="text-align: center;"><div class="scribd--no-preview-alert js-preview-unavailable"><p>Sorry, preview is currently unavailable. You can download the paper by clicking the button above.</p></div></div></div></div><div class="ds-sidebar--container js-work-sidebar"><div class="ds-related-content--container"><h2 class="ds-related-content--heading">Related papers</h2><div class="ds-related-work--container js-related-work-sidebar-card" data-collection-position="0" data-entity-id="117778917" data-sort-order="default"><a class="ds-related-work--title js-related-work-grid-card-title ds2-5-body-md ds2-5-body-link" href="https://www.academia.edu/117778917/Three_Dimensional_Image_Information_Media_Telecentric_Capturing_System_for_Acquiring_Light_Ray_Data_of_3_D_Objects">Three-Dimensional Image Information Media. 
Telecentric Capturing System for Acquiring Light Ray Data of 3-D Objects</a><div class="ds-related-work--metadata"><a class="js-related-work-grid-card-author ds2-5-body-sm ds2-5-body-link" data-author-id="32442920" href="https://u-tokyo.academia.edu/TakeshiNaemura">Takeshi Naemura</a></div><p class="ds-related-work--metadata ds2-5-body-xs">Eizō Jōhō Media Gakkaishi, 2002</p><div class="ds-related-work--ctas"><button class="ds2-5-text-link ds2-5-text-link--inline js-swp-download-button" data-signup-modal="{"location":"wsj-grid-card-download-pdf-modal","work_title":"Three-Dimensional Image Information Media. Telecentric Capturing System for Acquiring Light Ray Data of 3-D Objects","attachmentId":113552910,"attachmentType":"pdf","work_url":"https://www.academia.edu/117778917/Three_Dimensional_Image_Information_Media_Telecentric_Capturing_System_for_Acquiring_Light_Ray_Data_of_3_D_Objects","alternativeTracking":true}"><span class="material-symbols-outlined" style="font-size: 18px" translate="no">download</span><span class="ds2-5-text-link__content">Download free PDF</span></button><a class="ds2-5-text-link ds2-5-text-link--inline js-related-work-grid-card-view-pdf" href="https://www.academia.edu/117778917/Three_Dimensional_Image_Information_Media_Telecentric_Capturing_System_for_Acquiring_Light_Ray_Data_of_3_D_Objects"><span class="ds2-5-text-link__content">View PDF</span><span class="material-symbols-outlined" style="font-size: 18px" translate="no">chevron_right</span></a></div></div><div class="ds-related-work--container js-related-work-sidebar-card" data-collection-position="1" data-entity-id="117977785" data-sort-order="default"><a class="ds-related-work--title js-related-work-grid-card-title ds2-5-body-md ds2-5-body-link" href="https://www.academia.edu/117977785/Effects_of_Method_for_Displaying_Eye_gaze_Location_and_Cursor_Indication_on_Operation_Performance_and_Usability_of_Eye_gaze_Input_System_for_Menu_Selection">Effects of Method for Displaying Eye-gaze 
Location and Cursor Indication on Operation Performance and Usability of Eye-gaze Input System for Menu Selection</a><div class="ds-related-work--metadata"><a class="js-related-work-grid-card-author ds2-5-body-sm ds2-5-body-link" data-author-id="54377698" href="https://independent.academia.edu/murataatsuo">atsuo murata</a></div><p class="ds-related-work--metadata ds2-5-body-xs">The Japanese Journal of Ergonomics, 2011</p><div class="ds-related-work--ctas"><button class="ds2-5-text-link ds2-5-text-link--inline js-swp-download-button" data-signup-modal="{"location":"wsj-grid-card-download-pdf-modal","work_title":"Effects of Method for Displaying Eye-gaze Location and Cursor Indication on Operation Performance and Usability of Eye-gaze Input System for Menu Selection","attachmentId":113709691,"attachmentType":"pdf","work_url":"https://www.academia.edu/117977785/Effects_of_Method_for_Displaying_Eye_gaze_Location_and_Cursor_Indication_on_Operation_Performance_and_Usability_of_Eye_gaze_Input_System_for_Menu_Selection","alternativeTracking":true}"><span class="material-symbols-outlined" style="font-size: 18px" translate="no">download</span><span class="ds2-5-text-link__content">Download free PDF</span></button><a class="ds2-5-text-link ds2-5-text-link--inline js-related-work-grid-card-view-pdf" href="https://www.academia.edu/117977785/Effects_of_Method_for_Displaying_Eye_gaze_Location_and_Cursor_Indication_on_Operation_Performance_and_Usability_of_Eye_gaze_Input_System_for_Menu_Selection"><span class="ds2-5-text-link__content">View PDF</span><span class="material-symbols-outlined" style="font-size: 18px" translate="no">chevron_right</span></a></div></div><div class="ds-related-work--container js-related-work-sidebar-card" data-collection-position="2" data-entity-id="117779056" data-sort-order="default"><a class="ds-related-work--title js-related-work-grid-card-title ds2-5-body-md ds2-5-body-link" 
href="https://www.academia.edu/117779056/Image_Media_Technology_for_Virtual_Reality_Applications_Efficient_Sampling_of_Light_Ray_Data_by_Sequential_Camera_Control">Image Media Technology for Virtual Reality Applications. Efficient Sampling of Light Ray Data by Sequential Camera Control</a><div class="ds-related-work--metadata"><a class="js-related-work-grid-card-author ds2-5-body-sm ds2-5-body-link" data-author-id="32442920" href="https://u-tokyo.academia.edu/TakeshiNaemura">Takeshi Naemura</a></div><p class="ds-related-work--metadata ds2-5-body-xs">Eizō Jōhō Media Gakkaishi, 1998</p><div class="ds-related-work--ctas"><button class="ds2-5-text-link ds2-5-text-link--inline js-swp-download-button" data-signup-modal="{"location":"wsj-grid-card-download-pdf-modal","work_title":"Image Media Technology for Virtual Reality Applications. Efficient Sampling of Light Ray Data by Sequential Camera Control","attachmentId":113552925,"attachmentType":"pdf","work_url":"https://www.academia.edu/117779056/Image_Media_Technology_for_Virtual_Reality_Applications_Efficient_Sampling_of_Light_Ray_Data_by_Sequential_Camera_Control","alternativeTracking":true}"><span class="material-symbols-outlined" style="font-size: 18px" translate="no">download</span><span class="ds2-5-text-link__content">Download free PDF</span></button><a class="ds2-5-text-link ds2-5-text-link--inline js-related-work-grid-card-view-pdf" href="https://www.academia.edu/117779056/Image_Media_Technology_for_Virtual_Reality_Applications_Efficient_Sampling_of_Light_Ray_Data_by_Sequential_Camera_Control"><span class="ds2-5-text-link__content">View PDF</span><span class="material-symbols-outlined" style="font-size: 18px" translate="no">chevron_right</span></a></div></div><div class="ds-related-work--container js-related-work-sidebar-card" data-collection-position="3" data-entity-id="117778933" data-sort-order="default"><a class="ds-related-work--title js-related-work-grid-card-title ds2-5-body-md ds2-5-body-link" 
href="https://www.academia.edu/117778933/Special_Issue_Image_Technology_of_Next_Generation_Self_Similarity_Modeling_for_Interpolation_and_Data_Compression_of_a_Multi_View_3_D_Image">Special Issue Image Technology of Next Generation. Self-Similarity Modeling for Interpolation and Data Compression of a Multi-View 3-D Image</a><div class="ds-related-work--metadata"><a class="js-related-work-grid-card-author ds2-5-body-sm ds2-5-body-link" data-author-id="32442920" href="https://u-tokyo.academia.edu/TakeshiNaemura">Takeshi Naemura</a></div><p class="ds-related-work--metadata ds2-5-body-xs">The Journal of the Institute of Television Engineers of Japan, 1994</p><div class="ds-related-work--ctas"><button class="ds2-5-text-link ds2-5-text-link--inline js-swp-download-button" data-signup-modal="{"location":"wsj-grid-card-download-pdf-modal","work_title":"Special Issue Image Technology of Next Generation. Self-Similarity Modeling for Interpolation and Data Compression of a Multi-View 3-D Image","attachmentId":113552918,"attachmentType":"pdf","work_url":"https://www.academia.edu/117778933/Special_Issue_Image_Technology_of_Next_Generation_Self_Similarity_Modeling_for_Interpolation_and_Data_Compression_of_a_Multi_View_3_D_Image","alternativeTracking":true}"><span class="material-symbols-outlined" style="font-size: 18px" translate="no">download</span><span class="ds2-5-text-link__content">Download free PDF</span></button><a class="ds2-5-text-link ds2-5-text-link--inline js-related-work-grid-card-view-pdf" href="https://www.academia.edu/117778933/Special_Issue_Image_Technology_of_Next_Generation_Self_Similarity_Modeling_for_Interpolation_and_Data_Compression_of_a_Multi_View_3_D_Image"><span class="ds2-5-text-link__content">View PDF</span><span class="material-symbols-outlined" style="font-size: 18px" translate="no">chevron_right</span></a></div></div><div class="ds-related-work--container js-related-work-sidebar-card" data-collection-position="4" data-entity-id="117779124" 
data-sort-order="default"><a class="ds-related-work--title js-related-work-grid-card-title ds2-5-body-md ds2-5-body-link" href="https://www.academia.edu/117779124/Acquisition_of_light_rays_using_telecentric_lens">Acquisition of light rays using telecentric lens</a><div class="ds-related-work--metadata"><a class="js-related-work-grid-card-author ds2-5-body-sm ds2-5-body-link" data-author-id="32442920" href="https://u-tokyo.academia.edu/TakeshiNaemura">Takeshi Naemura</a></div><p class="ds-related-work--metadata ds2-5-body-xs">2003</p><div class="ds-related-work--ctas"><button class="ds2-5-text-link ds2-5-text-link--inline js-swp-download-button" data-signup-modal="{"location":"wsj-grid-card-download-pdf-modal","work_title":"Acquisition of light rays using telecentric lens","attachmentId":113553002,"attachmentType":"pdf","work_url":"https://www.academia.edu/117779124/Acquisition_of_light_rays_using_telecentric_lens","alternativeTracking":true}"><span class="material-symbols-outlined" style="font-size: 18px" translate="no">download</span><span class="ds2-5-text-link__content">Download free PDF</span></button><a class="ds2-5-text-link ds2-5-text-link--inline js-related-work-grid-card-view-pdf" href="https://www.academia.edu/117779124/Acquisition_of_light_rays_using_telecentric_lens"><span class="ds2-5-text-link__content">View PDF</span><span class="material-symbols-outlined" style="font-size: 18px" translate="no">chevron_right</span></a></div></div><div class="ds-related-work--container js-related-work-sidebar-card" data-collection-position="5" data-entity-id="118574774" data-sort-order="default"><a class="ds-related-work--title js-related-work-grid-card-title ds2-5-body-md ds2-5-body-link" href="https://www.academia.edu/118574774/Shack_Hartmann_Wavefront_Sensor_with_Large_Dynamic_Range_Using_Microlens_Array_Alternately_Aligned_Spherical_Lenses_and_Astigmatic_Lenses">Shack-Hartmann Wavefront Sensor with Large Dynamic Range Using Microlens Array Alternately Aligned 
Spherical Lenses and Astigmatic Lenses</a><div class="ds-related-work--metadata"><a class="js-related-work-grid-card-author ds2-5-body-sm ds2-5-body-link" data-author-id="33284087" href="https://wakayama-u.academia.edu/TakanoriNomura">Takanori Nomura</a></div><p class="ds-related-work--metadata ds2-5-body-xs">The Japan Society of Applied Physics, 2016</p><div class="ds-related-work--ctas"><button class="ds2-5-text-link ds2-5-text-link--inline js-swp-download-button" data-signup-modal="{"location":"wsj-grid-card-download-pdf-modal","work_title":"Shack-Hartmann Wavefront Sensor with Large Dynamic Range Using Microlens Array Alternately Aligned Spherical Lenses and Astigmatic Lenses","attachmentId":114168374,"attachmentType":"pdf","work_url":"https://www.academia.edu/118574774/Shack_Hartmann_Wavefront_Sensor_with_Large_Dynamic_Range_Using_Microlens_Array_Alternately_Aligned_Spherical_Lenses_and_Astigmatic_Lenses","alternativeTracking":true}"><span class="material-symbols-outlined" style="font-size: 18px" translate="no">download</span><span class="ds2-5-text-link__content">Download free PDF</span></button><a class="ds2-5-text-link ds2-5-text-link--inline js-related-work-grid-card-view-pdf" href="https://www.academia.edu/118574774/Shack_Hartmann_Wavefront_Sensor_with_Large_Dynamic_Range_Using_Microlens_Array_Alternately_Aligned_Spherical_Lenses_and_Astigmatic_Lenses"><span class="ds2-5-text-link__content">View PDF</span><span class="material-symbols-outlined" style="font-size: 18px" translate="no">chevron_right</span></a></div></div><div class="ds-related-work--container js-related-work-sidebar-card" data-collection-position="6" data-entity-id="79973526" data-sort-order="default"><a class="ds-related-work--title js-related-work-grid-card-title ds2-5-body-md ds2-5-body-link" href="https://www.academia.edu/79973526/Analysis_and_implementation_of_non_linear_spatial_filtering_for_image_processing">Analysis and implementation of non linear spatial filtering for image 
processing</a><div class="ds-related-work--metadata"><a class="js-related-work-grid-card-author ds2-5-body-sm ds2-5-body-link" data-author-id="207154827" href="https://independent.academia.edu/VociF">Francesco Voci</a></div><p class="ds-related-work--metadata ds2-5-body-xs">2004</p><div class="ds-related-work--ctas"><button class="ds2-5-text-link ds2-5-text-link--inline js-swp-download-button" data-signup-modal="{"location":"wsj-grid-card-download-pdf-modal","work_title":"Analysis and implementation of non linear spatial filtering for image processing","attachmentId":86509510,"attachmentType":"pdf","work_url":"https://www.academia.edu/79973526/Analysis_and_implementation_of_non_linear_spatial_filtering_for_image_processing","alternativeTracking":true}"><span class="material-symbols-outlined" style="font-size: 18px" translate="no">download</span><span class="ds2-5-text-link__content">Download free PDF</span></button><a class="ds2-5-text-link ds2-5-text-link--inline js-related-work-grid-card-view-pdf" href="https://www.academia.edu/79973526/Analysis_and_implementation_of_non_linear_spatial_filtering_for_image_processing"><span class="ds2-5-text-link__content">View PDF</span><span class="material-symbols-outlined" style="font-size: 18px" translate="no">chevron_right</span></a></div></div><div class="ds-related-work--container js-related-work-sidebar-card" data-collection-position="7" data-entity-id="117977782" data-sort-order="default"><a class="ds-related-work--title js-related-work-grid-card-title ds2-5-body-md ds2-5-body-link" href="https://www.academia.edu/117977782/A_Proposal_of_Prediction_Model_of_Pointing_Time_in_Eye_gaze_Input_System_based_on_Homing_and_Ballistic_Eye_Movements">A Proposal of Prediction Model of Pointing Time in Eye-gaze Input System based on Homing and Ballistic Eye Movements</a><div class="ds-related-work--metadata"><a class="js-related-work-grid-card-author ds2-5-body-sm ds2-5-body-link" data-author-id="54377698" 
href="https://independent.academia.edu/murataatsuo">atsuo murata</a></div><p class="ds-related-work--metadata ds2-5-body-xs">The Japanese Journal of Ergonomics, 2022</p><div class="ds-related-work--ctas"><button class="ds2-5-text-link ds2-5-text-link--inline js-swp-download-button" data-signup-modal="{"location":"wsj-grid-card-download-pdf-modal","work_title":"A Proposal of Prediction Model of Pointing Time in Eye-gaze Input System based on Homing and Ballistic Eye Movements","attachmentId":113709699,"attachmentType":"pdf","work_url":"https://www.academia.edu/117977782/A_Proposal_of_Prediction_Model_of_Pointing_Time_in_Eye_gaze_Input_System_based_on_Homing_and_Ballistic_Eye_Movements","alternativeTracking":true}"><span class="material-symbols-outlined" style="font-size: 18px" translate="no">download</span><span class="ds2-5-text-link__content">Download free PDF</span></button><a class="ds2-5-text-link ds2-5-text-link--inline js-related-work-grid-card-view-pdf" href="https://www.academia.edu/117977782/A_Proposal_of_Prediction_Model_of_Pointing_Time_in_Eye_gaze_Input_System_based_on_Homing_and_Ballistic_Eye_Movements"><span class="ds2-5-text-link__content">View PDF</span><span class="material-symbols-outlined" style="font-size: 18px" translate="no">chevron_right</span></a></div></div><div class="ds-related-work--container js-related-work-sidebar-card" data-collection-position="8" data-entity-id="56096849" data-sort-order="default"><a class="ds-related-work--title js-related-work-grid-card-title ds2-5-body-md ds2-5-body-link" href="https://www.academia.edu/56096849/Multiple_attenuation_using_3D_SRME_br_and_mdash_Optimization_of_Multiple_Contribution_Gather_and_mdash">Multiple attenuation using 3D SRME<br/>&mdash;Optimization of Multiple Contribution Gather&mdash</a><div class="ds-related-work--metadata"><a class="js-related-work-grid-card-author ds2-5-body-sm ds2-5-body-link" data-author-id="117352337" href="https://independent.academia.edu/MasafumiKatou">Masafumi 
Katou</a></div><p class="ds-related-work--metadata ds2-5-body-xs">BUTSURI-TANSA(Geophysical Exploration)</p><div class="ds-related-work--ctas"><button class="ds2-5-text-link ds2-5-text-link--inline js-swp-download-button" data-signup-modal="{"location":"wsj-grid-card-download-pdf-modal","work_title":"Multiple attenuation using 3D SRME\u003cbr/\u003e\u0026mdash;Optimization of Multiple Contribution Gather\u0026mdash","attachmentId":71651196,"attachmentType":"pdf","work_url":"https://www.academia.edu/56096849/Multiple_attenuation_using_3D_SRME_br_and_mdash_Optimization_of_Multiple_Contribution_Gather_and_mdash","alternativeTracking":true}"><span class="material-symbols-outlined" style="font-size: 18px" translate="no">download</span><span class="ds2-5-text-link__content">Download free PDF</span></button><a class="ds2-5-text-link ds2-5-text-link--inline js-related-work-grid-card-view-pdf" href="https://www.academia.edu/56096849/Multiple_attenuation_using_3D_SRME_br_and_mdash_Optimization_of_Multiple_Contribution_Gather_and_mdash"><span class="ds2-5-text-link__content">View PDF</span><span class="material-symbols-outlined" style="font-size: 18px" translate="no">chevron_right</span></a></div></div><div class="ds-related-work--container js-related-work-sidebar-card" data-collection-position="9" data-entity-id="117779073" data-sort-order="default"><a class="ds-related-work--title js-related-work-grid-card-title ds2-5-body-md ds2-5-body-link" href="https://www.academia.edu/117779073/Interactive_Multi_view_Video_Segmentation_System_using_Spatio_temporal_Information_Propagation">Interactive Multi-view Video Segmentation System using Spatio-temporal Information Propagation</a><div class="ds-related-work--metadata"><a class="js-related-work-grid-card-author ds2-5-body-sm ds2-5-body-link" data-author-id="32442920" href="https://u-tokyo.academia.edu/TakeshiNaemura">Takeshi Naemura</a></div><p class="ds-related-work--metadata ds2-5-body-xs">Eizō Jōhō Media Gakkaishi, 2012</p><div 
class="ds-related-work--ctas"><button class="ds2-5-text-link ds2-5-text-link--inline js-swp-download-button" data-signup-modal="{"location":"wsj-grid-card-download-pdf-modal","work_title":"Interactive Multi-view Video Segmentation System using Spatio-temporal Information Propagation","attachmentId":113552991,"attachmentType":"pdf","work_url":"https://www.academia.edu/117779073/Interactive_Multi_view_Video_Segmentation_System_using_Spatio_temporal_Information_Propagation","alternativeTracking":true}"><span class="material-symbols-outlined" style="font-size: 18px" translate="no">download</span><span class="ds2-5-text-link__content">Download free PDF</span></button><a class="ds2-5-text-link ds2-5-text-link--inline js-related-work-grid-card-view-pdf" href="https://www.academia.edu/117779073/Interactive_Multi_view_Video_Segmentation_System_using_Spatio_temporal_Information_Propagation"><span class="ds2-5-text-link__content">View PDF</span><span class="material-symbols-outlined" style="font-size: 18px" translate="no">chevron_right</span></a></div></div><div class="ds-related-work--container js-related-work-sidebar-card" data-collection-position="10" data-entity-id="123317477" data-sort-order="default"><a class="ds-related-work--title js-related-work-grid-card-title ds2-5-body-md ds2-5-body-link" href="https://www.academia.edu/123317477/A_Kansei_Method_for_Retrieving_Images_from_a_Database_Using_Colors">A Kansei Method for Retrieving Images from a Database Using Colors</a><div class="ds-related-work--metadata"><a class="js-related-work-grid-card-author ds2-5-body-sm ds2-5-body-link" data-author-id="50167568" href="https://independent.academia.edu/KunioKondo">Kunio Kondo</a></div><p class="ds-related-work--metadata ds2-5-body-xs">The Journal of the Institute of Image Information and Television Engineers, 2000</p><div class="ds-related-work--ctas"><button class="ds2-5-text-link ds2-5-text-link--inline js-swp-download-button" 
data-signup-modal="{"location":"wsj-grid-card-download-pdf-modal","work_title":"A Kansei Method for Retrieving Images from a Database Using Colors","attachmentId":117777241,"attachmentType":"pdf","work_url":"https://www.academia.edu/123317477/A_Kansei_Method_for_Retrieving_Images_from_a_Database_Using_Colors","alternativeTracking":true}"><span class="material-symbols-outlined" style="font-size: 18px" translate="no">download</span><span class="ds2-5-text-link__content">Download free PDF</span></button><a class="ds2-5-text-link ds2-5-text-link--inline js-related-work-grid-card-view-pdf" href="https://www.academia.edu/123317477/A_Kansei_Method_for_Retrieving_Images_from_a_Database_Using_Colors"><span class="ds2-5-text-link__content">View PDF</span><span class="material-symbols-outlined" style="font-size: 18px" translate="no">chevron_right</span></a></div></div><div class="ds-related-work--container js-related-work-sidebar-card" data-collection-position="11" data-entity-id="78747217" data-sort-order="default"><a class="ds-related-work--title js-related-work-grid-card-title ds2-5-body-md ds2-5-body-link" href="https://www.academia.edu/78747217/High_Speed_Binocular_Active_Camera_System_for_Capturing_Good_Image_of_a_Moving_Object">High-Speed Binocular Active Camera System for Capturing Good Image of a Moving Object</a><div class="ds-related-work--metadata"><a class="js-related-work-grid-card-author ds2-5-body-sm ds2-5-body-link" data-author-id="11974847" href="https://independent.academia.edu/Haiyuanwu">Haiyuan wu</a></div><p class="ds-related-work--metadata ds2-5-body-xs">2008</p><div class="ds-related-work--ctas"><button class="ds2-5-text-link ds2-5-text-link--inline js-swp-download-button" data-signup-modal="{"location":"wsj-grid-card-download-pdf-modal","work_title":"High-Speed Binocular Active Camera System for Capturing Good Image of a Moving 
Object","attachmentId":85684720,"attachmentType":"pdf","work_url":"https://www.academia.edu/78747217/High_Speed_Binocular_Active_Camera_System_for_Capturing_Good_Image_of_a_Moving_Object","alternativeTracking":true}"><span class="material-symbols-outlined" style="font-size: 18px" translate="no">download</span><span class="ds2-5-text-link__content">Download free PDF</span></button><a class="ds2-5-text-link ds2-5-text-link--inline js-related-work-grid-card-view-pdf" href="https://www.academia.edu/78747217/High_Speed_Binocular_Active_Camera_System_for_Capturing_Good_Image_of_a_Moving_Object"><span class="ds2-5-text-link__content">View PDF</span><span class="material-symbols-outlined" style="font-size: 18px" translate="no">chevron_right</span></a></div></div><div class="ds-related-work--container js-related-work-sidebar-card" data-collection-position="12" data-entity-id="77307285" data-sort-order="default"><a class="ds-related-work--title js-related-work-grid-card-title ds2-5-body-md ds2-5-body-link" href="https://www.academia.edu/77307285/Selection_Method_of_Correlating_Point_for_Integrating_Data_Produced_by_Laser_Scanning_and_SFM">Selection Method of Correlating Point for Integrating Data Produced by Laser Scanning and SFM</a><div class="ds-related-work--metadata"><a class="js-related-work-grid-card-author ds2-5-body-sm ds2-5-body-link" data-author-id="4079549" href="https://independent.academia.edu/YoshihiroYasumuro">Yoshihiro Yasumuro</a></div><p class="ds-related-work--metadata ds2-5-body-xs">Journal of Japan Society of Civil Engineers, Ser. 
F3 (Civil Engineering Informatics), 2016</p><div class="ds-related-work--ctas"><button class="ds2-5-text-link ds2-5-text-link--inline js-swp-download-button" data-signup-modal="{"location":"wsj-grid-card-download-pdf-modal","work_title":"Selection Method of Correlating Point for Integrating Data Produced by Laser Scanning and SFM","attachmentId":84703000,"attachmentType":"pdf","work_url":"https://www.academia.edu/77307285/Selection_Method_of_Correlating_Point_for_Integrating_Data_Produced_by_Laser_Scanning_and_SFM","alternativeTracking":true}"><span class="material-symbols-outlined" style="font-size: 18px" translate="no">download</span><span class="ds2-5-text-link__content">Download free PDF</span></button><a class="ds2-5-text-link ds2-5-text-link--inline js-related-work-grid-card-view-pdf" href="https://www.academia.edu/77307285/Selection_Method_of_Correlating_Point_for_Integrating_Data_Produced_by_Laser_Scanning_and_SFM"><span class="ds2-5-text-link__content">View PDF</span><span class="material-symbols-outlined" style="font-size: 18px" translate="no">chevron_right</span></a></div></div><div class="ds-related-work--container js-related-work-sidebar-card" data-collection-position="13" data-entity-id="117802012" data-sort-order="default"><a class="ds-related-work--title js-related-work-grid-card-title ds2-5-body-md ds2-5-body-link" href="https://www.academia.edu/117802012/MATLAB%E3%81%AB%E3%82%88%E3%82%8B%E6%98%A0%E5%83%8F%E5%87%A6%E7%90%86%E3%82%B7%E3%82%B9%E3%83%86%E3%83%A0%E9%96%8B%E7%99%BA">MATLABによる映像処理システム開発</a><div class="ds-related-work--metadata"><a class="js-related-work-grid-card-author ds2-5-body-sm ds2-5-body-link" data-author-id="189134939" href="https://independent.academia.edu/ShogoMuramatsu">Shogo Muramatsu</a></div><p class="ds-related-work--metadata ds2-5-body-xs">The Journal of The Institute of Image Information and Television Engineers, 2011</p><div class="ds-related-work--ctas"><button class="ds2-5-text-link ds2-5-text-link--inline 
js-swp-download-button" data-signup-modal="{"location":"wsj-grid-card-download-pdf-modal","work_title":"MATLABによる映像処理システム開発","attachmentId":113570976,"attachmentType":"pdf","work_url":"https://www.academia.edu/117802012/MATLAB%E3%81%AB%E3%82%88%E3%82%8B%E6%98%A0%E5%83%8F%E5%87%A6%E7%90%86%E3%82%B7%E3%82%B9%E3%83%86%E3%83%A0%E9%96%8B%E7%99%BA","alternativeTracking":true}"><span class="material-symbols-outlined" style="font-size: 18px" translate="no">download</span><span class="ds2-5-text-link__content">Download free PDF</span></button><a class="ds2-5-text-link ds2-5-text-link--inline js-related-work-grid-card-view-pdf" href="https://www.academia.edu/117802012/MATLAB%E3%81%AB%E3%82%88%E3%82%8B%E6%98%A0%E5%83%8F%E5%87%A6%E7%90%86%E3%82%B7%E3%82%B9%E3%83%86%E3%83%A0%E9%96%8B%E7%99%BA"><span class="ds2-5-text-link__content">View PDF</span><span class="material-symbols-outlined" style="font-size: 18px" translate="no">chevron_right</span></a></div></div><div class="ds-related-work--container js-related-work-sidebar-card" data-collection-position="14" data-entity-id="117779142" data-sort-order="default"><a class="ds-related-work--title js-related-work-grid-card-title ds2-5-body-md ds2-5-body-link" href="https://www.academia.edu/117779142/Three_Dimensional_Image_Handling_of_3_D_Objects_using_Ray_Space">Three-Dimensional Image. Handling of 3-D Objects using Ray Space</a><div class="ds-related-work--metadata"><a class="js-related-work-grid-card-author ds2-5-body-sm ds2-5-body-link" data-author-id="32442920" href="https://u-tokyo.academia.edu/TakeshiNaemura">Takeshi Naemura</a></div><p class="ds-related-work--metadata ds2-5-body-xs">The Journal of the Institute of Television Engineers of Japan, 1996</p><div class="ds-related-work--ctas"><button class="ds2-5-text-link ds2-5-text-link--inline js-swp-download-button" data-signup-modal="{"location":"wsj-grid-card-download-pdf-modal","work_title":"Three-Dimensional Image. 
Handling of 3-D Objects using Ray Space","attachmentId":113553007,"attachmentType":"pdf","work_url":"https://www.academia.edu/117779142/Three_Dimensional_Image_Handling_of_3_D_Objects_using_Ray_Space","alternativeTracking":true}"><span class="material-symbols-outlined" style="font-size: 18px" translate="no">download</span><span class="ds2-5-text-link__content">Download free PDF</span></button><a class="ds2-5-text-link ds2-5-text-link--inline js-related-work-grid-card-view-pdf" href="https://www.academia.edu/117779142/Three_Dimensional_Image_Handling_of_3_D_Objects_using_Ray_Space"><span class="ds2-5-text-link__content">View PDF</span><span class="material-symbols-outlined" style="font-size: 18px" translate="no">chevron_right</span></a></div></div><div class="ds-related-work--container js-related-work-sidebar-card" data-collection-position="15" data-entity-id="117977765" data-sort-order="default"><a class="ds-related-work--title js-related-work-grid-card-title ds2-5-body-md ds2-5-body-link" href="https://www.academia.edu/117977765/Identification_of_Conditions_with_High_Speed_and_Accuracy_of_Target_Prediction_Method_by_Switching_from_Ballistic_Eye_Movement_to_Homing_Eye_Movement">Identification of Conditions with High Speed and Accuracy of Target Prediction Method by Switching from Ballistic Eye Movement to Homing Eye Movement</a><div class="ds-related-work--metadata"><a class="js-related-work-grid-card-author ds2-5-body-sm ds2-5-body-link" data-author-id="54377698" href="https://independent.academia.edu/murataatsuo">atsuo murata</a></div><p class="ds-related-work--metadata ds2-5-body-xs">The Japanese Journal of Ergonomics, 2022</p><div class="ds-related-work--ctas"><button class="ds2-5-text-link ds2-5-text-link--inline js-swp-download-button" data-signup-modal="{"location":"wsj-grid-card-download-pdf-modal","work_title":"Identification of Conditions with High Speed and Accuracy of Target Prediction Method by Switching from Ballistic Eye Movement to Homing Eye 
Movement","attachmentId":113709684,"attachmentType":"pdf","work_url":"https://www.academia.edu/117977765/Identification_of_Conditions_with_High_Speed_and_Accuracy_of_Target_Prediction_Method_by_Switching_from_Ballistic_Eye_Movement_to_Homing_Eye_Movement","alternativeTracking":true}"><span class="material-symbols-outlined" style="font-size: 18px" translate="no">download</span><span class="ds2-5-text-link__content">Download free PDF</span></button><a class="ds2-5-text-link ds2-5-text-link--inline js-related-work-grid-card-view-pdf" href="https://www.academia.edu/117977765/Identification_of_Conditions_with_High_Speed_and_Accuracy_of_Target_Prediction_Method_by_Switching_from_Ballistic_Eye_Movement_to_Homing_Eye_Movement"><span class="ds2-5-text-link__content">View PDF</span><span class="material-symbols-outlined" style="font-size: 18px" translate="no">chevron_right</span></a></div></div><div class="ds-related-work--container js-related-work-sidebar-card" data-collection-position="16" data-entity-id="51454319" data-sort-order="default"><a class="ds-related-work--title js-related-work-grid-card-title ds2-5-body-md ds2-5-body-link" href="https://www.academia.edu/51454319/Fusional_characteristics_of_binocular_parallax_as_a_function_of_viewing_angle_and_viewing_distance_of_stereoscopic_picture">Fusional characteristics of binocular parallax as a function of viewing angle and viewing distance of stereoscopic picture</a><div class="ds-related-work--metadata"><a class="js-related-work-grid-card-author ds2-5-body-sm ds2-5-body-link" data-author-id="59368154" href="https://independent.academia.edu/ShojiroNagata">Shojiro Nagata</a></div><p class="ds-related-work--metadata ds2-5-body-xs">The Journal of the Institute of Television Engineers of Japan</p><div class="ds-related-work--ctas"><button class="ds2-5-text-link ds2-5-text-link--inline js-swp-download-button" data-signup-modal="{"location":"wsj-grid-card-download-pdf-modal","work_title":"Fusional characteristics of binocular 
parallax as a function of viewing angle and viewing distance of stereoscopic picture","attachmentId":69172913,"attachmentType":"pdf","work_url":"https://www.academia.edu/51454319/Fusional_characteristics_of_binocular_parallax_as_a_function_of_viewing_angle_and_viewing_distance_of_stereoscopic_picture","alternativeTracking":true}"><span class="material-symbols-outlined" style="font-size: 18px" translate="no">download</span><span class="ds2-5-text-link__content">Download free PDF</span></button><a class="ds2-5-text-link ds2-5-text-link--inline js-related-work-grid-card-view-pdf" href="https://www.academia.edu/51454319/Fusional_characteristics_of_binocular_parallax_as_a_function_of_viewing_angle_and_viewing_distance_of_stereoscopic_picture"><span class="ds2-5-text-link__content">View PDF</span><span class="material-symbols-outlined" style="font-size: 18px" translate="no">chevron_right</span></a></div></div><div class="ds-related-work--container js-related-work-sidebar-card" data-collection-position="17" data-entity-id="117778919" data-sort-order="default"><a class="ds-related-work--title js-related-work-grid-card-title ds2-5-body-md ds2-5-body-link" href="https://www.academia.edu/117778919/Development_of_Integral_3D_Display_System_Using_Eye_tracking_Technology">Development of Integral 3D Display System Using Eye-tracking Technology</a><div class="ds-related-work--metadata"><a class="js-related-work-grid-card-author ds2-5-body-sm ds2-5-body-link" data-author-id="32442920" href="https://u-tokyo.academia.edu/TakeshiNaemura">Takeshi Naemura</a></div><p class="ds-related-work--metadata ds2-5-body-xs">Eizō Jōhō Media Gakkaishi, 2021</p><div class="ds-related-work--ctas"><button class="ds2-5-text-link ds2-5-text-link--inline js-swp-download-button" data-signup-modal="{"location":"wsj-grid-card-download-pdf-modal","work_title":"Development of Integral 3D Display System Using Eye-tracking 
Technology","attachmentId":113552911,"attachmentType":"pdf","work_url":"https://www.academia.edu/117778919/Development_of_Integral_3D_Display_System_Using_Eye_tracking_Technology","alternativeTracking":true}"><span class="material-symbols-outlined" style="font-size: 18px" translate="no">download</span><span class="ds2-5-text-link__content">Download free PDF</span></button><a class="ds2-5-text-link ds2-5-text-link--inline js-related-work-grid-card-view-pdf" href="https://www.academia.edu/117778919/Development_of_Integral_3D_Display_System_Using_Eye_tracking_Technology"><span class="ds2-5-text-link__content">View PDF</span><span class="material-symbols-outlined" style="font-size: 18px" translate="no">chevron_right</span></a></div></div><div class="ds-related-work--container js-related-work-sidebar-card" data-collection-position="18" data-entity-id="100464010" data-sort-order="default"><a class="ds-related-work--title js-related-work-grid-card-title ds2-5-body-md ds2-5-body-link" href="https://www.academia.edu/100464010/Comparison_of_GP_and_SAP_in_the_image_processing_filter_construction_using_pathology_images">Comparison of GP and SAP in the image-processing filter construction using pathology images</a><div class="ds-related-work--metadata"><a class="js-related-work-grid-card-author ds2-5-body-sm ds2-5-body-link" data-author-id="34038591" href="https://independent.academia.edu/MFukumoto">Manabu Fukumoto</a></div><p class="ds-related-work--metadata ds2-5-body-xs">2010 3rd International Congress on Image and Signal Processing, 2010</p><div class="ds-related-work--ctas"><button class="ds2-5-text-link ds2-5-text-link--inline js-swp-download-button" data-signup-modal="{"location":"wsj-grid-card-download-pdf-modal","work_title":"Comparison of GP and SAP in the image-processing filter construction using pathology 
images","attachmentId":101281321,"attachmentType":"pdf","work_url":"https://www.academia.edu/100464010/Comparison_of_GP_and_SAP_in_the_image_processing_filter_construction_using_pathology_images","alternativeTracking":true}"><span class="material-symbols-outlined" style="font-size: 18px" translate="no">download</span><span class="ds2-5-text-link__content">Download free PDF</span></button><a class="ds2-5-text-link ds2-5-text-link--inline js-related-work-grid-card-view-pdf" href="https://www.academia.edu/100464010/Comparison_of_GP_and_SAP_in_the_image_processing_filter_construction_using_pathology_images"><span class="ds2-5-text-link__content">View PDF</span><span class="material-symbols-outlined" style="font-size: 18px" translate="no">chevron_right</span></a></div></div><div class="ds-related-work--container js-related-work-sidebar-card" data-collection-position="19" data-entity-id="117632100" data-sort-order="default"><a class="ds-related-work--title js-related-work-grid-card-title ds2-5-body-md ds2-5-body-link" href="https://www.academia.edu/117632100/The_New_Method_Analysing_Composition_of_Landscape_Pictures_Taking_Tourism_Resources">The New Method Analysing Composition of Landscape Pictures Taking Tourism Resources</a><div class="ds-related-work--metadata"><a class="js-related-work-grid-card-author ds2-5-body-sm ds2-5-body-link" data-author-id="230230029" href="https://independent.academia.edu/MohamedAbubakar26">Mohamed Abubakar</a></div><p class="ds-related-work--metadata ds2-5-body-xs">Journal of Architecture and Planning (Transactions of AIJ), 2003</p><div class="ds-related-work--ctas"><button class="ds2-5-text-link ds2-5-text-link--inline js-swp-download-button" data-signup-modal="{"location":"wsj-grid-card-download-pdf-modal","work_title":"The New Method Analysing Composition of Landscape Pictures Taking Tourism 
Resources","attachmentId":113437917,"attachmentType":"pdf","work_url":"https://www.academia.edu/117632100/The_New_Method_Analysing_Composition_of_Landscape_Pictures_Taking_Tourism_Resources","alternativeTracking":true}"><span class="material-symbols-outlined" style="font-size: 18px" translate="no">download</span><span class="ds2-5-text-link__content">Download free PDF</span></button><a class="ds2-5-text-link ds2-5-text-link--inline js-related-work-grid-card-view-pdf" href="https://www.academia.edu/117632100/The_New_Method_Analysing_Composition_of_Landscape_Pictures_Taking_Tourism_Resources"><span class="ds2-5-text-link__content">View PDF</span><span class="material-symbols-outlined" style="font-size: 18px" translate="no">chevron_right</span></a></div></div><div class="ds-related-work--container js-related-work-sidebar-card" data-collection-position="20" data-entity-id="54090845" data-sort-order="default"><a class="ds-related-work--title js-related-work-grid-card-title ds2-5-body-md ds2-5-body-link" href="https://www.academia.edu/54090845/A_Study_on_Kansei_Reaction_in_Visual_Communication_Using_the_Factors_of_Motion_Graphics">A Study on Kansei Reaction in Visual Communication Using the Factors of Motion Graphics</a><div class="ds-related-work--metadata"><a class="js-related-work-grid-card-author ds2-5-body-sm ds2-5-body-link" data-author-id="12077098" href="https://tsukuba.academia.edu/ToshimasaYamanaka">Toshimasa Yamanaka</a></div><p class="ds-related-work--metadata ds2-5-body-xs">Transactions of Japan Society of Kansei Engineering</p><div class="ds-related-work--ctas"><button class="ds2-5-text-link ds2-5-text-link--inline js-swp-download-button" data-signup-modal="{"location":"wsj-grid-card-download-pdf-modal","work_title":"A Study on Kansei Reaction in Visual Communication Using the Factors of Motion 
Graphics","attachmentId":70623001,"attachmentType":"pdf","work_url":"https://www.academia.edu/54090845/A_Study_on_Kansei_Reaction_in_Visual_Communication_Using_the_Factors_of_Motion_Graphics","alternativeTracking":true}"><span class="material-symbols-outlined" style="font-size: 18px" translate="no">download</span><span class="ds2-5-text-link__content">Download free PDF</span></button><a class="ds2-5-text-link ds2-5-text-link--inline js-related-work-grid-card-view-pdf" href="https://www.academia.edu/54090845/A_Study_on_Kansei_Reaction_in_Visual_Communication_Using_the_Factors_of_Motion_Graphics"><span class="ds2-5-text-link__content">View PDF</span><span class="material-symbols-outlined" style="font-size: 18px" translate="no">chevron_right</span></a></div></div></div><div class="ds-related-content--container"><h2 class="ds-related-content--heading">Related topics</h2><div class="ds-research-interests--pills-container"><a class="js-related-research-interest ds-research-interests--pill" data-entity-id="422" href="https://www.academia.edu/Documents/in/Computer_Science">Computer Science</a><a class="js-related-research-interest ds-research-interests--pill" data-entity-id="465" href="https://www.academia.edu/Documents/in/Artificial_Intelligence">Artificial Intelligence</a><a class="js-related-research-interest ds-research-interests--pill" data-entity-id="854" href="https://www.academia.edu/Documents/in/Computer_Vision">Computer Vision</a><a class="js-related-research-interest ds-research-interests--pill" data-entity-id="2612" href="https://www.academia.edu/Documents/in/Rendering_Computer_Graphics_">Rendering (Computer Graphics)</a><a class="js-related-research-interest ds-research-interests--pill" data-entity-id="30193" href="https://www.academia.edu/Documents/in/Image_Analysis_Mathematics_">Image Analysis (Mathematics)</a><a class="js-related-research-interest ds-research-interests--pill" data-entity-id="666431" 
href="https://www.academia.edu/Documents/in/Image_Synthesis">Image Synthesis</a><a class="js-related-research-interest ds-research-interests--pill" data-entity-id="2740863" href="https://www.academia.edu/Documents/in/Limit_Mathematics_">Limit (Mathematics)</a></div></div></div></div></div><div class="footer--content"><ul class="footer--main-links hide-on-mobile"><li><a href="https://www.academia.edu/about">About</a></li><li><a href="https://www.academia.edu/press">Press</a></li><li><a rel="nofollow" href="https://medium.com/academia">Blog</a></li><li><a href="https://www.academia.edu/documents">Papers</a></li><li><a href="https://www.academia.edu/topics">Topics</a></li><li><a href="https://www.academia.edu/hiring"><svg style="width: 13px; height: 13px; position: relative; bottom: -1px;" aria-hidden="true" focusable="false" data-prefix="fas" data-icon="briefcase" class="svg-inline--fa fa-briefcase fa-w-16" role="img" xmlns="http://www.w3.org/2000/svg" viewBox="0 0 512 512"><path fill="currentColor" d="M320 336c0 8.84-7.16 16-16 16h-96c-8.84 0-16-7.16-16-16v-48H0v144c0 25.6 22.4 48 48 48h416c25.6 0 48-22.4 48-48V288H320v48zm144-208h-80V80c0-25.6-22.4-48-48-48H176c-25.6 0-48 22.4-48 48v48H48c-25.6 0-48 22.4-48 48v80h512v-80c0-25.6-22.4-48-48-48zm-144 0H192V96h128v32z"></path></svg> <strong>We're Hiring!</strong></a></li><li><a href="https://support.academia.edu/"><svg style="width: 12px; height: 12px; position: relative; bottom: -1px;" aria-hidden="true" focusable="false" data-prefix="fas" data-icon="question-circle" class="svg-inline--fa fa-question-circle fa-w-16" role="img" xmlns="http://www.w3.org/2000/svg" viewBox="0 0 512 512"><path fill="currentColor" d="M504 256c0 136.997-111.043 248-248 248S8 392.997 8 256C8 119.083 119.043 8 256 8s248 111.083 248 248zM262.655 90c-54.497 0-89.255 22.957-116.549 63.758-3.536 5.286-2.353 12.415 2.715 16.258l34.699 26.31c5.205 3.947 12.621 3.008 16.665-2.122 17.864-22.658 30.113-35.797 57.303-35.797 20.429 0 45.698 13.148 45.698 
32.958 0 14.976-12.363 22.667-32.534 33.976C247.128 238.528 216 254.941 216 296v4c0 6.627 5.373 12 12 12h56c6.627 0 12-5.373 12-12v-1.333c0-28.462 83.186-29.647 83.186-106.667 0-58.002-60.165-102-116.531-102zM256 338c-25.365 0-46 20.635-46 46 0 25.364 20.635 46 46 46s46-20.636 46-46c0-25.365-20.635-46-46-46z"></path></svg> <strong>Help Center</strong></a></li></ul><ul class="footer--research-interests"><li>Find new research papers in:</li><li><a href="https://www.academia.edu/Documents/in/Physics">Physics</a></li><li><a href="https://www.academia.edu/Documents/in/Chemistry">Chemistry</a></li><li><a href="https://www.academia.edu/Documents/in/Biology">Biology</a></li><li><a href="https://www.academia.edu/Documents/in/Health_Sciences">Health Sciences</a></li><li><a href="https://www.academia.edu/Documents/in/Ecology">Ecology</a></li><li><a href="https://www.academia.edu/Documents/in/Earth_Sciences">Earth Sciences</a></li><li><a href="https://www.academia.edu/Documents/in/Cognitive_Science">Cognitive Science</a></li><li><a href="https://www.academia.edu/Documents/in/Mathematics">Mathematics</a></li><li><a href="https://www.academia.edu/Documents/in/Computer_Science">Computer Science</a></li></ul><ul class="footer--legal-links hide-on-mobile"><li><a href="https://www.academia.edu/terms">Terms</a></li><li><a href="https://www.academia.edu/privacy">Privacy</a></li><li><a href="https://www.academia.edu/copyright">Copyright</a></li><li>Academia ©2024</li></ul></div> </body> </html>