Synthetic optical coherence tomography angiographs for detailed retinal vessel segmentation without human annotations - NASA/ADS
class="s-nav-header s-view-nav"> <i class="icon-list"></i> <h3>view </h3> </div> <a href="/abs/2023arXiv230610941K/abstract" data-widget-id="ShowAbstract"> <div class="abstract-nav s-nav s-nav-selected"> <span class="s-content"> Abstract </span> </div> </a> </a> <a href="/abs/2023arXiv230610941K/citations" aria-disabled="true" data-widget-id="ShowCitations"> <div class="abstract-nav s-nav "> <span class="s-content"> Citations <span class="num-items">(1)</span> </span> </div> </a> <a href="/abs/2023arXiv230610941K/references" aria-disabled="true" data-widget-id="ShowReferences"> <div class="abstract-nav s-nav "> <span class="s-content"> References <span class="num-items">(7)</span> </span> </div> </a> <a href="/abs/2023arXiv230610941K/coreads" aria-disabled="true" aria-disabled="true" data-widget-id="ShowCoreads"> <div class="abstract-nav s-nav "> <span class="s-content"> Co-Reads </span> </div> </a> <a href="/abs/2023arXiv230610941K/similar" aria-disabled="true" data-widget-id="ShowSimilar"> <div class="abstract-nav s-nav "> <span class="s-content"> Similar Papers </span> </div> </a> <div aria-disabled="true" data-widget-id="ShowToc"> <div class="abstract-nav s-nav s-nav-inactive"> <span class="s-content"> Volume Content </span> </div> </div> <div href="#" data-widget-id="ShowGraphics"> <div class="abstract-nav s-nav s-nav-inactive"> <span class="s-content"> Graphics </span> </div> </div> <a href="/abs/2023arXiv230610941K/metrics" data-widget-id="ShowMetrics"> <div class="abstract-nav s-nav"> <span class="s-content"> Metrics </span> </div> <a href="/abs/2023arXiv230610941K/exportcitation" data-widget-id="ShowExportcitation__default"> <div class="abstract-nav s-nav "> <span class="content"> Export Citation </span> </div> </a> </nav> </div> </div> <div class="col-xs-12 col-sm-8 col-md-7 col-lg-7 s-middle-column" id="middle-column" style="padding-bottom: 0%"> <!--id is for screen readers--> <div class="main-content-container s-main-content-container" id="main-content" tabindex="-1" style="margin-bottom: 5px"> <div class="print-visible"> <h2 style="margin-left:6.1%;">NASA/ADS</h2> </div> <div id="abstract-title-container" class="s-abstract-title-container"> <div data-widget="ShowAbstract"> <article class="s-abstract-metadata"> <!--<div id="article-navigation">|</div>--> <h2 class="s-abstract-title"> Synthetic optical coherence tomography angiographs for detailed retinal vessel segmentation without human annotations <a href=""></a> </h2> <div id="authors-and-aff" class="s-authors-and-aff"> <ul class="list-inline"> <li class="author"><a href="/search/?q=author%3A%22Kreitner%2C+Linus%22">Kreitner, Linus</a> </li>; <li class="author"><a href="/search/?q=author%3A%22Paetzold%2C+Johannes+C.%22">Paetzold, Johannes C.</a> </li>; <li class="author"><a href="/search/?q=author%3A%22Rauch%2C+Nikolaus%22">Rauch, Nikolaus</a> </li>; <li class="author"><a href="/search/?q=author%3A%22Chen%2C+Chen%22">Chen, Chen</a> </li>; <li class="author"><a href="/search/?q=author%3A%22Hagag%2C+Ahmed+M.%22">Hagag, Ahmed M.</a> </li>; <li class="author"><a href="/search/?q=author%3A%22Fayed%2C+Alaa+E.%22">Fayed, Alaa E.</a> </li>; <li class="author"><a href="/search/?q=author%3A%22Sivaprasad%2C+Sobha%22">Sivaprasad, Sobha</a> </li>; <li class="author"><a href="/search/?q=author%3A%22Rausch%2C+Sebastian%22">Rausch, Sebastian</a> </li>; <li class="author"><a href="/search/?q=author%3A%22Weichsel%2C+Julian%22">Weichsel, Julian</a> </li>; <li class="author"><a href="/search/?q=author%3A%22Menze%2C+Bjoern+H.%22">Menze, Bjoern 
H.</a> </li>; <li class="author"><a href="/search/?q=author%3A%22Harders%2C+Matthias%22">Harders, Matthias</a> </li>; <li class="author"><a href="/search/?q=author%3A%22Knier%2C+Benjamin%22">Knier, Benjamin</a> </li>; <li class="author"><a href="/search/?q=author%3A%22Rueckert%2C+Daniel%22">Rueckert, Daniel</a> </li>; <li class="author"><a href="/search/?q=author%3A%22Menten%2C+Martin+J.%22">Menten, Martin J.</a> </li> </ul> </div> <div class="s-abstract-text"> <h4 class="sr-only">Abstract</h4> <p> Optical coherence tomography angiography (OCTA) is a non-invasive imaging modality that can acquire high-resolution volumes of the retinal vasculature and aid the diagnosis of ocular, neurological and cardiac diseases. Segmenting the visible blood vessels is a common first step when extracting quantitative biomarkers from these images. Classical segmentation algorithms based on thresholding are strongly affected by image artifacts and limited signal-to-noise ratio. The use of modern, deep learning-based segmentation methods has been inhibited by a lack of large datasets with detailed annotations of the blood vessels. To address this issue, recent work has employed transfer learning, where a segmentation network is trained on synthetic OCTA images and is then applied to real data. However, the previously proposed simulations fail to faithfully model the retinal vasculature and do not provide effective domain adaptation. Because of this, current methods are unable to fully segment the retinal vasculature, in particular the smallest capillaries. In this work, we present a lightweight simulation of the retinal vascular network based on space colonization for faster and more realistic OCTA synthesis. We then introduce three contrast adaptation pipelines to decrease the domain gap between real and artificial images. We demonstrate the superior segmentation performance of our approach in extensive quantitative and qualitative experiments on three public datasets that compare our method to traditional computer vision algorithms and supervised training using human annotations. Finally, we make our entire pipeline publicly available, including the source code, pretrained models, and a large dataset of synthetic OCTA images. 
Publication: arXiv e-prints
Pub Date: June 2023
DOI: 10.48550/arXiv.2306.10941
arXiv: arXiv:2306.10941
Bibcode: 2023arXiv230610941K
Keywords: Electrical Engineering and Systems Science - Image and Video Processing; Computer Science - Computer Vision and Pattern Recognition
E-Print: Currently under review
Full Text Sources: arXiv (PDF, HTML)
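The abstract's central algorithmic ingredient is space colonization, a procedural growth method in which scattered attraction points pull a branching tree toward unclaimed space. As a rough illustration of that general idea only, here is a minimal 2D NumPy sketch; every name and parameter in it (STEP, INFLUENCE, KILL, the unit-square domain) is an invented assumption, not the authors' implementation, which is available in their released source code.

```python
# Hypothetical sketch of a space-colonization growth loop (2D, NumPy).
# Not the paper's implementation; parameters and domain are made up.
import numpy as np

rng = np.random.default_rng(0)

# Attraction points stand in for tissue regions "requesting" perfusion.
attractors = rng.uniform(0.0, 1.0, size=(400, 2))
nodes = [np.array([0.5, 0.0])]   # vessel tree, seeded at a root node
parents = [0]                    # parent index per node (root points to itself)

STEP = 0.02        # segment length grown per iteration
INFLUENCE = 0.15   # radius within which attractors influence a node
KILL = 0.03        # attractors closer than this to a vessel are removed

for _ in range(300):
    if len(attractors) == 0:
        break
    pts = np.asarray(nodes)
    # Distance from every attractor to every vessel node.
    dists = np.linalg.norm(attractors[:, None, :] - pts[None, :, :], axis=2)
    nearest = dists.argmin(axis=1)
    nearest_dist = dists.min(axis=1)

    # Each influenced node grows one step toward the mean direction
    # of the attractors that chose it as their nearest node.
    new_nodes = []
    for i in range(len(nodes)):
        mask = (nearest == i) & (nearest_dist < INFLUENCE)
        if not mask.any():
            continue
        direction = (attractors[mask] - nodes[i]).mean(axis=0)
        norm = np.linalg.norm(direction)
        if norm < 1e-9:
            continue
        new_nodes.append((nodes[i] + STEP * direction / norm, i))
    if not new_nodes:
        break  # no attractor is within reach of any node
    for pos, parent in new_nodes:
        nodes.append(pos)
        parents.append(parent)

    # Drop attractors that a vessel segment has reached.
    pts = np.asarray(nodes)
    dists = np.linalg.norm(attractors[:, None, :] - pts[None, :, :], axis=2)
    attractors = attractors[dists.min(axis=1) > KILL]

print(f"grew {len(nodes)} nodes; {len(attractors)} attractors remain")
```

Branching emerges naturally: when separate clusters of attractors pick different newly grown nodes as their nearest, the tree splits toward both. Per the abstract, the paper's pipeline grows such a vascular network, renders it into synthetic OCTA images whose vessel labels are known by construction, and uses those image-label pairs to train a segmentation network without human annotations; the contrast adaptation steps that narrow the domain gap to real scans are separate and not shown here.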
src="/styles/img/cfa.png" title="Harvard Center for Astrophysics logo" id="cfa-logo"> </a> </div> </div> <div class="__footer_list"> <div class="__footer_list_title"> Resources </div> <ul class="__footer_links"> <li> <a href="/about/" target="_blank" rel="noopener"> <i class="fa fa-question-circle"></i> About ADS </a> </li> <li> <a href="//ui.adsabs.harvard.edu/help/" target="_blank" rel="noopener"> <i class="fa fa-info-circle"></i> ADS Help </a> </li> <li> <a href="//ui.adsabs.harvard.edu/help/whats_new/" target="_blank" rel="noopener"> <i class="fa fa-bullhorn"></i> What's New </a> </li> <li> <a href="/about/careers/" target="_blank" rel="noopener"> <i class="fa fa-group"></i> Careers@ADS </a> </li> </ul> </div> <div class="__footer_list"> <div class="__footer_list_title"> Social </div> <ul class="__footer_links"> <li> <a href="//twitter.com/adsabs" target="_blank" rel="noopener"> <i class="fa fa-twitter"></i> @adsabs </a> </li> <li> <a href="//ui.adsabs.harvard.edu/blog/" target="_blank" rel="noopener"> <i class="fa fa-newspaper-o"></i> ADS Blog </a> </li> </ul> </div> <div class="__footer_list"> <div class="__footer_list_title"> Project </div> <ul class="__footer_links"> <li> <a href="/core/never">Switch to full ADS</a> </li> <li> <a href="https://adsisdownorjustme.herokuapp.com/" target="_blank" rel="noopener">Is ADS down? (or is it just me...)</a> </li> <li> <a href="http://www.si.edu" target="_blank" rel="noopener">Smithsonian Institution</a> </li> <li> <a href="http://www.si.edu/Privacy" target="_blank" rel="noopener">Smithsonian Privacy Notice</a> </li> <li> <a href="http://www.si.edu/Termsofuse" target="_blank" rel="noopener">Smithsonian Terms of Use</a> </li> <li> <a href="http://www.cfa.harvard.edu/sao" target="_blank" rel="noopener">Smithsonian Astrophysical Observatory</a> </li> <li> <a href="http://www.nasa.gov" target="_blank" rel="noopener">NASA</a> </li> </ul> </div> </div> </footer> </div> </div> </div> </div> </div> </div> <div id="darkSwitch" class="darkmode-toggle hidden" title="Turn on dark mode">馃寭</div> <script> function autocomplete(searchBox, autoValues) { // Arguments: the text field element and an array of possible autocompleted values var currentFocus; // selected autocomplete option // Function to be run when the user types searchBox.addEventListener("input", function(e) { var a, b, i, val = this.value; // close any list of autocomplete values closeAllLists(); if (!val) { return false;} val = val.split(/\s+/); val = val[val.length - 1]; if (!val) { return false;} currentFocus = -1; // Create a DIV element that will contain the items (values): a = document.createElement("DIV"); a.setAttribute("id", this.id + "autocomplete-list"); a.setAttribute("class", "autocomplete-items"); // Append the DIV element as a child of the autocomplete container: this.parentNode.appendChild(a); for (i = 0; i < autoValues.length; i++) { // Check if the item starts with the same letters as the text field value: if (autoValues[i].match.substr(0, val.length).toUpperCase() == val.toUpperCase()) { // Create a DIV element for each matching element: b = document.createElement("DIV"); b.innerHTML = autoValues[i].label; if ("desc" in autoValues[i]) { b.innerHTML += " <i>" + autoValues[i].desc + "</i>"; } if (autoValues[i].value.startsWith(autoValues[i].match) ) { b.innerHTML += " | <strong>" + autoValues[i].match.substr(0, val.length) + "</strong>"; b.innerHTML += autoValues[i].match.substr(val.length); } // Insert a input field that will hold the current array item's value: b.innerHTML += 
"<input type='hidden' value='" + autoValues[i].value + "'>"; // Listen to clicks on the item value (DIV element): b.addEventListener("click", function(e) { var terms = searchBox.value.split(/\s+/); // Remove the current part of the input used for matching terms.pop(); // Insert the value for the autocomplete text field: terms.push(this.getElementsByTagName("input")[0].value); searchBox.value = terms.join(" "); // Move cursor position inside quotes/parenthesis if needed searchBox.focus(); if (searchBox.value[searchBox.value.length-1] === '"' || searchBox.value[searchBox.value.length-1] === ')') { searchBox.setSelectionRange(searchBox.value.length-1, searchBox.value.length-1); } // Close the list of autocompleted values closeAllLists(); }); a.appendChild(b); } } if (a.children.length > 0) { // By default, enter will select the first entry currentFocus = 0; addActive(a.children); } }); /*execute a function presses a key on the keyboard:*/ searchBox.addEventListener("keydown", function(e) { var x = document.getElementById(this.id + "autocomplete-list"); if (x) x = x.getElementsByTagName("div"); if (e.keyCode == 40) { // If the arrow DOWN key is pressed, increase the currentFocus variable: currentFocus++; addActive(x); } else if (e.keyCode == 38) { //up // If the arrow UP key is pressed, decrease the currentFocus variable: currentFocus--; /*and and make the current item more visible:*/ addActive(x); } else if (e.keyCode == 13) { // If the ENTER key is pressed: if (currentFocus > -1) { // Prevent the form from being submitted: e.preventDefault(); // Simulate a click on the "active" item: if (x) x[currentFocus].click(); currentFocus = -1; } } }); function addActive(x) { // Classify an item as "active": if (!x) return false; // Remove the "active" class on all items: removeActive(x); if (currentFocus >= x.length) currentFocus = 0; if (currentFocus < 0) currentFocus = (x.length - 1); // Add class "autocomplete-active": x[currentFocus].classList.add("autocomplete-active"); } function removeActive(x) { // Remove the "active" class from all autocomplete items: for (var i = 0; i < x.length; i++) { x[i].classList.remove("autocomplete-active"); } } function closeAllLists(elmnt) { // Close all autocomplete lists in the document, except the one passed as an argument: var x = document.getElementsByClassName("autocomplete-items"); for (var i = 0; i < x.length; i++) { if (elmnt != x[i] && elmnt != searchBox) { x[i].parentNode.removeChild(x[i]); } } } // Any other clicks in the document: document.addEventListener("click", function (e) { closeAllLists(e.target); }); } var autoList = [ { value: 'author:""', label: 'Author', match: 'author:"' }, { value: 'author:"^"', label: 'First Author', match: 'first author' }, { value: 'author:"^"', label: 'First Author', match: 'author:"^' }, { value: 'bibcode:""', label: 'Bibcode', desc: 'e.g. bibcode:1989ApJ...342L..71R', match: 'bibcode:"' }, { value: 'bibstem:""', label: 'Publication', desc: 'e.g. bibstem:ApJ', match: 'bibstem:"' }, { value: 'bibstem:""', label: 'Publication', desc: 'e.g. 
bibstem:ApJ', match: 'publication (bibstem)' }, { value: 'arXiv:', label: 'arXiv ID', match: 'arxiv:' }, { value: 'doi:', label: 'DOI', match: 'doi:' }, { value: 'full:""', label: 'Full text search', desc: 'title, abstract, and body', match: 'full:' }, { value: 'full:""', label: 'Full text search', desc: 'title, abstract, and body', match: 'fulltext' }, { value: 'full:""', label: 'Full text search', desc: 'title, abstract, and body', match: 'text' }, { value: 'year:', label: 'Year', match: 'year' }, { value: 'year:1999-2005', label: 'Year Range', desc: 'e.g. 1999-2005', match: 'year range' }, { value: 'aff:""', label: 'Affiliation', match: 'aff:' }, { value: 'abs:""', label: 'Search abstract + title + keywords', match: 'abs:' }, { value: 'database:astronomy', label: 'Limit to papers in the astronomy database', match: 'database:astronomy' }, { value: 'database:physics', label: 'Limit to papers in the physics database', match: 'database:physics' }, { value: 'title:""', label: 'Title', match: 'title:"' }, { value: 'orcid:', label: 'ORCiD identifier', match: 'orcid:' }, { value: 'object:', label: 'SIMBAD object (e.g. object:LMC)', match: 'object:' }, { value: 'property:refereed', label: 'Limit to refereed', desc: '(property:refereed)', match: 'refereed' }, { value: 'property:refereed', label: 'Limit to refereed', desc: '(property:refereed)', match: 'property:refereed' }, { value: 'property:notrefereed', label: 'Limit to non-refereed', desc: '(property:notrefereed)', match: 'property:notrefereed' }, { value: 'property:notrefereed', label: 'Limit to non-refereed', desc: '(property:notrefereed)', match: 'notrefereed' }, { value: 'property:eprint', label: 'Limit to eprints', desc: '(property:eprint)', match: 'eprint' }, { value: 'property:eprint', label: 'Limit to eprints', desc: '(property:eprint)', match: 'property:eprint' }, { value: 'property:openaccess', label: 'Limit to open access', desc: '(property:openaccess)', match: 'property:openaccess' }, { value: 'property:openaccess', label: 'Limit to open access', desc: '(property:openaccess)', match: 'openaccess' }, { value: 'doctype:software', label: 'Limit to software', desc: '(doctype:software)', match: 'software' }, { value: 'doctype:software', label: 'Limit to software', desc: '(doctype:software)', match: 'doctype:software' }, { value: 'property:inproceedings', label: 'Limit to papers in conference proceedings', desc: '(property:inproceedings)', match: 'proceedings' }, { value: 'property:inproceedings', label: 'Limit to papers in conference proceedings', desc: '(property:inproceedings)', match: 'property:inproceedings' }, { value: 'citations()', label: 'Citations', desc: 'Get papers citing your search result set', match: 'citations(' }, { value: 'references()', label: 'References', desc: 'Get papers referenced by your search result set', match: 'references(' }, { value: 'trending()', label: 'Trending', desc: 'Get papers most read by users who recently read your search result set', match: 'trending(' }, { value: 'reviews()', label: 'Review Articles', desc: 'Get most relevant papers that cite your search result set', match: 'reviews(' }, { value: 'useful()', label: 'Useful', desc: 'Get papers most frequently cited by your search result set', match: 'useful(' }, { value: 'similar()', label: 'Similar', desc: 'Get papers that have similar full text to your search result set', match: 'similar(' }, ]; // initiate the autocomplete function on the "q" element, and pass along the operators array as possible autocomplete values: inputBox = 
document.getElementById("q") if (inputBox) { inputBox.focus() // autofucs inputBox.setSelectionRange(inputBox.value.length, inputBox.value.length); // bring cursor to the end autocomplete(inputBox, autoList); } </script> <script> (function() { // turn off no-js if we have javascript document.documentElement.className = document.documentElement.className.replace("no-js", "js"); function getCookie(cname) { var name = cname + "="; var decodedCookie = decodeURIComponent(document.cookie); var ca = decodedCookie.split(';'); for (var i = 0; i < ca.length; i++) { var c = ca[i]; while (c.charAt(0) == ' ') { c = c.substring(1); } if (c.indexOf(name) == 0) { return c.substring(name.length, c.length); } } return ""; } (function() { // looks for the cookie, and sets true if its 'always' const coreCookie = getCookie('core') === 'always'; // only load bumblebee if we detect the core cookie and we are on abstract page if (coreCookie || (!(/^\/abs\//.test(document.location.pathname)) && !coreCookie)) { return; } window.__PRERENDERED = true; const addScript = function(args, cb) { const script = document.createElement('script'); Object.keys(args).forEach((key) => { script.setAttribute(key, args[key]); }); script.onload = function() { cb && cb(script); }; document.body.appendChild(script); } window.require = { waitSeconds: 0, baseUrl: '/' }; addScript({ src: '/libs/require.js' }, () => { addScript({ src: '/config/shim.js' }); }); })(); })(); </script> </body> </html>