NIH Software Discovery Index Meeting Report

This website was created to complement the Software Discovery workshop held in May 2014, which explored the challenges and opportunities associated with citing, tracking, and sharing biomedical software. When the domain expired, all of this information became inaccessible except through its archived pages. When I discovered the domain had become available, I immediately bought it with the intent of restoring as much of its original content as possible from its 2014 archived pages. Unfortunately, not all of the content was available, but I was at least able to post the Software Discovery Index Meeting Report along with some comments from interested people.

My background is in science, and I worked in the field for a number of years doing research on green technologies. I now work at the consumer end of green technology, for a company that sells wholesale and retail janitorial supplies. I am pleased that they carry a strong selection of eco-friendly, "green" wholesale paper products, including a line of national-brand bulk paper towels. Take, for instance, C-fold paper towels, the type found in the dispensers common in public bathrooms. The Boardwalk Green Seal C-fold towels are Green Seal certified and made from 100% recycled materials with 43% post-consumer waste content. Some of their toilet papers are processed chlorine-free, containing 10% post-consumer and 100% total recovered material. They also carry a number of eco-friendly cleaning products. I'm glad to see inroads into the green market at such a large online site, but there is still much more that needs to be done.
INTRODUCTION

The National Institutes of Health (NIH), through the Big Data to Knowledge (BD2K) initiative (datascience.nih.gov/bd2k), held a workshop in May 2014 to explore challenges facing the biomedical research community in locating, citing, and reusing biomedical software. The workshop participants examined these issues and prepared this report summarizing their findings.

The constituents with the potential to benefit from improved software discoverability include software users, developers, journal publishers, and funders. Software developers face challenges disseminating their software and measuring its adoption. Software users have difficulty identifying the most appropriate software for their work. Journal publishers lack a consistent way to handle software citations or to ensure reproducibility of published findings. Funding agencies struggle to make informed decisions about which software projects to support, while reviewers have a hard time judging the relevance and effectiveness of proposed software in the context of data management plans and proposed analyses.

This document summarizes recommendations generated at the NIH Software Discovery Meeting held in May 2014. We are now requesting comments from the larger community. We have contacted a broad set of constituents who represent software users, software developers, NIH staff, electronic repositories, and journal publishers.

Though numerous changes are needed to address all these gaps, the workshop identified one fundamental prerequisite for success: an automated, broadly accessible system enabling comprehensive identification of biomedical software.

The objectives of this "Software Discovery Index" would be:

1. to assign standard and unambiguous identifiers to reference all software,
2. to track specific metadata features that describe that software, and
3. to enable robust querying of all relevant information for users.

If broadly used, this Software Discovery Index will form a cornerstone of a software ecosystem that benefits software developers, software users, journal publishers, and funding agencies.

The workshop attendees agreed that the technical resources exist to create both this ecosystem and the tools needed to leverage it. The success of such efforts, however, depends on their acceptance by the scientific community: software developers must obtain identifiers for their software; users must cite software in their publications; journals must leverage and expose these citations; and funding agencies should use this new wealth of information, properly and judiciously, to shape funding decisions and long-term planning. Only when each constituency sees the benefits of engaging in this effort can significant progress be made.

The ultimate goal of this effort is to ensure that all publicly funded biomedical software is highly accessible to the research community. Making software easier to find, easier to cite, and easier to reuse are all necessary steps. It is also critical, however, to support the continued development and availability of software tools. Without access to both the tools and the scientific literature describing their use, the research community will not be able to select and use the best tools. Without tools maintained in common, open-access repositories such as GitHub and SourceForge, improvements to existing tools will be hampered. In all these areas, better support for software can help maximize the impact of NIH's investment in biomedical research.
A. FRAMEWORK SUPPORTING THE SOFTWARE DISCOVERY INDEX

The workshop identified many potential characteristics and features of an ecosystem in which users can locate, cite, and reuse software. As discussed at the workshop, one prerequisite for such an ecosystem is the use of unique identifiers that are obtained by developers and linked to software wherever it is hosted.

Unique identifiers

Unique identifiers for biomedical software are critical for all that follows. The specific system of identifiers used is of far less importance than the adoption of those identifiers among software developers, software users, and publishers. Even so, the choice of identifiers could make it easier or harder to meet the needs of each of these communities.

The temporally dynamic nature of software development makes unambiguous identification difficult. Individual software packages may have many versions, may be branched along different development paths, and may be bundled into collections with other packages. Identifiers must operate across all of these cases, both disambiguating and linking related tools.

The system of identifiers should also enable the association of metadata with software. That metadata should facilitate the identification of scientifically relevant software packages. Collecting this information as a static catalog or set of web pages runs the significant risk of perpetuating stale metadata. Facing the same challenge, the open-source software community has developed multiple ways to capture project metadata with minimal duplication. The most common approach is to define a format in which the metadata is stored as part of the project itself and then scraped by any interested parties. Software developers then only have to provide the metadata once, enabling the Software Discovery Index and other interested parties to scrape and use it, and updates by the developers are automatically reflected in all repositories. In this effort, controlled vocabularies and ontologies may prove useful, but they should not be the primary focus of the initial effort.
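To make the scraping model concrete, here is a minimal sketch of a harvester, assuming a hypothetical in-repository metadata file named software-metadata.json with an invented field set; the report does not prescribe any particular file name or format:

```python
import json
import urllib.request

# Hypothetical location and name of a metadata file that a project keeps in
# its own repository; the field set below is an assumption for illustration.
METADATA_URL = ("https://raw.githubusercontent.com/example-lab/example-tool/"
                "main/software-metadata.json")

REQUIRED_FIELDS = {"identifier", "title", "version", "license", "synopsis"}

def harvest(url: str) -> dict:
    """Fetch a project's self-published metadata and check minimal fields."""
    with urllib.request.urlopen(url) as resp:
        record = json.load(resp)
    missing = REQUIRED_FIELDS - record.keys()
    if missing:
        raise ValueError(f"metadata missing required fields: {sorted(missing)}")
    return record

# An index would run this periodically across every registered project, so
# developers write the metadata once and every aggregator sees their updates.
```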
Connections to publishers

There is increasing recognition within the scientific community that recording how software is used is a critical part of the scientific record. The dissemination of scientific results, however performed, must unambiguously describe the software used to generate those results and the steps performed. With publications currently the lingua franca for disseminating biomedical research results, connections with journal publishers will be essential to this effort.

Comprehensively and efficiently tracking the use of software in research requires a new standard for software citations. At present, most software is cited indirectly, by citing either a publication or a URL where the software is described. Citing publications leverages the existing publication citation infrastructure, but it only enables citation of software described in publications. Even software described in publications, if actively developed, is likely to cycle through many more released versions than publications. URLs pointing at descriptions of software not only fail to provide a standardized mechanism for tracking versions and metadata, but also frequently break as documentation and source code move. To support reproducibility and archiving, a persistent mechanism for citing software, even software no longer under active development, is critical.

A consistent system of unique identifiers for software, together with an API for querying those identifiers, would enable a better system for citing software. When used in publications, these identifiers would make it possible to identify all publications using a particular software tool. Retrieving the citations from publications can be accomplished through direct submission from journals, extraction by MEDLINE, and full-text mining.

One major initiative that aims to address this issue is the use of Research Resource Identifiers (RRIDs), currently underway at FORCE11 and led by a partnership between the University of California, San Diego, and Oregon Health & Science University. The RRID project makes it easier to track key research resources within the biomedical literature by ensuring that authors provide unique identifiers (RRIDs) for each resource used to produce the results of a published study. The initial pilot project was launched in February 2014 with over 30 journals agreeing to ask authors during submission to provide RRIDs for antibodies, for genetically modified animals, and for software tools/databases. The project established a centralized portal (http://scicrunch.com/resources) that aggregates accession numbers from authoritative registries for each of these types of resources and enables searching by identifiers.
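As a toy illustration of the full-text-mining route, the sketch below scans manuscript text for RRID-style tokens. The regular expression is a deliberate simplification of the several accession forms in actual use, and the accessions in the sample string are for illustration only:

```python
import re

# Simplified pattern for RRID-style citations such as "RRID:SCR_002798";
# real accessions come in several prefixed forms, so this is only a sketch.
RRID_PATTERN = re.compile(r"RRID:\s*([A-Za-z]+_\d+)")

def extract_rrids(text: str) -> list[str]:
    """Return the unique RRID accessions mentioned in a block of text."""
    return sorted(set(RRID_PATTERN.findall(text)))

sample = ("Images were analyzed with ImageJ (RRID: SCR_003070) and "
          "statistics were computed in R (RRID:SCR_001905).")
print(extract_rrids(sample))  # ['SCR_001905', 'SCR_003070']
```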
Digital Object Identifiers (DOIs) represent another broadly used class of identifier. DOIs are widely used in publishing, and there are ongoing efforts to leverage their capabilities for software citation. One significant initiative is a collaboration between Mozilla, figshare, GitHub, and Zenodo that allows software developers to easily mint DOIs for their tools.
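One practical consequence of building on DOIs is that existing resolver infrastructure already returns machine-readable metadata. The sketch below uses standard DOI content negotiation; the DOI shown is a placeholder, not a real deposit:

```python
import json
import urllib.request

# Placeholder DOI for a software deposit; substitute a real one to run this.
doi = "10.5281/zenodo.0000000"

# doi.org supports content negotiation: requesting CSL JSON returns
# citation-ready metadata from the registration agency (e.g., DataCite).
req = urllib.request.Request(
    f"https://doi.org/{doi}",
    headers={"Accept": "application/vnd.citationstyles.csl+json"},
)
with urllib.request.urlopen(req) as resp:
    metadata = json.load(resp)

print(metadata.get("title"))
```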
Ultimately, successful use of unique identifiers for software requires not only a structure, but also social adoption. No tracking system will work unless authors begin properly citing the software used in their research. Funding agencies have set a precedent in driving public access among grantees that may be useful in encouraging adoption. Pilot projects have shown that both journals and authors recognize the need for such a system and are willing to adjust their workflows to accommodate it.

Use cases

The combination of unique identifiers for software and the use of those identifiers in publications will enable the creation of a rich dataset of software relevant to the research community. This dataset will be captured through the Software Discovery Index, consolidating data on software packages and their use. The Index would not be a new repository for software code, but rather a resource collecting data from many repositories, publishers, and other sources. The Index, though a highly tangible aspect of the effort, is likely to be highly susceptible to feature creep. It is essential, therefore, to select with care the subset of features appropriate for the next phase of implementation; not all of the features described here will qualify.

One obvious function of the Discovery Index is to index metadata describing software. The metadata selected for inclusion must be carefully considered for relevance to various users. A selection of metadata fields is listed in Appendix 1. Appropriate metadata, identified by a broad community, should be useful for a range of systems both within and beyond NIH.

Aggregating data across multiple sources will make software from those sources more comparable, enabling the calculation of software ratings and utility scores. These scores should depend on a range of criteria, including citations in the scientific literature, documentation, codebase activity, and the vitality of the user community. Though capturing metrics on these attributes will require a significant development effort, this information has the potential to be of tremendous value to the scientific community. Much additional metadata is likely to be of value to the research community even if it is not currently amenable to quantitative comparison between different software packages. Even without quantitative measures of software reliability, there are many reliability indicators worth tracking. Completeness of documentation can be measured at several levels, including whether it covers installing the code or understanding the impact of changing run-time variables. The presence of unit or integration tests is critically important, but currently impractical to measure rigorously. Inclusion of benchmarking results can help describe the operation of software. Having metrics such as these available, whether or not they are factored into a reliability score, will help researchers select software.

As a secondary benefit, exposing the results of such measures may encourage developers to invest in documentation, unit tests, benchmarking, and other best practices. With benchmarking in particular, the Index has the potential to simplify the task by providing gold-standard datasets against which benchmarks can be run. Ultimately, if the Index captures multiple measures of software utility and quality, it should be possible to offer certification levels that signal compliance with various best practices. Such recognition, already being examined, could both help users wishing to find software and encourage software developers to follow those best practices.

Supporting reproducibility is a critical need and an area where the Index is likely to grow significantly with time. While citations alone are initially unlikely to provide enough information to enable full reproducibility of published results, they do provide a framework for documenting not only the software used, but also the environment and parameters. In time, one of the major contributions of the Index may be in improving the reproducibility of published analyses.

Finally, to maximize the utility of this index, it is critical that its information be exposed via both a website and an API. The website should provide a convenient point of entry that supports faceted search and browsing of software tools. It is expected to evolve significantly as this effort matures, but it is important that the first iteration be usable and streamlined. Over the long term, however, the API is likely to be at least as important as the website. Websites serving specific research communities are likely to want to provide their own filtered views of the data, and this should be encouraged through an API. Moreover, resources such as Synapse, GitHub, Zenodo, figshare, and SciCrunch/NIF may wish to expose their software to this index, and the API should enable this as well. A thoroughly documented and usable API, for both providing data to this aggregator and retrieving data from it, will be critical to its long-term success.
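What "providing data to this aggregator and retrieving data from it" could look like from a client's perspective is sketched below. Every endpoint, route, and field name is invented for illustration; the report does not specify an API:

```python
import json
import urllib.parse
import urllib.request

# Invented base URL and routes; the report does not define a concrete API.
BASE = "https://api.softwarediscoveryindex.example/v1"

def search(query: str) -> list:
    """Retrieve: find software records matching a free-text query."""
    url = f"{BASE}/software?q={urllib.parse.quote(query)}"
    with urllib.request.urlopen(url) as resp:
        return json.load(resp)["results"]

def register(record: dict) -> str:
    """Provide: submit a metadata record and return its assigned identifier."""
    req = urllib.request.Request(
        f"{BASE}/software",
        data=json.dumps(record).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["identifier"]

# A community site could build its own filtered view on top of search(),
# while a repository could push the records it curates through register().
```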
The use cases briefly outlined here represent only a small slice of the likely eventual functionality of a successful Software Discovery Index. As the index and citation patterns expand, the novel datasets generated should enable further uses not yet planned.

Complementarity with the Data Discovery Index

The Data Discovery Index (DDI) is an NIH Big Data to Knowledge (BD2K) project. The DDI will enable investigators to discover, access, and cite biomedical big data. The DDI aims to cut across disciplines and to provide an index that broadly serves all NIH investigators. We expect that the Software Discovery Index will be fully compatible with the DDI, with the goal of allowing DDI and Software Discovery Index objects to appear in electronic journal articles, enabling comprehensive retrieval of both data and the software used to analyze or produce those datasets.

B. CHALLENGES AND REMAINING QUESTIONS

The framework proposed above consists of endorsing identifiers, collaborating with publishers, and developing a Software Discovery Index. Each of these tasks carries particular challenges. Some of these challenges must be solved now, while others should be considered, and possible solutions proposed, in the first iteration of this effort.

Two important early needs are to define the scope of this project and to provide a help desk. A key element of defining the scope of this initial project will be deciding what software should be covered. Though ultimately the system should not limit itself to NIH-supported software, a limited scope could be useful in the early stages. The help desk should help users navigate this system. Software developers, in particular, will need guidance on obtaining unique identifiers for their software and on crafting useful metadata files. Other users are also likely to benefit from assistance in locating, citing, and tracking software.

Defining relevant software

Defining relevant software will be a challenge for this effort. Biomedical researchers use a tremendous amount of software that does not need to be captured here. It is relatively clear that no citation is necessary for a text editor, even if its features greatly helped a researcher. Likewise, it is relatively clear that the statistical analysis package used to analyze a dataset should be cited. A great deal of biomedical software, however, falls between these two extremes. Many researchers use a few lines of script to store parameters for command-line programs. Sometimes those simple scripts grow and become tools in their own right. In other cases, exploratory tools may have been critical for generating initial hypotheses but not used to generate or analyze the published data. It will be important to balance completeness with navigability. Multiple avenues exist for achieving this, and it will be important to select the approach with care.

Integrating with other repositories

Aggregating data from multiple sources, though it opens major opportunities to improve software development and design, also requires integration with multiple repositories. The goal of the Software Discovery Index is not to replace existing software repositories, but rather to pull as much information from them as possible and to present that information in a consistent and useful form. This is similar to the role that PubMed plays for journals: PubMed aggregates the results and provides them in a consistent form, but does not oversee curation or peer review. Similarly, the repositories will likely be the ones that ensure standard metadata and provide some degree of curation. This will require strong relationships with multiple existing repositories and a willingness to work with new repositories that contain relevant software.

Evaluating progress and distinguishing this from other efforts

This system is not the first attempt to create a software index for NIH-supported software, and we should learn from prior efforts. For example, NIH dedicated significant support to the BioSiteMaps effort. Numerous researchers have created lists of significant software in their own fields, for example through curated projects like the Neuroimaging Informatics Tools and Resources Clearinghouse (NITRC). Finally, the RRID project and the underlying Neuroscience Information Framework Resource Registry have been broadly populated and are used by a broad community. It will be critical to consider what distinguishes this effort from previous ones, as well as any overlap, in order to define metrics for success. Key distinguishing features of this effort include the automated indexing of software, the integration with multiple registries, and the provision of APIs enabling the creation of community-specific user interfaces.

C. IMPLEMENTATION ROADMAP

Below is a preliminary list of milestones involved in implementing a Software Discovery Index:

- Define a checklist for the minimal information about software (see Appendix 1).
- Develop and implement methods to assign unique identifiers to software systems, leveraging existing approaches where possible (see the sketch after this list).
- Establish and maintain an API for searching, browsing, entering data, and interacting with journals.
- Establish and maintain a facile and streamlined website with search and browsing capability for software tools (see Appendix 2 for a selection of use cases).
- Partner with journal editors to implement the selected unique identifiers in electronic publications. At a minimum this would require identifying relevant journals, developing file formats and APIs for data exchange, and developing documentation for authors.
- Establish and maintain an Advisory Working Group of international members of the user community, software developers, software repositories, other relevant electronic repositories, and journals.
- Engage effectively with the Data Discovery Index.
- Implement performance metrics for the Software Discovery Index (see Appendix 3). Summary results should be made available publicly, while detailed results should be made available to funders and advisory groups.
- Promote the Software Discovery Index in journal editorials, conferences, social media, and scientific publications.
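To make the identifier milestone concrete, one minimal scheme a pilot could adopt is a deterministic identifier derived from tool name and version, so the same release always maps to the same ID while distinct versions stay distinguishable. This scheme is invented for illustration; the report deliberately leaves the choice of identifier system open, with DOIs and RRIDs as obvious existing candidates:

```python
import uuid

# Invented namespace for illustration; a real system would standardize one,
# or adopt existing identifiers such as DOIs or RRIDs instead.
SDI_NAMESPACE = uuid.uuid5(uuid.NAMESPACE_URL, "https://softwarediscoveryindex.org")

def software_id(name: str, version: str) -> str:
    """Deterministically derive an identifier for one release of one tool."""
    return "SDI:" + str(uuid.uuid5(SDI_NAMESPACE, f"{name}/{version}"))

# The same release always yields the same identifier; versions differ.
assert software_id("bowtie2", "2.2.3") == software_id("bowtie2", "2.2.3")
assert software_id("bowtie2", "2.2.3") != software_id("bowtie2", "2.2.4")
```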
D. CONCLUSIONS

The Software Discovery Index Meeting of May 12-13, 2014 was a valuable forum for many important discussions. Participants agreed that, given the unprecedented abundance of electronically encoded information such as 'omic data, imaging, and electronic health records, the software required to manage and understand these data has become increasingly critical to biomedical research. Properly documenting this software is essential.

Meeting attendees also agreed that software is no longer incidental to the data; the systems used to produce and analyze raw data must be indexed to support reproducibility of analyses. Due diligence in reviewing existing projects, including the RRID project, will provide valuable insights into how this system should operate. Assigning universal locators to software enables significant improvements in the processes for finding, citing, and reusing software. This workshop proposed the use of unique identifiers for software packages, the formation of collaborations with publishers to track software used in publications, and the creation of a Software Discovery Index to provide information on software packages. If successful, an implementation of these three efforts would benefit software developers, software users, journal publishers, and funding agencies.

E. APPENDIXES

Appendix 1: Minimal information about software (MIAS)

A common set of metadata fields is critical for useful indexing. If this effort provided only refined free-text searching, it would not be a major improvement over currently available resources. It is necessary, therefore, to define a key set of minimal fields that can provide maximum value. At the workshop, the following fields were described as candidates for inclusion in this list:

- Persistent identifier
- Software title
- Software version
- Software license
- Links to code repository
- Human-readable synopsis
- Author names and affiliations
- Terms to describe software objectives or functions, and/or the following two bullets (controlled by an appropriate ontology)
- Formats for data inputs and outputs
- Platform, environment, and dependencies
- Associated grants and publications
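A record carrying these minimal fields might look like the sketch below; the field names paraphrase the checklist above, and every value is invented:

```python
# Illustrative MIAS-style record: field names paraphrase the checklist above,
# and all values are invented.
mias_record = {
    "identifier": "SDI:0000-example",          # persistent identifier
    "title": "ExampleAligner",
    "version": "1.4.2",
    "license": "BSD-3-Clause",
    "code_repository": "https://github.com/example-lab/example-aligner",
    "synopsis": "Short-read aligner for bacterial genomes.",
    "authors": [{"name": "A. Researcher", "affiliation": "Example University"}],
    "function_terms": ["sequence alignment"],  # ideally ontology-controlled
    "input_formats": ["FASTQ"],
    "output_formats": ["BAM"],
    "platform": "Linux; Python >= 3.8",
    "grants": ["R01-EXAMPLE"],
    "publications": ["PMID:00000000"],
}
```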
Appendix 2: Use cases

- Developer: A developer registers her software and is then able to track and quantify all use of that software in scientific publications, through comprehensive and accurate citation of the index-associated identifier. With the ability to find similar types of software packages (e.g., other assembly programs), she can also identify benchmarking datasets and related software development efforts.
- User: An NIH-funded researcher seeking analysis software is able to identify the software most appropriate for their study, their data, their computer systems, and their objectives, and is provided with all the information necessary to locate, obtain, and deploy it.
- NIH: A program officer can identify both the creation and the use of all software funded by a grant they have awarded, analogous to how they can track all papers, and citations to those papers, funded by that grant. They can also identify similar or overlapping products. Review panels can assess software choices in funding proposals and data management plans.
- Publisher: A publisher can associate software with their publications, during peer review and upon publication, for citation. They can also pull and display metrics for all the research objects surrounding an article, including software, based on the software identifier.

Appendix 3: Metrics and milestones

It is critical to define metrics for this effort. These metrics should be evaluated both in absolute terms and in relative terms, monitoring growth over time. Metrics are particularly significant because this is not the first effort to make biomedical software more accessible to researchers. This effort will face many of the same challenges as its predecessors, and it is critical to monitor closely whether it is accomplishing its purpose. Specific metrics proposed for the initial effort included:

- Number of developers contributing software
- Number of software records created
- Software identifiers appearing in, and extracted from, publications
- Links from publications to software records
- Links between indexed software and other resources, people, and data
- Annotation of existing collections of software packages (e.g., Bioconductor)
- The number of interoperating resources, including repositories, aggregation resources, and user forums
- The use of the APIs to re-package the data for specific use cases
- The proportion of NIH-supported software tracked by the Software Discovery Index

Tracking these metrics will provide insight into the progress of this effort. Progress should, wherever possible, be evaluated against specific milestones, such as the fraction of NIH-supported software included in the first year, the time to machine-actionable links to software in PubMed, and the time to establish the API.

Appendix 4: Existing software indexes

There are numerous existing software indexes serving specific communities, many of them unrelated to biomedical research. Some of the challenges that this effort will face have already been addressed by these indexes.

There are existing package management systems, notably RPM and dpkg for Linux. These tools facilitate the installation, upgrading, and removal of software packages. Both systems have ways to unambiguously track software packages and ways to aggregate data on those packages, two significant requirements articulated at this workshop. It is also interesting to note that both are low-level tools with which users typically interact via higher-level interfaces. This modular model fits well with what was described at the workshop, with its focus on providing an extendable framework that others can leverage. The model differs from that famously employed in the Apple App Store and Google Play, where the software is directly hosted and managed by the index itself. The index described here would be a lower-level construct that supports various package management functions but does not itself perform them.
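For readers unfamiliar with these package managers, the sketch below queries dpkg's database for a package's identity on a Debian-style system (rpm users would reach similar information with `rpm -qi <name>`). The index described in this report would sit a level above such tools, aggregating across them rather than replacing them:

```python
import subprocess

def debian_package_info(name: str) -> dict:
    """Ask dpkg's database for one installed package's name and version."""
    # dpkg-query -W prints metadata fields selected by the -f format string.
    out = subprocess.run(
        ["dpkg-query", "-W", "-f", "${Package}\t${Version}\n", name],
        capture_output=True, text=True, check=True,
    ).stdout
    package, version = out.rstrip("\n").split("\t")
    return {"package": package, "version": version}

print(debian_package_info("coreutils"))  # e.g. {'package': 'coreutils', ...}
```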
SciCrunch/NIF/RRID: The Neuroscience Information Framework (http://neuinfo.org) is a project of the NIH Blueprint Consortium that has been surveying, cataloging, and federating data and resources (tools, materials, services) of relevance to neuroscience since 2008. It has maintained and populated the NIF Registry, a high-level metadata catalog of research resources currently comprising over 11,000 resources, and has tracked them over the past six years. Through its unique data ingestion and query platform, it has created a search engine for data that searches across over 200 independent databases comprising over 800 million records. Although NIF was developed for neuroscience, it has expanded well beyond primary neuroscience resources to biomedical resources as a whole; accordingly, the software behind NIF was rechristened SciCrunch. SciCrunch allows different communities, e.g., NIF, to use the same infrastructure and data sources to create their own portals. The SciCrunch Registry provides the unique identifiers for software tools and databases for the Resource Identification Initiative (the RRID project; http://scicrunch.com/resources). SciCrunch aggregates software tools from multiple repositories, e.g., NITRC. It uses authoritative identifiers where possible and assigns an identifier when the source repository does not provide one.

Participation in the RRID project was voluntary, i.e., not a condition of publication, and was requested by the journals through an email to the author. The project deliberately did not require journals to modify their submission systems, in order to allow broad participation. To date, ~50 papers using RRIDs have appeared in 11 different journals, and over 200 RRIDs have been reported. The FORCE11 working group is collecting data on the use of RRIDs in the literature and is making it freely available.

The error rate to date is ~7%. Papers using RRIDs can be retrieved from Google Scholar by searching for a particular identifier. A resolving service has also been developed so that third-party tools can use RRIDs to link to a resolvable record and to map identifiers where needed. Automated routines based on natural language processing are being developed to recognize RRIDs and to suggest appropriate RRIDs based on the resources described. Currently, RRIDs are assigned only at the level of the software tool or data resource; that is, they do not specify versioning information. This was a calculated decision, as the primary objective of the RRID pilot was to determine whether unique identification of software and other resources would be achievable by publishers and authors. In the future, the RRID project will aim to include more detailed and machine-actionable information, as informed by the outcomes of these and other community discussions.
Neuroimaging Informatics Tools and Resources Clearinghouse (NITRC): Since 2006, NITRC has provided a comprehensive support infrastructure for resources, including software, in the neuroimaging domain (covering MRI, PET, EEG, MEG, SPECT, CT, and optical neuroimaging tools and resources). NITRC fosters a user-friendly clearinghouse environment for the neuroimaging informatics community. Its goal is to support researchers dedicated to enhancing, adopting, distributing, and contributing to the evolution of previously funded neuroimaging analysis tools and resources for broader community use. Located at www.nitrc.org, NITRC promotes software tools, workflows, resources, vocabularies, test data, and now pre-processed, community-generated datasets (1000 Functional Connectomes, ADHD-200) through its Image Repository (NITRC-IR). NITRC gives researchers greater and more efficient access to the tools and resources they need by better categorizing and organizing existing tools and resources via a controlled vocabulary; facilitating interactions between researchers and developers through forums, direct email contact, ratings, and reviews; and promoting better use through enhanced documentation.

nanoHUB.org: Starting in 2002, the NSF-sponsored Network for Computational Nanotechnology established a web site at nanoHUB.org to support the National Nanotechnology Initiative. Any user within the community can contribute a simulation, modeling, or analysis tool to this platform. Tools are not only cataloged but hosted, so that any user can run a tool through the web at the click of a button, without having to download or install any software. In 2013, more than 13,100 users launched some 500,000 simulation jobs using more than 340 different simulators contributed by the community and installed on nanoHUB. These tools have been used by 22,649 students across 1,165 courses at 185 institutions. nanoHUB also hosts more than 4,000 other resources, including seminars, tutorials, animations, and even complete courses, that help to document the tools and educate new users. In the last 12 months alone, nanoHUB served more than 300,000 unique users with this content, and that number has been doubling every 18 months. In June 2011, the National Science and Technology Council's Materials Genome Initiative for Global Competitiveness highlighted nanoHUB as an exemplar of the "open innovation" that is critical for global competitiveness. The HUBzero software that powers nanoHUB.org is available as open source, and more than 60 other projects have used it to create similar "hubs" for their own scientific communities.

****

Comments

• Martin Fenner says:

October 7, 2014 at 7:57 pm

The workshop and the report are very timely, as the scholarly community is working through numerous initiatives to make scientific software more discoverable and to give software authors due credit. The importance of persistent identifiers and a central searchable index with metadata can't be stressed enough, and I applaud this group for this activity. As always, the devil is in the details, and I have a number of comments and concerns regarding the report.

Persistent identifiers are a social contract. For them to work, you need a set of agreed principles between the organization issuing the identifiers and the person or organization using the identifier with the software. For journal articles (publishers) and datasets (data centers) these roles are clear, but it is not clear to me from the report who would play that role for software. A software developer, or a code repository (at least popular commercial repositories such as SourceForge or GitHub), is the wrong partner for this, as they have priorities other than keeping the software available for the next 20 years or more.
One possible approach is for data repositories such as figshare or Zenodo to take that role, building on the work they have already done. Or we may see more specialized software repositories evolve that cater to those needs, with additional features compared to traditional data repositories. I see one of the biggest gaps right now in repositories for long-term preservation of software that intelligently integrate with code repositories. Without this partner with a long-term perspective, persistent identifiers don't work.

Another consideration is duplication of effort. I would argue that it would be a bad idea to invent a new persistent identifier just for software. I personally think that DOIs are perfect for this, and I wish the report used stronger language to support DOIs for this activity. (DataCite and CrossRef) DOIs have required metadata (e.g., version, license, contributors, related identifiers) stored in a central, searchable metadata store, and DataCite in particular has considered software in its metadata schema. DOIs also work with a lot of existing infrastructure, so the extra effort to include special considerations for software would be much smaller than building a new persistent-identifier infrastructure for software.

Lastly, software should be cited the same way as books, articles, and data, with the citation appearing in the reference list (a citation to the software itself, not only to a paper describing the software). Not only is this something that authors, readers, and publishers are familiar with, it also makes it much easier to actually find these software citations. Software citations within the body text are very difficult to find if the content is not open access, and the variations in formatting (software name only or also a link, in many different places in the manuscript, etc.) make it very hard for automated tools to find them.

Something that DataCite is not providing is a software citation index, and a service built on DOIs should be developed to fill that void. This service should also cooperate with CrossRef, as we will hopefully see an increasing number of software citations in the reference lists of scholarly papers. These software citations should be extracted not only from journal articles but from wherever they occur, most importantly in other software packages and in association with datasets. Another important activity, missing today and mentioned in the report, is a listing of appropriate software repositories (both code repositories and repositories for long-term preservation). Databib and re3data, together with DataCite, are doing this for data repositories, and a software repository discovery index could be either a part of those services or a separate activity.

• Vijayaraj Nagarajan says:

October 8, 2014 at 12:59 pm

Excellent point about the potential problems with relying on a third-party repository.
NCBI has the ability to archive genome-scale data and is already doing so successfully. If not for all software, why not create a small facility under the hood of NCBI to archive publicly funded software?

Such a system could be more robust, with easy integration into the other data components of NCBI. It would also leverage the expertise NCBI has acquired over the years in creating an excellent archiving ecosystem. I don't think infrastructure resources would be a concern for such an in-house repository, considering its scope and size in relation to NCBI's existing archives.

• Steven Salzberg says:

October 8, 2014 at 9:09 pm

This is a good suggestion: NCBI is already set up to handle repositories, and it would be a simple matter for them to create a small software repository. I see no need for anything more elaborate, if we even need this at all. Most good software gets published, and we have papers to cite already.

• Istvan Albert says:

October 7, 2014 at 7:59 pm

What seems to be missing from the MIAS fields is any mention of minimal user support. Shouldn't there be some requirement ensuring that avenues are available for assisting someone having problems or needing advice?

This is why Biostars (https://www.biostars.org/) exists, and the hundreds of thousands of visitors that we get monthly indicate that the problem of software support is just as important as that of categorizing software properly.

• W. Trevor King says:

October 7, 2014 at 9:26 pm

I think encouraging researchers to cite the software they used to conduct their research and analysis is important, but I'm happy leaving the threshold up to the researchers (e.g., not listing their text editor, but listing their experiment-control and stats packages with version numbers). It also makes sense to require publicly funded software to be archived somewhere to ensure it will be accessible in the future. Beyond that, I don't see much need for additional tooling. Won't users be able to find software by looking through the papers written by folks doing similar research, or through a generic search engine?

I also don't see how any of this will help review panels assess software choices in funding proposals and data management plans. Assessing software seems much more complicated and subjective than something you can condense into indexed metadata. Is there an active community using and developing this software? How responsive are the maintainers to bug reports and patch submissions? I think assessing that sort of thing needs a more organic touch.

• Vijayaraj Nagarajan says:

October 8, 2014 at 3:05 pm

This is why it might be of great help to create our own, first-of-its-kind, in-house biomedical software repository. This would enable us to provide that "organic touch", as is done in the NCBI GEO repository and its indexing. NCBI does an amazing job of giving this organic touch to all submissions made to that repository, which has made GEO an overwhelming success.