A foundation model for joint segmentation, detection and recognition of biomedical objects across nine modalities
Author: 小柯机器人 (Xiao Ke Robot)    Published: 2024/11/21 15:45:10
This issue: Nature Methods, online publication

Researchers led by Sheng Wang at the University of Washington and collaborators report that a single foundation model can jointly segment, detect and recognize biomedical objects across nine imaging modalities. The study was published online in Nature Methods on November 18, 2024.

The researchers present BiomedParse, a biomedical foundation model that performs segmentation, detection and recognition jointly across nine imaging modalities. This joint learning improves the accuracy of the individual tasks and enables new applications, such as segmenting all relevant objects in an image from a textual description. To train BiomedParse, the researchers built a large dataset of more than 6 million triples of image, segmentation mask and textual description by leveraging the natural-language labels or descriptions that accompany existing datasets.

They show that BiomedParse outperforms existing methods on image segmentation across the nine imaging modalities, with larger improvements on objects with irregular shapes. They further show that BiomedParse can simultaneously segment and label all objects in an image. In short, BiomedParse is an all-in-one tool for biomedical image analysis across all major imaging modalities, paving the way for efficient and accurate image-based biomedical discovery.

The researchers note that biomedical image analysis is fundamental to biomedical discovery. Holistic image analysis comprises interdependent subtasks such as segmentation, detection and recognition, which traditional approaches tackle separately.
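As described above, BiomedParse maps an image plus a free-text description of the target object to a segmentation mask, and it is trained on (image, mask, description) triples. The Python sketch below is a minimal illustration of that interface under stated assumptions: the names Triple, TextPromptedSegmenter and predict_mask are hypothetical placeholders, not the published BiomedParse API, and the prediction logic is a trivial intensity threshold so the example runs end to end.

```python
# Minimal sketch of the text-prompted segmentation interface described in the
# article. All names below are hypothetical placeholders, not the BiomedParse API;
# the "model" is a stub that thresholds pixel intensities so the example executes.
from dataclasses import dataclass
import numpy as np


@dataclass
class Triple:
    """One training example as described in the article: image, mask, text."""
    image: np.ndarray        # H x W grayscale image
    mask: np.ndarray         # H x W binary mask of the described object
    description: str         # natural-language label, e.g. "tumor in abdominal CT"


class TextPromptedSegmenter:
    """Stand-in for a promptable segmentation model: (image, text) -> mask."""

    def predict_mask(self, image: np.ndarray, prompt: str) -> np.ndarray:
        # A real model would condition the prediction on the text prompt;
        # this stub simply thresholds intensities so the sketch is executable.
        return (image > image.mean()).astype(np.uint8)


def dice(pred: np.ndarray, target: np.ndarray) -> float:
    """Dice coefficient, the overlap metric commonly reported for segmentation."""
    inter = np.logical_and(pred, target).sum()
    return 2.0 * inter / (pred.sum() + target.sum() + 1e-8)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    image = rng.random((64, 64))
    target = (image > 0.5).astype(np.uint8)            # toy ground-truth mask
    example = Triple(image, target, "nucleus in a pathology image")

    model = TextPromptedSegmenter()
    pred = model.predict_mask(example.image, example.description)
    print(f"Dice against the toy ground truth: {dice(pred, example.mask):.3f}")
```

In the actual system, the text prompt is what conditions the mask prediction, which is how a single model can cover differently named objects across all nine modalities rather than needing one specialized segmenter per task.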
Appendix: original English abstract

Title: A foundation model for joint segmentation, detection and recognition of biomedical objects across nine modalities

Author: Zhao, Theodore; Gu, Yu; Yang, Jianwei; Usuyama, Naoto; Lee, Ho Hin; Kiblawi, Sid; Naumann, Tristan; Gao, Jianfeng; Crabtree, Angela; Abel, Jacob; Moung-Wen, Christine; Piening, Brian; Bifulco, Carlo; Wei, Mu; Poon, Hoifung; Wang, Sheng

Issue & Volume: 2024-11-18

Abstract: Biomedical image analysis is fundamental for biomedical discovery. Holistic image analysis comprises interdependent subtasks such as segmentation, detection and recognition, which are tackled separately by traditional approaches. Here, we propose BiomedParse, a biomedical foundation model that can jointly conduct segmentation, detection and recognition across nine imaging modalities. This joint learning improves the accuracy for individual tasks and enables new applications such as segmenting all relevant objects in an image through a textual description. To train BiomedParse, we created a large dataset comprising over 6 million triples of image, segmentation mask and textual description by leveraging natural language labels or descriptions accompanying existing datasets. We showed that BiomedParse outperformed existing methods on image segmentation across nine imaging modalities, with larger improvement on objects with irregular shapes. We further showed that BiomedParse can simultaneously segment and label all objects in an image. In summary, BiomedParse is an all-in-one tool for biomedical image analysis on all major image modalities, paving the path for efficient and accurate image-based biomedical discovery.

DOI: 10.1038/s41592-024-02499-w

Source: https://www.nature.com/articles/s41592-024-02499-w

Journal information

Nature Methods, founded in 2004, is published by Springer Nature; latest impact factor: 47.99.
Official website: https://www.nature.com/nmeth/
Submission link: https://mts-nmeth.nature.com/cgi-bin/main.plex