CINXE.COM
Generative AI
<!DOCTYPE html><html lang="en"><head><meta charSet="utf-8"/><style> html, body, button { font-family: '__Lato_814572', '__Lato_Fallback_814572', Helvetica Neue, Helvetica, Arial, sans-serif !important; } </style><link rel="canonical" href="https://www.catalyzex.com/s/Generative%20AI"/><style> html, body, button { font-family: '__Lato_814572', '__Lato_Fallback_814572', Helvetica Neue, Helvetica, Arial, sans-serif !important; } </style><meta name="viewport" content="width=device-width, initial-scale=1.0, shrink-to-fit=no"/><title>Generative AI</title><meta name="description" content="Generative AI or generative artificial intelligence refers to a type of AI that can create various types of content including text, audio, music, images, videos, and code. This is powered by large models called foundation models that are trained on massive datasets to perform out-of-the-box tasks including classification, summarization, video and audio comprehension, prediction, Q&A, and more. Browse open-source code and papers on Generative AI to catalyze your projects, and easily connect with engineers and experts when you need help."/><meta property="og:title" content="Generative AI"/><meta property="og:description" content="Generative AI or generative artificial intelligence refers to a type of AI that can create various types of content including text, audio, music, images, videos, and code. This is powered by large models called foundation models that are trained on massive datasets to perform out-of-the-box tasks including classification, summarization, video and audio comprehension, prediction, Q&A, and more. Browse open-source code and papers on Generative AI to catalyze your projects, and easily connect with engineers and experts when you need help."/><meta name="twitter:title" content="Generative AI"/><meta name="twitter:description" content="Generative AI or generative artificial intelligence refers to a type of AI that can create various types of content including text, audio, music, images, videos, and code. This is powered by large models called foundation models that are trained on massive datasets to perform out-of-the-box tasks including classification, summarization, video and audio comprehension, prediction, Q&A, and more. Browse open-source code and papers on Generative AI to catalyze your projects, and easily connect with engineers and experts when you need help."/><meta name="twitter:image" content="https://www.catalyzex.com/favicon.ico"/><meta name="og:image" content="https://www.catalyzex.com/favicon.ico"/><script type="application/ld+json">{"@context":"https://schema.org","@graph":[{"@type":"SearchResultsPage","name":"Generative AI","description":"Generative AI or generative artificial intelligence refers to a type of AI that can create various types of content including text, audio, music, images, videos, and code. This is powered by large models called foundation models that are trained on massive datasets to perform out-of-the-box tasks including classification, summarization, video and audio comprehension, prediction, Q&A, and more. Browse open-source code and papers on Generative AI to catalyze your projects, and easily connect with engineers and experts when you need help.","url":"https://www.catalyzex.com/s/Generative AI","mainEntity":[{"@type":"ItemList","name":"Generative AI papers","numberOfItems":200,"itemListElement":[{"@type":"ListItem","position":1,"item":{"@type":"ScholarlyArticle","url":"https://www.catalyzex.com/paper/autonomous-ai-imitators-increase-diversity-in","name":"Autonomous AI imitators increase diversity in homogeneous information ecosystems","datePublished":"2025-03-21","author":[{"@type":"Person","name":"Emil Bakkensen Johansen","url":"https://www.catalyzex.com/author/Emil Bakkensen Johansen"},{"@type":"Person","name":"Oliver Baumann","url":"https://www.catalyzex.com/author/Oliver Baumann"}]}},{"@type":"ListItem","position":2,"item":{"@type":"ScholarlyArticle","url":"https://www.catalyzex.com/paper/the-diagram-is-like-guardrails-structuring","name":"\"The Diagram is like Guardrails\": Structuring GenAI-assisted Hypotheses Exploration with an Interactive Shared Representation","datePublished":"2025-03-21","author":[{"@type":"Person","name":"Zijian Ding","url":"https://www.catalyzex.com/author/Zijian Ding"},{"@type":"Person","name":"Michelle Brachman","url":"https://www.catalyzex.com/author/Michelle Brachman"},{"@type":"Person","name":"Joel Chan","url":"https://www.catalyzex.com/author/Joel Chan"},{"@type":"Person","name":"Werner Geyer","url":"https://www.catalyzex.com/author/Werner Geyer"}]}},{"@type":"ListItem","position":3,"item":{"@type":"ScholarlyArticle","url":"https://www.catalyzex.com/paper/automating-adjudication-of-cardiovascular","name":"Automating Adjudication of Cardiovascular Events Using Large Language Models","datePublished":"2025-03-21","author":[{"@type":"Person","name":"Sonish Sivarajkumar","url":"https://www.catalyzex.com/author/Sonish Sivarajkumar"},{"@type":"Person","name":"Kimia Ameri","url":"https://www.catalyzex.com/author/Kimia Ameri"},{"@type":"Person","name":"Chuqin Li","url":"https://www.catalyzex.com/author/Chuqin Li"},{"@type":"Person","name":"Yanshan Wang","url":"https://www.catalyzex.com/author/Yanshan Wang"},{"@type":"Person","name":"Min Jiang","url":"https://www.catalyzex.com/author/Min Jiang"}]}},{"@type":"ListItem","position":4,"item":{"@type":"ScholarlyArticle","url":"https://www.catalyzex.com/paper/can-ai-expose-tax-loopholes-towards-a-new","name":"Can AI expose tax loopholes? Towards a new generation of legal policy assistants","datePublished":"2025-03-21","author":[{"@type":"Person","name":"Peter Fratri膷","url":"https://www.catalyzex.com/author/Peter Fratri膷"},{"@type":"Person","name":"Nils Holzenberger","url":"https://www.catalyzex.com/author/Nils Holzenberger"},{"@type":"Person","name":"David Restrepo Amariles","url":"https://www.catalyzex.com/author/David Restrepo Amariles"}]}},{"@type":"ListItem","position":5,"item":{"@type":"ScholarlyArticle","url":"https://www.catalyzex.com/paper/position-interactive-generative-video-as-next","name":"Position: Interactive Generative Video as Next-Generation Game Engine","datePublished":"2025-03-21","author":[{"@type":"Person","name":"Jiwen Yu","url":"https://www.catalyzex.com/author/Jiwen Yu"},{"@type":"Person","name":"Yiran Qin","url":"https://www.catalyzex.com/author/Yiran Qin"},{"@type":"Person","name":"Haoxuan Che","url":"https://www.catalyzex.com/author/Haoxuan Che"},{"@type":"Person","name":"Quande Liu","url":"https://www.catalyzex.com/author/Quande Liu"},{"@type":"Person","name":"Xintao Wang","url":"https://www.catalyzex.com/author/Xintao Wang"},{"@type":"Person","name":"Pengfei Wan","url":"https://www.catalyzex.com/author/Pengfei Wan"},{"@type":"Person","name":"Di Zhang","url":"https://www.catalyzex.com/author/Di Zhang"},{"@type":"Person","name":"Xihui Liu","url":"https://www.catalyzex.com/author/Xihui Liu"}]}},{"@type":"ListItem","position":6,"item":{"@type":"ScholarlyArticle","url":"https://www.catalyzex.com/paper/greeniq-a-deep-search-platform-for","name":"GreenIQ: A Deep Search Platform for Comprehensive Carbon Market Analysis and Automated Report Generation","datePublished":"2025-03-21","author":[{"@type":"Person","name":"Oluwole Fagbohun","url":"https://www.catalyzex.com/author/Oluwole Fagbohun"},{"@type":"Person","name":"Sai Yashwanth","url":"https://www.catalyzex.com/author/Sai Yashwanth"},{"@type":"Person","name":"Akinyemi Sadeeq Akintola","url":"https://www.catalyzex.com/author/Akinyemi Sadeeq Akintola"},{"@type":"Person","name":"Ifeoluwa Wurola","url":"https://www.catalyzex.com/author/Ifeoluwa Wurola"},{"@type":"Person","name":"Lanre Shittu","url":"https://www.catalyzex.com/author/Lanre Shittu"},{"@type":"Person","name":"Aniema Inyang","url":"https://www.catalyzex.com/author/Aniema Inyang"},{"@type":"Person","name":"Oluwatimilehin Odubola","url":"https://www.catalyzex.com/author/Oluwatimilehin Odubola"},{"@type":"Person","name":"Udodirim Offia","url":"https://www.catalyzex.com/author/Udodirim Offia"},{"@type":"Person","name":"Said Olanrewaju","url":"https://www.catalyzex.com/author/Said Olanrewaju"},{"@type":"Person","name":"Ogidan Toluwaleke","url":"https://www.catalyzex.com/author/Ogidan Toluwaleke"},{"@type":"Person","name":"Ilemona Abutu","url":"https://www.catalyzex.com/author/Ilemona Abutu"},{"@type":"Person","name":"Taiwo Akinbolaji","url":"https://www.catalyzex.com/author/Taiwo Akinbolaji"}]}},{"@type":"ListItem","position":7,"item":{"@type":"ScholarlyArticle","url":"https://www.catalyzex.com/paper/advancing-problem-based-learning-in","name":"Advancing Problem-Based Learning in Biomedical Engineering in the Era of Generative AI","datePublished":"2025-03-20","author":[{"@type":"Person","name":"Micky C. Nnamdi","url":"https://www.catalyzex.com/author/Micky C. Nnamdi"},{"@type":"Person","name":"J. Ben Tamo","url":"https://www.catalyzex.com/author/J. Ben Tamo"},{"@type":"Person","name":"Wenqi Shi","url":"https://www.catalyzex.com/author/Wenqi Shi"},{"@type":"Person","name":"May D. Wang","url":"https://www.catalyzex.com/author/May D. Wang"}]}},{"@type":"ListItem","position":8,"item":{"@type":"ScholarlyArticle","url":"https://www.catalyzex.com/paper/world-knowledge-from-ai-image-generation-for","name":"World Knowledge from AI Image Generation for Robot Control","datePublished":"2025-03-20","author":[{"@type":"Person","name":"Jonas Krumme","url":"https://www.catalyzex.com/author/Jonas Krumme"},{"@type":"Person","name":"Christoph Zetzsche","url":"https://www.catalyzex.com/author/Christoph Zetzsche"}]}},{"@type":"ListItem","position":9,"item":{"@type":"ScholarlyArticle","url":"https://www.catalyzex.com/paper/conversational-user-ai-intervention-a-study","name":"Conversational User-AI Intervention: A Study on Prompt Rewriting for Improved LLM Response Generation","datePublished":"2025-03-21","author":[{"@type":"Person","name":"Rupak Sarkar","url":"https://www.catalyzex.com/author/Rupak Sarkar"},{"@type":"Person","name":"Bahareh Sarrafzadeh","url":"https://www.catalyzex.com/author/Bahareh Sarrafzadeh"},{"@type":"Person","name":"Nirupama Chandrasekaran","url":"https://www.catalyzex.com/author/Nirupama Chandrasekaran"},{"@type":"Person","name":"Nagu Rangan","url":"https://www.catalyzex.com/author/Nagu Rangan"},{"@type":"Person","name":"Philip Resnik","url":"https://www.catalyzex.com/author/Philip Resnik"},{"@type":"Person","name":"Longqi Yang","url":"https://www.catalyzex.com/author/Longqi Yang"},{"@type":"Person","name":"Sujay Kumar Jauhar","url":"https://www.catalyzex.com/author/Sujay Kumar Jauhar"}]}},{"@type":"ListItem","position":10,"item":{"@type":"ScholarlyArticle","url":"https://www.catalyzex.com/paper/truthlens-explainable-deepfake-detection-for","name":"TruthLens: Explainable DeepFake Detection for Face Manipulated and Fully Synthetic Data","datePublished":"2025-03-20","author":[{"@type":"Person","name":"Rohit Kundu","url":"https://www.catalyzex.com/author/Rohit Kundu"},{"@type":"Person","name":"Athula Balachandran","url":"https://www.catalyzex.com/author/Athula Balachandran"},{"@type":"Person","name":"Amit K. Roy-Chowdhury","url":"https://www.catalyzex.com/author/Amit K. Roy-Chowdhury"}]}}]}]}]}</script><link rel="preload" as="image" imageSrcSet="/_next/image?url=%2F_next%2Fstatic%2Fmedia%2Ffilter.cf288982.png&w=640&q=75 1x, /_next/image?url=%2F_next%2Fstatic%2Fmedia%2Ffilter.cf288982.png&w=1080&q=75 2x" fetchpriority="high"/><meta name="next-head-count" content="15"/><link rel="preconnect" href="https://fonts.googleapis.com"/><link rel="preconnect" href="https://fonts.gstatic.com" crossorigin=""/><meta charSet="utf-8"/><meta http-equiv="X-UA-Compatible" content="IE=edge"/><meta name="p:domain_verify" content="7a8c54ff8920a71e909037ac35612f4e"/><meta name="author" content="CatalyzeX"/><meta property="og:type" content="website"/><meta property="og:site_name" content="CatalyzeX"/><meta property="og:url" content="https://www.catalyzex.com/"/><meta property="fb:app_id" content="658945670928778"/><meta property="fb:admins" content="515006233"/><meta name="twitter:card" content="summary_large_image"/><meta name="twitter:domain" content="www.catalyzex.com"/><meta name="twitter:site" content="@catalyzex"/><meta name="twitter:creator" content="@catalyzex"/><script data-partytown-config="true"> partytown = { lib: "/_next/static/~partytown/", forward: [ "gtag", "mixpanel.track", "mixpanel.track_pageview", "mixpanel.identify", "mixpanel.people.set", "mixpanel.reset", "mixpanel.get_distinct_id", "mixpanel.set_config", "manuallySyncMixpanelId" ] }; </script><link rel="preconnect" href="https://fonts.gstatic.com" crossorigin /><link rel="preload" href="/_next/static/media/155cae559bbd1a77-s.p.woff2" as="font" type="font/woff2" crossorigin="anonymous" data-next-font="size-adjust"/><link rel="preload" href="/_next/static/media/4de1fea1a954a5b6-s.p.woff2" as="font" type="font/woff2" crossorigin="anonymous" data-next-font="size-adjust"/><link rel="preload" href="/_next/static/media/6d664cce900333ee-s.p.woff2" as="font" type="font/woff2" crossorigin="anonymous" data-next-font="size-adjust"/><link rel="preload" href="/_next/static/css/21db9cac47ff2c1f.css" as="style"/><link rel="stylesheet" href="/_next/static/css/21db9cac47ff2c1f.css" data-n-g=""/><link rel="preload" href="/_next/static/css/b8053a51356cf568.css" as="style"/><link rel="stylesheet" href="/_next/static/css/b8053a51356cf568.css" data-n-p=""/><noscript data-n-css=""></noscript><script defer="" nomodule="" src="/_next/static/chunks/polyfills-78c92fac7aa8fdd8.js"></script><script data-partytown="">!(function(w,p,f,c){if(!window.crossOriginIsolated && !navigator.serviceWorker) return;c=w[p]=w[p]||{};c[f]=(c[f]||[])})(window,'partytown','forward');/* Partytown 0.10.2 - MIT builder.io */ const t={preserveBehavior:!1},e=e=>{if("string"==typeof e)return[e,t];const[n,r=t]=e;return[n,{...t,...r}]},n=Object.freeze((t=>{const e=new Set;let n=[];do{Object.getOwnPropertyNames(n).forEach((t=>{"function"==typeof n[t]&&e.add(t)}))}while((n=Object.getPrototypeOf(n))!==Object.prototype);return Array.from(e)})());!function(t,r,o,i,a,s,c,d,l,p,u=t,f){function h(){f||(f=1,"/"==(c=(s.lib||"/~partytown/")+(s.debug?"debug/":""))[0]&&(l=r.querySelectorAll('script[type="text/partytown"]'),i!=t?i.dispatchEvent(new CustomEvent("pt1",{detail:t})):(d=setTimeout(v,1e4),r.addEventListener("pt0",w),a?y(1):o.serviceWorker?o.serviceWorker.register(c+(s.swPath||"partytown-sw.js"),{scope:c}).then((function(t){t.active?y():t.installing&&t.installing.addEventListener("statechange",(function(t){"activated"==t.target.state&&y()}))}),console.error):v())))}function y(e){p=r.createElement(e?"script":"iframe"),t._pttab=Date.now(),e||(p.style.display="block",p.style.width="0",p.style.height="0",p.style.border="0",p.style.visibility="hidden",p.setAttribute("aria-hidden",!0)),p.src=c+"partytown-"+(e?"atomics.js?v=0.10.2":"sandbox-sw.html?"+t._pttab),r.querySelector(s.sandboxParent||"body").appendChild(p)}function v(n,o){for(w(),i==t&&(s.forward||[]).map((function(n){const[r]=e(n);delete t[r.split(".")[0]]})),n=0;n<l.length;n++)(o=r.createElement("script")).innerHTML=l[n].innerHTML,o.nonce=s.nonce,r.head.appendChild(o);p&&p.parentNode.removeChild(p)}function w(){clearTimeout(d)}s=t.partytown||{},i==t&&(s.forward||[]).map((function(r){const[o,{preserveBehavior:i}]=e(r);u=t,o.split(".").map((function(e,r,o){var a;u=u[o[r]]=r+1<o.length?u[o[r]]||(a=o[r+1],n.includes(a)?[]:{}):(()=>{let e=null;if(i){const{methodOrProperty:n,thisObject:r}=((t,e)=>{let n=t;for(let t=0;t<e.length-1;t+=1)n=n[e[t]];return{thisObject:n,methodOrProperty:e.length>0?n[e[e.length-1]]:void 0}})(t,o);"function"==typeof n&&(e=(...t)=>n.apply(r,...t))}return function(){let n;return e&&(n=e(arguments)),(t._ptf=t._ptf||[]).push(o,arguments),n}})()}))})),"complete"==r.readyState?h():(t.addEventListener("DOMContentLoaded",h),t.addEventListener("load",h))}(window,document,navigator,top,window.crossOriginIsolated);</script><script src="https://www.googletagmanager.com/gtag/js?id=G-BD14FTHPNC" type="text/partytown" data-nscript="worker"></script><script src="/_next/static/chunks/webpack-74a7c512fa42fc69.js" defer=""></script><script src="/_next/static/chunks/main-819661c54c38eafc.js" defer=""></script><script src="/_next/static/chunks/pages/_app-7f9dc6693ce04520.js" defer=""></script><script src="/_next/static/chunks/117-cbf0dd2a93fca997.js" defer=""></script><script src="/_next/static/chunks/602-80e933e094e77991.js" defer=""></script><script src="/_next/static/chunks/947-ca6cb45655821eab.js" defer=""></script><script src="/_next/static/chunks/403-8b84e5049c16d49f.js" defer=""></script><script src="/_next/static/chunks/460-cfc8c96502458833.js" defer=""></script><script src="/_next/static/chunks/68-8acf76971c46bf47.js" defer=""></script><script src="/_next/static/chunks/pages/search-695fe919c8d5cb9b.js" defer=""></script><script src="/_next/static/rcP1HS6ompi8ywYpLW-WW/_buildManifest.js" defer=""></script><script src="/_next/static/rcP1HS6ompi8ywYpLW-WW/_ssgManifest.js" defer=""></script><style data-href="https://fonts.googleapis.com/css2?family=Lato:wght@300;400;700&display=swap">@font-face{font-family:'Lato';font-style:normal;font-weight:300;font-display:swap;src:url(https://fonts.gstatic.com/s/lato/v24/S6u9w4BMUTPHh7USeww.woff) format('woff')}@font-face{font-family:'Lato';font-style:normal;font-weight:400;font-display:swap;src:url(https://fonts.gstatic.com/s/lato/v24/S6uyw4BMUTPHvxo.woff) format('woff')}@font-face{font-family:'Lato';font-style:normal;font-weight:700;font-display:swap;src:url(https://fonts.gstatic.com/s/lato/v24/S6u9w4BMUTPHh6UVeww.woff) format('woff')}@font-face{font-family:'Lato';font-style:normal;font-weight:300;font-display:swap;src:url(https://fonts.gstatic.com/s/lato/v24/S6u9w4BMUTPHh7USSwaPGQ3q5d0N7w.woff2) format('woff2');unicode-range:U+0100-02AF,U+0304,U+0308,U+0329,U+1E00-1E9F,U+1EF2-1EFF,U+2020,U+20A0-20AB,U+20AD-20C0,U+2113,U+2C60-2C7F,U+A720-A7FF}@font-face{font-family:'Lato';font-style:normal;font-weight:300;font-display:swap;src:url(https://fonts.gstatic.com/s/lato/v24/S6u9w4BMUTPHh7USSwiPGQ3q5d0.woff2) format('woff2');unicode-range:U+0000-00FF,U+0131,U+0152-0153,U+02BB-02BC,U+02C6,U+02DA,U+02DC,U+0304,U+0308,U+0329,U+2000-206F,U+2074,U+20AC,U+2122,U+2191,U+2193,U+2212,U+2215,U+FEFF,U+FFFD}@font-face{font-family:'Lato';font-style:normal;font-weight:400;font-display:swap;src:url(https://fonts.gstatic.com/s/lato/v24/S6uyw4BMUTPHjxAwXiWtFCfQ7A.woff2) format('woff2');unicode-range:U+0100-02AF,U+0304,U+0308,U+0329,U+1E00-1E9F,U+1EF2-1EFF,U+2020,U+20A0-20AB,U+20AD-20C0,U+2113,U+2C60-2C7F,U+A720-A7FF}@font-face{font-family:'Lato';font-style:normal;font-weight:400;font-display:swap;src:url(https://fonts.gstatic.com/s/lato/v24/S6uyw4BMUTPHjx4wXiWtFCc.woff2) format('woff2');unicode-range:U+0000-00FF,U+0131,U+0152-0153,U+02BB-02BC,U+02C6,U+02DA,U+02DC,U+0304,U+0308,U+0329,U+2000-206F,U+2074,U+20AC,U+2122,U+2191,U+2193,U+2212,U+2215,U+FEFF,U+FFFD}@font-face{font-family:'Lato';font-style:normal;font-weight:700;font-display:swap;src:url(https://fonts.gstatic.com/s/lato/v24/S6u9w4BMUTPHh6UVSwaPGQ3q5d0N7w.woff2) format('woff2');unicode-range:U+0100-02AF,U+0304,U+0308,U+0329,U+1E00-1E9F,U+1EF2-1EFF,U+2020,U+20A0-20AB,U+20AD-20C0,U+2113,U+2C60-2C7F,U+A720-A7FF}@font-face{font-family:'Lato';font-style:normal;font-weight:700;font-display:swap;src:url(https://fonts.gstatic.com/s/lato/v24/S6u9w4BMUTPHh6UVSwiPGQ3q5d0.woff2) format('woff2');unicode-range:U+0000-00FF,U+0131,U+0152-0153,U+02BB-02BC,U+02C6,U+02DA,U+02DC,U+0304,U+0308,U+0329,U+2000-206F,U+2074,U+20AC,U+2122,U+2191,U+2193,U+2212,U+2215,U+FEFF,U+FFFD}</style></head><body><div id="__next"><script id="google-analytics" type="text/partytown"> window.dataLayer = window.dataLayer || []; window.gtag = function gtag(){window.dataLayer.push(arguments);} gtag('js', new Date()); gtag('config', 'G-BD14FTHPNC', { page_path: window.location.pathname, }); </script><script type="text/partytown"> const MIXPANEL_CUSTOM_LIB_URL = 'https://www.catalyzex.com/mp-cdn/libs/mixpanel-2-latest.min.js'; (function(f,b){if(!b.__SV){var e,g,i,h;window.mixpanel=b;b._i=[];b.init=function(e,f,c){function g(a,d){var b=d.split(".");2==b.length&&(a=a[b[0]],d=b[1]);a[d]=function(){a.push([d].concat(Array.prototype.slice.call(arguments,0)))}}var a=b;"undefined"!==typeof c?a=b[c]=[]:c="mixpanel";a.people=a.people||[];a.toString=function(a){var d="mixpanel";"mixpanel"!==c&&(d+="."+c);a||(d+=" (stub)");return d};a.people.toString=function(){return a.toString(1)+".people (stub)"};i="disable time_event track track_pageview track_links track_forms track_with_groups add_group set_group remove_group register register_once alias unregister identify name_tag set_config reset opt_in_tracking opt_out_tracking has_opted_in_tracking has_opted_out_tracking clear_opt_in_out_tracking start_batch_senders people.set people.set_once people.unset people.increment people.append people.union people.track_charge people.clear_charges people.delete_user people.remove".split(" "); for(h=0;h<i.length;h++)g(a,i[h]);var j="set set_once union unset remove delete".split(" ");a.get_group=function(){function b(c){d[c]=function(){call2_args=arguments;call2=[c].concat(Array.prototype.slice.call(call2_args,0));a.push([e,call2])}}for(var d={},e=["get_group"].concat(Array.prototype.slice.call(arguments,0)),c=0;c<j.length;c++)b(j[c]);return d};b._i.push([e,f,c])};b.__SV=1.2;e=f.createElement("script");e.type="text/javascript";e.async=!0;e.src="undefined"!==typeof MIXPANEL_CUSTOM_LIB_URL? MIXPANEL_CUSTOM_LIB_URL:"file:"===f.location.protocol&&"//catalyzex.com/mp-cdn/libs/mixpanel-2-latest.min.js".match(/^\/\//)?"https://www.catalyzex.com/mp-cdn/libs/mixpanel-2-latest.min.js":"//catalyzex.com/mp-cdn/libs/mixpanel-2-latest.min.js";g=f.getElementsByTagName("script")[0];g.parentNode.insertBefore(e,g)}})(document,window.mixpanel||[]); mixpanel.init("851392464b60e8cc1948a193642f793b", { api_host: "https://www.catalyzex.com/mp", }) manuallySyncMixpanelId = function(currentMixpanelId) { const inMemoryProps = mixpanel?.persistence?.props if (inMemoryProps) { inMemoryProps['distinct_id'] = currentMixpanelId inMemoryProps['$device_id'] = currentMixpanelId delete inMemoryProps['$user_id'] } } </script><div class="Layout_layout-container__GqQwY"><div><div data-testid="banner-main-container" id="Banner_banner-main-container__DgEOW" class="cx-banner"><span class="Banner_content__a4ws8 Banner_default-content___HRmT">Get our free extension to see links to code for papers anywhere online!</span><span class="Banner_content__a4ws8 Banner_small-content__iQlll">Free add-on: code for papers everywhere!</span><span class="Banner_content__a4ws8 Banner_extra-small-content__qkq9E">Free add-on: See code for papers anywhere!</span><div class="Banner_banner-button-section__kX1fj"><a class="Banner_banner-social-button__b3sZ7 Banner_browser-button__6CbLf" href="https://chrome.google.com/webstore/detail/%F0%9F%92%BB-catalyzex-link-all-aim/aikkeehnlfpamidigaffhfmgbkdeheil" rel="noreferrer" target="_blank"><p><img src="/static/images/google-chrome.svg" alt="Chrome logo"/>Add to <!-- -->Chrome</p></a><a class="Banner_firefox-button__nwnR6 Banner_banner-social-button__b3sZ7 Banner_browser-button__6CbLf" href="https://addons.mozilla.org/en-US/firefox/addon/code-finder-catalyzex" rel="noreferrer" target="_blank"><p><img src="/static/images/firefox.svg" alt="Firefox logo"/>Add to <!-- -->Firefox</p></a><a class="Banner_banner-social-button__b3sZ7 Banner_browser-button__6CbLf" href="https://microsoftedge.microsoft.com/addons/detail/get-papers-with-code-ever/mflbgfojghoglejmalekheopgadjmlkm" rel="noreferrer" target="_blank"><p><img src="/static/images/microsoft-edge.svg" alt="Edge logo"/>Add to <!-- -->Edge</p></a></div><div id="Banner_banner-close-button__68_52" class="banner-close-button" data-testid="banner-close-icon" role="button" tabindex="0" aria-label="Home"><svg xmlns="http://www.w3.org/2000/svg" fill="none" viewBox="0 0 24 24" stroke-width="1.5" stroke="currentColor" aria-hidden="true" data-slot="icon" height="22" width="22" color="black"><path stroke-linecap="round" stroke-linejoin="round" d="m9.75 9.75 4.5 4.5m0-4.5-4.5 4.5M21 12a9 9 0 1 1-18 0 9 9 0 0 1 18 0Z"></path></svg></div></div></div><section data-hydration-on-demand="true"></section><div data-testid="header-main-container" class="Header_navbar__bVRQt"><nav><div><a class="Header_navbar-brand__9oFe_" href="/"><svg version="1.0" xmlns="http://www.w3.org/2000/svg" width="466.000000pt" height="466.000000pt" viewBox="0 0 466.000000 466.000000" preserveAspectRatio="xMidYMid meet" data-testid="catalyzex-header-icon"><title>CatalyzeX Icon</title><g transform="translate(0.000000,466.000000) scale(0.100000,-0.100000)" fill="#000000" stroke="none"><path d="M405 3686 c-42 -18 -83 -69 -92 -114 -4 -20 -8 -482 -8 -1027 l0 -990 25 -44 c16 -28 39 -52 65 -65 38 -20 57 -21 433 -24 444 -3 487 1 538 52 18 18 37 50 43 71 7 25 11 154 11 343 l0 302 -165 0 -165 0 0 -240 0 -240 -225 0 -225 0 0 855 0 855 225 0 225 0 0 -225 0 -225 166 0 165 0 -3 308 c-3 289 -4 309 -24 342 -11 19 -38 45 -60 57 -39 23 -42 23 -469 22 -335 0 -437 -3 -460 -13z"></path><path d="M1795 3686 c-16 -7 -38 -23 -48 -34 -47 -52 -46 -27 -47 -1262 0 -808 3 -1177 11 -1205 14 -50 63 -102 109 -115 19 -5 142 -10 273 -10 l238 0 -3 148 -3 147 -125 2 c-69 0 -135 1 -147 2 l-23 1 0 1025 0 1025 150 0 150 0 0 145 0 145 -252 0 c-188 -1 -261 -4 -283 -14z"></path><path d="M3690 3555 l0 -145 155 0 155 0 0 -1025 0 -1025 -27 0 c-16 -1 -84 -2 -153 -3 l-125 -2 -3 -148 -3 -148 258 3 c296 3 309 7 351 88 l22 45 -2 1202 c-3 1196 -3 1202 -24 1229 -11 15 -33 37 -48 48 -26 20 -43 21 -292 24 l-264 3 0 -146z"></path><path d="M2520 2883 c0 -5 70 -164 156 -356 l157 -347 -177 -374 c-97 -205 -176 -376 -176 -380 0 -3 77 -5 171 -4 l172 3 90 228 c49 125 93 227 97 227 4 0 47 -103 95 -230 l87 -230 174 0 c96 0 174 2 174 3 0 2 -79 172 -175 377 -96 206 -175 378 -175 382 0 8 303 678 317 701 2 4 -70 7 -161 7 l-164 0 -83 -210 c-45 -115 -85 -210 -89 -210 -4 0 -43 95 -86 210 l-79 210 -162 0 c-90 0 -163 -3 -163 -7z"></path></g></svg></a></div></nav></div><div data-testid="search-page-main-container" class="Search_search-page-main-container__1ayN5"><div class="Searchbar_search-bar-container__xIN4L rounded-border Search_searchbar-component__QG9QV" id="searchbar-component"><form class="Searchbar_search-bar-container__xIN4L" data-testid="search-bar-form"><div><svg xmlns="http://www.w3.org/2000/svg" fill="none" viewBox="0 0 24 24" stroke-width="1.5" stroke="currentColor" aria-hidden="true" data-slot="icon" height="22"><title>Search Icon</title><path stroke-linecap="round" stroke-linejoin="round" d="m21 21-5.197-5.197m0 0A7.5 7.5 0 1 0 5.196 5.196a7.5 7.5 0 0 0 10.607 10.607Z"></path></svg><input class="form-control Searchbar_search-field__L9Oaa" type="text" id="search-field" name="search" required="" autoComplete="off" placeholder="What are you working on?" value="Generative AI"/><button class="Searchbar_clear-form__WzDSJ" type="button"><svg xmlns="http://www.w3.org/2000/svg" fill="none" viewBox="0 0 24 24" stroke-width="1.5" stroke="currentColor" aria-hidden="true" data-slot="icon" height="24" width="24"><path stroke-linecap="round" stroke-linejoin="round" d="M6 18 18 6M6 6l12 12"></path></svg></button><button class="Searchbar_filter-icon-container__qAKJN" type="button" title="search by advanced filters like language/framework, computational requirement, dataset, use case, hardware, etc."><div class="Searchbar_pulse1__6sv_E"></div><img alt="Alert button" fetchpriority="high" width="512" height="512" decoding="async" data-nimg="1" class="Searchbar_filter-icon__0rBbt" style="color:transparent" srcSet="/_next/image?url=%2F_next%2Fstatic%2Fmedia%2Ffilter.cf288982.png&w=640&q=75 1x, /_next/image?url=%2F_next%2Fstatic%2Fmedia%2Ffilter.cf288982.png&w=1080&q=75 2x" src="/_next/image?url=%2F_next%2Fstatic%2Fmedia%2Ffilter.cf288982.png&w=1080&q=75"/></button></div></form></div><div class="Search_search-title-container__QvnOo"><h1><span class="descriptor" style="display:none">Topic:</span><b>Generative AI</b></h1><div class="wrapper Search_alert-wrapper__mJrm4"><button class="AlertButton_alert-btn__pC8cK" title="Get latest alerts for these search results"><img alt="Alert button" id="alert_btn" loading="lazy" width="512" height="512" decoding="async" data-nimg="1" class="alert-btn-image " style="color:transparent" srcSet="/_next/image?url=%2F_next%2Fstatic%2Fmedia%2Falert_light_mode_icon.b8fca154.png&w=640&q=75 1x, /_next/image?url=%2F_next%2Fstatic%2Fmedia%2Falert_light_mode_icon.b8fca154.png&w=1080&q=75 2x" src="/_next/image?url=%2F_next%2Fstatic%2Fmedia%2Falert_light_mode_icon.b8fca154.png&w=1080&q=75"/></button><svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 106 34" style="margin-left:9px"><g class="sparkles"><path style="animation:sparkle 2s 0s infinite ease-in-out" d="M15.5740361 -10.33344622s1.1875777-6.20179466 2.24320232 0c0 0 5.9378885 1.05562462 0 2.11124925 0 0-1.05562463 6.33374774-2.24320233 0-3.5627331-.6597654-3.29882695-1.31953078 0-2.11124925z"></path><path style="animation:sparkle 1.5s 0.9s infinite ease-in-out" d="M33.5173993 75.97263826s1.03464615-5.40315215 1.95433162 0c0 0 5.17323078.91968547 0 1.83937095 0 0-.91968547 5.51811283-1.95433162 0-3.10393847-.57480342-2.8740171-1.14960684 0-1.83937095z"></path><path style="animation:sparkle 1.7s 0.4s infinite ease-in-out" d="M69.03038108 1.71240809s.73779281-3.852918 1.39360864 0c0 0 3.68896404.65581583 0 1.31163166 0 0-.65581583 3.93489497-1.39360864 0-2.21337842-.4098849-2.04942447-.81976979 0-1.31163166z"></path></g></svg></div><br/></div><p class="Search_topic-blurb-content__b9CTE"><span class="descriptor" style="display:none">What is Generative AI? </span>Generative AI or generative artificial intelligence refers to a type of AI that can create various types of content including text, audio, music, images, videos, and code. This is powered by large models called foundation models that are trained on massive datasets to perform out-of-the-box tasks including classification, summarization, video and audio comprehension, prediction, Q&A, and more.</p><h3 class="descriptor" style="display:none">Papers and Code</h3><div data-testid="toggle-search-bar" id="Search_toggle-search-bar__PbOLK"><span></span><div style="position:relative;display:inline-block;text-align:left;opacity:1;direction:ltr;border-radius:11px;-webkit-transition:opacity 0.25s;-moz-transition:opacity 0.25s;transition:opacity 0.25s;touch-action:none;-webkit-tap-highlight-color:rgba(0, 0, 0, 0);-webkit-user-select:none;-moz-user-select:none;-ms-user-select:none;user-select:none"><div class="react-switch-bg" style="height:22px;width:45px;margin:0;position:relative;background:#888888;border-radius:11px;cursor:pointer;-webkit-transition:background 0.25s;-moz-transition:background 0.25s;transition:background 0.25s"></div><div class="react-switch-handle" style="height:15px;width:15px;background:#ffffff;display:inline-block;cursor:pointer;border-radius:50%;position:absolute;transform:translateX(3.5px);top:3.5px;outline:0;border:0;-webkit-transition:background-color 0.25s, transform 0.25s, box-shadow 0.15s;-moz-transition:background-color 0.25s, transform 0.25s, box-shadow 0.15s;transition:background-color 0.25s, transform 0.25s, box-shadow 0.15s"></div><input type="checkbox" role="switch" aria-checked="false" style="border:0;clip:rect(0 0 0 0);height:1px;margin:-1px;overflow:hidden;padding:0;position:absolute;width:1px" aria-label="Search with code"/></div></div><div><section data-testid="paper-details-container" class="Search_paper-details-container__Dou2Q"><h2 class="Search_paper-heading__bq58c"><a data-testid="paper-result-title" href="/paper/autonomous-ai-imitators-increase-diversity-in"><strong>Autonomous AI imitators increase diversity in homogeneous information ecosystems</strong></a></h2><div class="Search_buttons-container__WWw_l"><a href="#" target="_blank" id="request-code-2503.16021" data-testid="view-code-button" class="Search_view-code-link__xOgGF"><button type="button" class="btn Search_view-button__D5D2K Search_buttons-spacing__iB2NS Search_black-button__O7oac Search_view-code-button__8Dk6Z"><svg role="img" height="14" width="24" viewBox="0 0 24 24" xmlns="http://www.w3.org/2000/svg" fill="#fff"><title>Github Icon</title><path d="M12 .297c-6.63 0-12 5.373-12 12 0 5.303 3.438 9.8 8.205 11.385.6.113.82-.258.82-.577 0-.285-.01-1.04-.015-2.04-3.338.724-4.042-1.61-4.042-1.61C4.422 18.07 3.633 17.7 3.633 17.7c-1.087-.744.084-.729.084-.729 1.205.084 1.838 1.236 1.838 1.236 1.07 1.835 2.809 1.305 3.495.998.108-.776.417-1.305.76-1.605-2.665-.3-5.466-1.332-5.466-5.93 0-1.31.465-2.38 1.235-3.22-.135-.303-.54-1.523.105-3.176 0 0 1.005-.322 3.3 1.23.96-.267 1.98-.399 3-.405 1.02.006 2.04.138 3 .405 2.28-1.552 3.285-1.23 3.285-1.23.645 1.653.24 2.873.12 3.176.765.84 1.23 1.91 1.23 3.22 0 4.61-2.805 5.625-5.475 5.92.42.36.81 1.096.81 2.22 0 1.606-.015 2.896-.015 3.286 0 .315.21.69.825.57C20.565 22.092 24 17.592 24 12.297c0-6.627-5.373-12-12-12"></path></svg>Request Code</button></a><button type="button" class="Search_buttons-spacing__iB2NS Search_related-code-btn__F5B3X" data-testid="related-code-button"><span class="descriptor" style="display:none">Code for Similar Papers:</span><img alt="Code for Similar Papers" title="View code for similar papers" loading="lazy" width="37" height="35" decoding="async" data-nimg="1" style="color:transparent" srcSet="/_next/image?url=%2F_next%2Fstatic%2Fmedia%2Frelated_icon_transparent.98f57b13.png&w=48&q=75 1x, /_next/image?url=%2F_next%2Fstatic%2Fmedia%2Frelated_icon_transparent.98f57b13.png&w=96&q=75 2x" src="/_next/image?url=%2F_next%2Fstatic%2Fmedia%2Frelated_icon_transparent.98f57b13.png&w=96&q=75"/></button><a class="Search_buttons-spacing__iB2NS Search_add-code-button__GKwQr" target="_blank" href="/add_code?title=Autonomous AI imitators increase diversity in homogeneous information ecosystems&paper_url=http://arxiv.org/abs/2503.16021" rel="nofollow"><img alt="Add code" title="Contribute your code for this paper to the community" loading="lazy" width="36" height="36" decoding="async" data-nimg="1" style="color:transparent" srcSet="/_next/image?url=%2F_next%2Fstatic%2Fmedia%2Faddcode_white.6afb879f.png&w=48&q=75 1x, /_next/image?url=%2F_next%2Fstatic%2Fmedia%2Faddcode_white.6afb879f.png&w=96&q=75 2x" src="/_next/image?url=%2F_next%2Fstatic%2Fmedia%2Faddcode_white.6afb879f.png&w=96&q=75"/></a><div class="wrapper Search_buttons-spacing__iB2NS BookmarkButton_bookmark-wrapper__xJaOg"><button title="Bookmark this paper"><img alt="Bookmark button" id="bookmark-btn" loading="lazy" width="388" height="512" decoding="async" data-nimg="1" class="BookmarkButton_bookmark-btn-image__gkInJ" style="color:transparent" srcSet="/_next/image?url=%2F_next%2Fstatic%2Fmedia%2Fbookmark_outline.3a3e1c2c.png&w=640&q=75 1x, /_next/image?url=%2F_next%2Fstatic%2Fmedia%2Fbookmark_outline.3a3e1c2c.png&w=828&q=75 2x" src="/_next/image?url=%2F_next%2Fstatic%2Fmedia%2Fbookmark_outline.3a3e1c2c.png&w=828&q=75"/></button></div><div class="wrapper Search_buttons-spacing__iB2NS"><button class="AlertButton_alert-btn__pC8cK" title="Get alerts when new code is available for this paper"><img alt="Alert button" id="alert_btn" loading="lazy" width="512" height="512" decoding="async" data-nimg="1" class="alert-btn-image " style="color:transparent" srcSet="/_next/image?url=%2F_next%2Fstatic%2Fmedia%2Falert_light_mode_icon.b8fca154.png&w=640&q=75 1x, /_next/image?url=%2F_next%2Fstatic%2Fmedia%2Falert_light_mode_icon.b8fca154.png&w=1080&q=75 2x" src="/_next/image?url=%2F_next%2Fstatic%2Fmedia%2Falert_light_mode_icon.b8fca154.png&w=1080&q=75"/></button><svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 106 34" style="margin-left:9px"><g class="sparkles"><path style="animation:sparkle 2s 0s infinite ease-in-out" d="M15.5740361 -10.33344622s1.1875777-6.20179466 2.24320232 0c0 0 5.9378885 1.05562462 0 2.11124925 0 0-1.05562463 6.33374774-2.24320233 0-3.5627331-.6597654-3.29882695-1.31953078 0-2.11124925z"></path><path style="animation:sparkle 1.5s 0.9s infinite ease-in-out" d="M33.5173993 75.97263826s1.03464615-5.40315215 1.95433162 0c0 0 5.17323078.91968547 0 1.83937095 0 0-.91968547 5.51811283-1.95433162 0-3.10393847-.57480342-2.8740171-1.14960684 0-1.83937095z"></path><path style="animation:sparkle 1.7s 0.4s infinite ease-in-out" d="M69.03038108 1.71240809s.73779281-3.852918 1.39360864 0c0 0 3.68896404.65581583 0 1.31163166 0 0-.65581583 3.93489497-1.39360864 0-2.21337842-.4098849-2.04942447-.81976979 0-1.31163166z"></path></g></svg></div></div><span class="Search_publication-date__mLvO2">Mar 21, 2025<br/></span><div class="AuthorLinks_authors-container__fAwXT"><span class="descriptor" style="display:none">Authors:</span><span><a data-testid="paper-result-author" href="/author/Emil%20Bakkensen%20Johansen">Emil Bakkensen Johansen</a>, </span><span><a data-testid="paper-result-author" href="/author/Oliver%20Baumann">Oliver Baumann</a></span></div><div class="Search_paper-detail-page-images-container__FPeuN"></div><p class="Search_paper-content__1CSu5 text-with-links"><span class="descriptor" style="display:none">Abstract:</span>Recent breakthroughs in large language models (LLMs) have facilitated autonomous AI agents capable of imitating human-generated content. This technological advancement raises fundamental questions about AI's impact on the diversity and democratic value of information ecosystems. We introduce a large-scale simulation framework to examine AI-based imitation within news, a context crucial for public discourse. By systematically testing two distinct imitation strategies across a range of information environments varying in initial diversity, we demonstrate that AI-generated articles do not uniformly homogenize content. Instead, AI's influence is strongly context-dependent: AI-generated content can introduce valuable diversity in originally homogeneous news environments but diminish diversity in initially heterogeneous contexts. These results illustrate that the initial diversity of an information environment critically shapes AI's impact, challenging assumptions that AI-driven imitation uniformly threatens diversity. Instead, when information is initially homogeneous, AI-driven imitation can expand perspectives, styles, and topics. This is especially important in news contexts, where information diversity fosters richer public debate by exposing citizens to alternative viewpoints, challenging biases, and preventing narrative monopolies, which is essential for a resilient democracy.<br/></p><div class="text-with-links"><span></span><span><em>* <!-- -->35 pages, 10 figures, 4 tables; v2: corrected typographical errors,<!-- --> <!-- --> streamlined language, updated abstract, added supplementary information<!-- -->聽</em><br/></span></div><div class="Search_search-result-provider__uWcak">Via<img alt="arxiv icon" loading="lazy" width="56" height="25" decoding="async" data-nimg="1" class="Search_arxiv-icon__SXHe4" style="color:transparent" srcSet="/_next/image?url=%2F_next%2Fstatic%2Fmedia%2Farxiv.41e50dc5.png&w=64&q=75 1x, /_next/image?url=%2F_next%2Fstatic%2Fmedia%2Farxiv.41e50dc5.png&w=128&q=75 2x" src="/_next/image?url=%2F_next%2Fstatic%2Fmedia%2Farxiv.41e50dc5.png&w=128&q=75"/></div><div class="Search_paper-link__nVhf_"><svg role="img" height="20" width="24" viewBox="0 0 24 24" xmlns="http://www.w3.org/2000/svg" style="margin-right:5px"><title>Github Icon</title><path d="M12 .297c-6.63 0-12 5.373-12 12 0 5.303 3.438 9.8 8.205 11.385.6.113.82-.258.82-.577 0-.285-.01-1.04-.015-2.04-3.338.724-4.042-1.61-4.042-1.61C4.422 18.07 3.633 17.7 3.633 17.7c-1.087-.744.084-.729.084-.729 1.205.084 1.838 1.236 1.838 1.236 1.07 1.835 2.809 1.305 3.495.998.108-.776.417-1.305.76-1.605-2.665-.3-5.466-1.332-5.466-5.93 0-1.31.465-2.38 1.235-3.22-.135-.303-.54-1.523.105-3.176 0 0 1.005-.322 3.3 1.23.96-.267 1.98-.399 3-.405 1.02.006 2.04.138 3 .405 2.28-1.552 3.285-1.23 3.285-1.23.645 1.653.24 2.873.12 3.176.765.84 1.23 1.91 1.23 3.22 0 4.61-2.805 5.625-5.475 5.92.42.36.81 1.096.81 2.22 0 1.606-.015 2.896-.015 3.286 0 .315.21.69.825.57C20.565 22.092 24 17.592 24 12.297c0-6.627-5.373-12-12-12"></path></svg><svg xmlns="http://www.w3.org/2000/svg" fill="none" viewBox="0 0 24 24" stroke-width="1.5" stroke="currentColor" aria-hidden="true" data-slot="icon" width="22" style="margin-right:10px;margin-top:2px"><path stroke-linecap="round" stroke-linejoin="round" d="M12 6.042A8.967 8.967 0 0 0 6 3.75c-1.052 0-2.062.18-3 .512v14.25A8.987 8.987 0 0 1 6 18c2.305 0 4.408.867 6 2.292m0-14.25a8.966 8.966 0 0 1 6-2.292c1.052 0 2.062.18 3 .512v14.25A8.987 8.987 0 0 0 18 18a8.967 8.967 0 0 0-6 2.292m0-14.25v14.25"></path></svg><a data-testid="paper-result-access-link" href="/paper/autonomous-ai-imitators-increase-diversity-in">Access Paper or Ask Questions</a></div></section><div class="Search_seperator-line__4FidS"></div></div><div><section data-testid="paper-details-container" class="Search_paper-details-container__Dou2Q"><h2 class="Search_paper-heading__bq58c"><a data-testid="paper-result-title" href="/paper/the-diagram-is-like-guardrails-structuring"><strong>"The Diagram is like Guardrails": Structuring GenAI-assisted Hypotheses Exploration with an Interactive Shared Representation</strong></a></h2><div class="Search_buttons-container__WWw_l"><a href="#" target="_blank" id="request-code-2503.16791" data-testid="view-code-button" class="Search_view-code-link__xOgGF"><button type="button" class="btn Search_view-button__D5D2K Search_buttons-spacing__iB2NS Search_black-button__O7oac Search_view-code-button__8Dk6Z"><svg role="img" height="14" width="24" viewBox="0 0 24 24" xmlns="http://www.w3.org/2000/svg" fill="#fff"><title>Github Icon</title><path d="M12 .297c-6.63 0-12 5.373-12 12 0 5.303 3.438 9.8 8.205 11.385.6.113.82-.258.82-.577 0-.285-.01-1.04-.015-2.04-3.338.724-4.042-1.61-4.042-1.61C4.422 18.07 3.633 17.7 3.633 17.7c-1.087-.744.084-.729.084-.729 1.205.084 1.838 1.236 1.838 1.236 1.07 1.835 2.809 1.305 3.495.998.108-.776.417-1.305.76-1.605-2.665-.3-5.466-1.332-5.466-5.93 0-1.31.465-2.38 1.235-3.22-.135-.303-.54-1.523.105-3.176 0 0 1.005-.322 3.3 1.23.96-.267 1.98-.399 3-.405 1.02.006 2.04.138 3 .405 2.28-1.552 3.285-1.23 3.285-1.23.645 1.653.24 2.873.12 3.176.765.84 1.23 1.91 1.23 3.22 0 4.61-2.805 5.625-5.475 5.92.42.36.81 1.096.81 2.22 0 1.606-.015 2.896-.015 3.286 0 .315.21.69.825.57C20.565 22.092 24 17.592 24 12.297c0-6.627-5.373-12-12-12"></path></svg>Request Code</button></a><button type="button" class="Search_buttons-spacing__iB2NS Search_related-code-btn__F5B3X" data-testid="related-code-button"><span class="descriptor" style="display:none">Code for Similar Papers:</span><img alt="Code for Similar Papers" title="View code for similar papers" loading="lazy" width="37" height="35" decoding="async" data-nimg="1" style="color:transparent" srcSet="/_next/image?url=%2F_next%2Fstatic%2Fmedia%2Frelated_icon_transparent.98f57b13.png&w=48&q=75 1x, /_next/image?url=%2F_next%2Fstatic%2Fmedia%2Frelated_icon_transparent.98f57b13.png&w=96&q=75 2x" src="/_next/image?url=%2F_next%2Fstatic%2Fmedia%2Frelated_icon_transparent.98f57b13.png&w=96&q=75"/></button><a class="Search_buttons-spacing__iB2NS Search_add-code-button__GKwQr" target="_blank" href="/add_code?title="The Diagram is like Guardrails": Structuring GenAI-assisted Hypotheses Exploration with an Interactive Shared Representation&paper_url=http://arxiv.org/abs/2503.16791" rel="nofollow"><img alt="Add code" title="Contribute your code for this paper to the community" loading="lazy" width="36" height="36" decoding="async" data-nimg="1" style="color:transparent" srcSet="/_next/image?url=%2F_next%2Fstatic%2Fmedia%2Faddcode_white.6afb879f.png&w=48&q=75 1x, /_next/image?url=%2F_next%2Fstatic%2Fmedia%2Faddcode_white.6afb879f.png&w=96&q=75 2x" src="/_next/image?url=%2F_next%2Fstatic%2Fmedia%2Faddcode_white.6afb879f.png&w=96&q=75"/></a><div class="wrapper Search_buttons-spacing__iB2NS BookmarkButton_bookmark-wrapper__xJaOg"><button title="Bookmark this paper"><img alt="Bookmark button" id="bookmark-btn" loading="lazy" width="388" height="512" decoding="async" data-nimg="1" class="BookmarkButton_bookmark-btn-image__gkInJ" style="color:transparent" srcSet="/_next/image?url=%2F_next%2Fstatic%2Fmedia%2Fbookmark_outline.3a3e1c2c.png&w=640&q=75 1x, /_next/image?url=%2F_next%2Fstatic%2Fmedia%2Fbookmark_outline.3a3e1c2c.png&w=828&q=75 2x" src="/_next/image?url=%2F_next%2Fstatic%2Fmedia%2Fbookmark_outline.3a3e1c2c.png&w=828&q=75"/></button></div><div class="wrapper Search_buttons-spacing__iB2NS"><button class="AlertButton_alert-btn__pC8cK" title="Get alerts when new code is available for this paper"><img alt="Alert button" id="alert_btn" loading="lazy" width="512" height="512" decoding="async" data-nimg="1" class="alert-btn-image " style="color:transparent" srcSet="/_next/image?url=%2F_next%2Fstatic%2Fmedia%2Falert_light_mode_icon.b8fca154.png&w=640&q=75 1x, /_next/image?url=%2F_next%2Fstatic%2Fmedia%2Falert_light_mode_icon.b8fca154.png&w=1080&q=75 2x" src="/_next/image?url=%2F_next%2Fstatic%2Fmedia%2Falert_light_mode_icon.b8fca154.png&w=1080&q=75"/></button><svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 106 34" style="margin-left:9px"><g class="sparkles"><path style="animation:sparkle 2s 0s infinite ease-in-out" d="M15.5740361 -10.33344622s1.1875777-6.20179466 2.24320232 0c0 0 5.9378885 1.05562462 0 2.11124925 0 0-1.05562463 6.33374774-2.24320233 0-3.5627331-.6597654-3.29882695-1.31953078 0-2.11124925z"></path><path style="animation:sparkle 1.5s 0.9s infinite ease-in-out" d="M33.5173993 75.97263826s1.03464615-5.40315215 1.95433162 0c0 0 5.17323078.91968547 0 1.83937095 0 0-.91968547 5.51811283-1.95433162 0-3.10393847-.57480342-2.8740171-1.14960684 0-1.83937095z"></path><path style="animation:sparkle 1.7s 0.4s infinite ease-in-out" d="M69.03038108 1.71240809s.73779281-3.852918 1.39360864 0c0 0 3.68896404.65581583 0 1.31163166 0 0-.65581583 3.93489497-1.39360864 0-2.21337842-.4098849-2.04942447-.81976979 0-1.31163166z"></path></g></svg></div></div><span class="Search_publication-date__mLvO2">Mar 21, 2025<br/></span><div class="AuthorLinks_authors-container__fAwXT"><span class="descriptor" style="display:none">Authors:</span><span><a data-testid="paper-result-author" href="/author/Zijian%20Ding">Zijian Ding</a>, </span><span><a data-testid="paper-result-author" href="/author/Michelle%20Brachman">Michelle Brachman</a>, </span><span><a data-testid="paper-result-author" href="/author/Joel%20Chan">Joel Chan</a>, </span><span><a data-testid="paper-result-author" href="/author/Werner%20Geyer">Werner Geyer</a></span></div><div class="Search_paper-detail-page-images-container__FPeuN"></div><p class="Search_paper-content__1CSu5 text-with-links"><span class="descriptor" style="display:none">Abstract:</span>Data analysis encompasses a spectrum of tasks, from high-level conceptual reasoning to lower-level execution. While AI-powered tools increasingly support execution tasks, there remains a need for intelligent assistance in conceptual tasks. This paper investigates the design of an ordered node-link tree interface augmented with AI-generated information hints and visualizations, as a potential shared representation for hypothesis exploration. Through a design probe (n=22), participants generated diagrams averaging 21.82 hypotheses. Our findings showed that the node-link diagram acts as "guardrails" for hypothesis exploration, facilitating structured workflows, providing comprehensive overviews, and enabling efficient backtracking. The AI-generated information hints, particularly visualizations, aided users in transforming abstract ideas into data-backed concepts while reducing cognitive load. We further discuss how node-link diagrams can support both parallel exploration and iterative refinement in hypothesis formulation, potentially enhancing the breadth and depth of human-AI collaborative data analysis.<br/></p><div class="text-with-links"><span></span><span></span></div><div class="Search_search-result-provider__uWcak">Via<img alt="arxiv icon" loading="lazy" width="56" height="25" decoding="async" data-nimg="1" class="Search_arxiv-icon__SXHe4" style="color:transparent" srcSet="/_next/image?url=%2F_next%2Fstatic%2Fmedia%2Farxiv.41e50dc5.png&w=64&q=75 1x, /_next/image?url=%2F_next%2Fstatic%2Fmedia%2Farxiv.41e50dc5.png&w=128&q=75 2x" src="/_next/image?url=%2F_next%2Fstatic%2Fmedia%2Farxiv.41e50dc5.png&w=128&q=75"/></div><div class="Search_paper-link__nVhf_"><svg role="img" height="20" width="24" viewBox="0 0 24 24" xmlns="http://www.w3.org/2000/svg" style="margin-right:5px"><title>Github Icon</title><path d="M12 .297c-6.63 0-12 5.373-12 12 0 5.303 3.438 9.8 8.205 11.385.6.113.82-.258.82-.577 0-.285-.01-1.04-.015-2.04-3.338.724-4.042-1.61-4.042-1.61C4.422 18.07 3.633 17.7 3.633 17.7c-1.087-.744.084-.729.084-.729 1.205.084 1.838 1.236 1.838 1.236 1.07 1.835 2.809 1.305 3.495.998.108-.776.417-1.305.76-1.605-2.665-.3-5.466-1.332-5.466-5.93 0-1.31.465-2.38 1.235-3.22-.135-.303-.54-1.523.105-3.176 0 0 1.005-.322 3.3 1.23.96-.267 1.98-.399 3-.405 1.02.006 2.04.138 3 .405 2.28-1.552 3.285-1.23 3.285-1.23.645 1.653.24 2.873.12 3.176.765.84 1.23 1.91 1.23 3.22 0 4.61-2.805 5.625-5.475 5.92.42.36.81 1.096.81 2.22 0 1.606-.015 2.896-.015 3.286 0 .315.21.69.825.57C20.565 22.092 24 17.592 24 12.297c0-6.627-5.373-12-12-12"></path></svg><svg xmlns="http://www.w3.org/2000/svg" fill="none" viewBox="0 0 24 24" stroke-width="1.5" stroke="currentColor" aria-hidden="true" data-slot="icon" width="22" style="margin-right:10px;margin-top:2px"><path stroke-linecap="round" stroke-linejoin="round" d="M12 6.042A8.967 8.967 0 0 0 6 3.75c-1.052 0-2.062.18-3 .512v14.25A8.987 8.987 0 0 1 6 18c2.305 0 4.408.867 6 2.292m0-14.25a8.966 8.966 0 0 1 6-2.292c1.052 0 2.062.18 3 .512v14.25A8.987 8.987 0 0 0 18 18a8.967 8.967 0 0 0-6 2.292m0-14.25v14.25"></path></svg><a data-testid="paper-result-access-link" href="/paper/the-diagram-is-like-guardrails-structuring">Access Paper or Ask Questions</a></div></section><div class="Search_seperator-line__4FidS"></div></div><section data-hydration-on-demand="true"><div><section data-testid="paper-details-container" class="Search_paper-details-container__Dou2Q"><h2 class="Search_paper-heading__bq58c"><a data-testid="paper-result-title" href="/paper/automating-adjudication-of-cardiovascular"><strong>Automating Adjudication of Cardiovascular Events Using Large Language Models</strong></a></h2><div class="Search_buttons-container__WWw_l"><a href="#" target="_blank" id="request-code-2503.17222" data-testid="view-code-button" class="Search_view-code-link__xOgGF"><button type="button" class="btn Search_view-button__D5D2K Search_buttons-spacing__iB2NS Search_black-button__O7oac Search_view-code-button__8Dk6Z"><svg role="img" height="14" width="24" viewBox="0 0 24 24" xmlns="http://www.w3.org/2000/svg" fill="#fff"><title>Github Icon</title><path d="M12 .297c-6.63 0-12 5.373-12 12 0 5.303 3.438 9.8 8.205 11.385.6.113.82-.258.82-.577 0-.285-.01-1.04-.015-2.04-3.338.724-4.042-1.61-4.042-1.61C4.422 18.07 3.633 17.7 3.633 17.7c-1.087-.744.084-.729.084-.729 1.205.084 1.838 1.236 1.838 1.236 1.07 1.835 2.809 1.305 3.495.998.108-.776.417-1.305.76-1.605-2.665-.3-5.466-1.332-5.466-5.93 0-1.31.465-2.38 1.235-3.22-.135-.303-.54-1.523.105-3.176 0 0 1.005-.322 3.3 1.23.96-.267 1.98-.399 3-.405 1.02.006 2.04.138 3 .405 2.28-1.552 3.285-1.23 3.285-1.23.645 1.653.24 2.873.12 3.176.765.84 1.23 1.91 1.23 3.22 0 4.61-2.805 5.625-5.475 5.92.42.36.81 1.096.81 2.22 0 1.606-.015 2.896-.015 3.286 0 .315.21.69.825.57C20.565 22.092 24 17.592 24 12.297c0-6.627-5.373-12-12-12"></path></svg>Request Code</button></a><button type="button" class="Search_buttons-spacing__iB2NS Search_related-code-btn__F5B3X" data-testid="related-code-button"><span class="descriptor" style="display:none">Code for Similar Papers:</span><img alt="Code for Similar Papers" title="View code for similar papers" loading="lazy" width="37" height="35" decoding="async" data-nimg="1" style="color:transparent" srcSet="/_next/image?url=%2F_next%2Fstatic%2Fmedia%2Frelated_icon_transparent.98f57b13.png&w=48&q=75 1x, /_next/image?url=%2F_next%2Fstatic%2Fmedia%2Frelated_icon_transparent.98f57b13.png&w=96&q=75 2x" src="/_next/image?url=%2F_next%2Fstatic%2Fmedia%2Frelated_icon_transparent.98f57b13.png&w=96&q=75"/></button><a class="Search_buttons-spacing__iB2NS Search_add-code-button__GKwQr" target="_blank" href="/add_code?title=Automating Adjudication of Cardiovascular Events Using Large Language Models&paper_url=http://arxiv.org/abs/2503.17222" rel="nofollow"><img alt="Add code" title="Contribute your code for this paper to the community" loading="lazy" width="36" height="36" decoding="async" data-nimg="1" style="color:transparent" srcSet="/_next/image?url=%2F_next%2Fstatic%2Fmedia%2Faddcode_white.6afb879f.png&w=48&q=75 1x, /_next/image?url=%2F_next%2Fstatic%2Fmedia%2Faddcode_white.6afb879f.png&w=96&q=75 2x" src="/_next/image?url=%2F_next%2Fstatic%2Fmedia%2Faddcode_white.6afb879f.png&w=96&q=75"/></a><div class="wrapper Search_buttons-spacing__iB2NS BookmarkButton_bookmark-wrapper__xJaOg"><button title="Bookmark this paper"><img alt="Bookmark button" id="bookmark-btn" loading="lazy" width="388" height="512" decoding="async" data-nimg="1" class="BookmarkButton_bookmark-btn-image__gkInJ" style="color:transparent" srcSet="/_next/image?url=%2F_next%2Fstatic%2Fmedia%2Fbookmark_outline.3a3e1c2c.png&w=640&q=75 1x, /_next/image?url=%2F_next%2Fstatic%2Fmedia%2Fbookmark_outline.3a3e1c2c.png&w=828&q=75 2x" src="/_next/image?url=%2F_next%2Fstatic%2Fmedia%2Fbookmark_outline.3a3e1c2c.png&w=828&q=75"/></button></div><div class="wrapper Search_buttons-spacing__iB2NS"><button class="AlertButton_alert-btn__pC8cK" title="Get alerts when new code is available for this paper"><img alt="Alert button" id="alert_btn" loading="lazy" width="512" height="512" decoding="async" data-nimg="1" class="alert-btn-image " style="color:transparent" srcSet="/_next/image?url=%2F_next%2Fstatic%2Fmedia%2Falert_light_mode_icon.b8fca154.png&w=640&q=75 1x, /_next/image?url=%2F_next%2Fstatic%2Fmedia%2Falert_light_mode_icon.b8fca154.png&w=1080&q=75 2x" src="/_next/image?url=%2F_next%2Fstatic%2Fmedia%2Falert_light_mode_icon.b8fca154.png&w=1080&q=75"/></button><svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 106 34" style="margin-left:9px"><g class="sparkles"><path style="animation:sparkle 2s 0s infinite ease-in-out" d="M15.5740361 -10.33344622s1.1875777-6.20179466 2.24320232 0c0 0 5.9378885 1.05562462 0 2.11124925 0 0-1.05562463 6.33374774-2.24320233 0-3.5627331-.6597654-3.29882695-1.31953078 0-2.11124925z"></path><path style="animation:sparkle 1.5s 0.9s infinite ease-in-out" d="M33.5173993 75.97263826s1.03464615-5.40315215 1.95433162 0c0 0 5.17323078.91968547 0 1.83937095 0 0-.91968547 5.51811283-1.95433162 0-3.10393847-.57480342-2.8740171-1.14960684 0-1.83937095z"></path><path style="animation:sparkle 1.7s 0.4s infinite ease-in-out" d="M69.03038108 1.71240809s.73779281-3.852918 1.39360864 0c0 0 3.68896404.65581583 0 1.31163166 0 0-.65581583 3.93489497-1.39360864 0-2.21337842-.4098849-2.04942447-.81976979 0-1.31163166z"></path></g></svg></div></div><span class="Search_publication-date__mLvO2">Mar 21, 2025<br/></span><div class="AuthorLinks_authors-container__fAwXT"><span class="descriptor" style="display:none">Authors:</span><span><a data-testid="paper-result-author" href="/author/Sonish%20Sivarajkumar">Sonish Sivarajkumar</a>, </span><span><a data-testid="paper-result-author" href="/author/Kimia%20Ameri">Kimia Ameri</a>, </span><span><a data-testid="paper-result-author" href="/author/Chuqin%20Li">Chuqin Li</a>, </span><span><a data-testid="paper-result-author" href="/author/Yanshan%20Wang">Yanshan Wang</a>, </span><span><a data-testid="paper-result-author" href="/author/Min%20Jiang">Min Jiang</a></span></div><div class="Search_paper-detail-page-images-container__FPeuN"></div><p class="Search_paper-content__1CSu5 text-with-links"><span class="descriptor" style="display:none">Abstract:</span>Cardiovascular events, such as heart attacks and strokes, remain a leading cause of mortality globally, necessitating meticulous monitoring and adjudication in clinical trials. This process, traditionally performed manually by clinical experts, is time-consuming, resource-intensive, and prone to inter-reviewer variability, potentially introducing bias and hindering trial progress. This study addresses these critical limitations by presenting a novel framework for automating the adjudication of cardiovascular events in clinical trials using Large Language Models (LLMs). We developed a two-stage approach: first, employing an LLM-based pipeline for event information extraction from unstructured clinical data and second, using an LLM-based adjudication process guided by a Tree of Thoughts approach and clinical endpoint committee (CEC) guidelines. Using cardiovascular event-specific clinical trial data, the framework achieved an F1-score of 0.82 for event extraction and an accuracy of 0.68 for adjudication. Furthermore, we introduce the CLEART score, a novel, automated metric specifically designed for evaluating the quality of AI-generated clinical reasoning in adjudicating cardiovascular events. This approach demonstrates significant potential for substantially reducing adjudication time and costs while maintaining high-quality, consistent, and auditable outcomes in clinical trials. The reduced variability and enhanced standardization also allow for faster identification and mitigation of risks associated with cardiovascular therapies.<br/></p><div class="text-with-links"><span></span><span></span></div><div class="Search_search-result-provider__uWcak">Via<img alt="arxiv icon" loading="lazy" width="56" height="25" decoding="async" data-nimg="1" class="Search_arxiv-icon__SXHe4" style="color:transparent" srcSet="/_next/image?url=%2F_next%2Fstatic%2Fmedia%2Farxiv.41e50dc5.png&w=64&q=75 1x, /_next/image?url=%2F_next%2Fstatic%2Fmedia%2Farxiv.41e50dc5.png&w=128&q=75 2x" src="/_next/image?url=%2F_next%2Fstatic%2Fmedia%2Farxiv.41e50dc5.png&w=128&q=75"/></div><div class="Search_paper-link__nVhf_"><svg role="img" height="20" width="24" viewBox="0 0 24 24" xmlns="http://www.w3.org/2000/svg" style="margin-right:5px"><title>Github Icon</title><path d="M12 .297c-6.63 0-12 5.373-12 12 0 5.303 3.438 9.8 8.205 11.385.6.113.82-.258.82-.577 0-.285-.01-1.04-.015-2.04-3.338.724-4.042-1.61-4.042-1.61C4.422 18.07 3.633 17.7 3.633 17.7c-1.087-.744.084-.729.084-.729 1.205.084 1.838 1.236 1.838 1.236 1.07 1.835 2.809 1.305 3.495.998.108-.776.417-1.305.76-1.605-2.665-.3-5.466-1.332-5.466-5.93 0-1.31.465-2.38 1.235-3.22-.135-.303-.54-1.523.105-3.176 0 0 1.005-.322 3.3 1.23.96-.267 1.98-.399 3-.405 1.02.006 2.04.138 3 .405 2.28-1.552 3.285-1.23 3.285-1.23.645 1.653.24 2.873.12 3.176.765.84 1.23 1.91 1.23 3.22 0 4.61-2.805 5.625-5.475 5.92.42.36.81 1.096.81 2.22 0 1.606-.015 2.896-.015 3.286 0 .315.21.69.825.57C20.565 22.092 24 17.592 24 12.297c0-6.627-5.373-12-12-12"></path></svg><svg xmlns="http://www.w3.org/2000/svg" fill="none" viewBox="0 0 24 24" stroke-width="1.5" stroke="currentColor" aria-hidden="true" data-slot="icon" width="22" style="margin-right:10px;margin-top:2px"><path stroke-linecap="round" stroke-linejoin="round" d="M12 6.042A8.967 8.967 0 0 0 6 3.75c-1.052 0-2.062.18-3 .512v14.25A8.987 8.987 0 0 1 6 18c2.305 0 4.408.867 6 2.292m0-14.25a8.966 8.966 0 0 1 6-2.292c1.052 0 2.062.18 3 .512v14.25A8.987 8.987 0 0 0 18 18a8.967 8.967 0 0 0-6 2.292m0-14.25v14.25"></path></svg><a data-testid="paper-result-access-link" href="/paper/automating-adjudication-of-cardiovascular">Access Paper or Ask Questions</a></div></section><div class="Search_seperator-line__4FidS"></div></div><div><section data-testid="paper-details-container" class="Search_paper-details-container__Dou2Q"><h2 class="Search_paper-heading__bq58c"><a data-testid="paper-result-title" href="/paper/can-ai-expose-tax-loopholes-towards-a-new"><strong>Can AI expose tax loopholes? Towards a new generation of legal policy assistants</strong></a></h2><div class="Search_buttons-container__WWw_l"><a href="#" target="_blank" id="request-code-2503.17339" data-testid="view-code-button" class="Search_view-code-link__xOgGF"><button type="button" class="btn Search_view-button__D5D2K Search_buttons-spacing__iB2NS Search_black-button__O7oac Search_view-code-button__8Dk6Z"><svg role="img" height="14" width="24" viewBox="0 0 24 24" xmlns="http://www.w3.org/2000/svg" fill="#fff"><title>Github Icon</title><path d="M12 .297c-6.63 0-12 5.373-12 12 0 5.303 3.438 9.8 8.205 11.385.6.113.82-.258.82-.577 0-.285-.01-1.04-.015-2.04-3.338.724-4.042-1.61-4.042-1.61C4.422 18.07 3.633 17.7 3.633 17.7c-1.087-.744.084-.729.084-.729 1.205.084 1.838 1.236 1.838 1.236 1.07 1.835 2.809 1.305 3.495.998.108-.776.417-1.305.76-1.605-2.665-.3-5.466-1.332-5.466-5.93 0-1.31.465-2.38 1.235-3.22-.135-.303-.54-1.523.105-3.176 0 0 1.005-.322 3.3 1.23.96-.267 1.98-.399 3-.405 1.02.006 2.04.138 3 .405 2.28-1.552 3.285-1.23 3.285-1.23.645 1.653.24 2.873.12 3.176.765.84 1.23 1.91 1.23 3.22 0 4.61-2.805 5.625-5.475 5.92.42.36.81 1.096.81 2.22 0 1.606-.015 2.896-.015 3.286 0 .315.21.69.825.57C20.565 22.092 24 17.592 24 12.297c0-6.627-5.373-12-12-12"></path></svg>Request Code</button></a><button type="button" class="Search_buttons-spacing__iB2NS Search_related-code-btn__F5B3X" data-testid="related-code-button"><span class="descriptor" style="display:none">Code for Similar Papers:</span><img alt="Code for Similar Papers" title="View code for similar papers" loading="lazy" width="37" height="35" decoding="async" data-nimg="1" style="color:transparent" srcSet="/_next/image?url=%2F_next%2Fstatic%2Fmedia%2Frelated_icon_transparent.98f57b13.png&w=48&q=75 1x, /_next/image?url=%2F_next%2Fstatic%2Fmedia%2Frelated_icon_transparent.98f57b13.png&w=96&q=75 2x" src="/_next/image?url=%2F_next%2Fstatic%2Fmedia%2Frelated_icon_transparent.98f57b13.png&w=96&q=75"/></button><a class="Search_buttons-spacing__iB2NS Search_add-code-button__GKwQr" target="_blank" href="/add_code?title=Can AI expose tax loopholes? Towards a new generation of legal policy assistants&paper_url=http://arxiv.org/abs/2503.17339" rel="nofollow"><img alt="Add code" title="Contribute your code for this paper to the community" loading="lazy" width="36" height="36" decoding="async" data-nimg="1" style="color:transparent" srcSet="/_next/image?url=%2F_next%2Fstatic%2Fmedia%2Faddcode_white.6afb879f.png&w=48&q=75 1x, /_next/image?url=%2F_next%2Fstatic%2Fmedia%2Faddcode_white.6afb879f.png&w=96&q=75 2x" src="/_next/image?url=%2F_next%2Fstatic%2Fmedia%2Faddcode_white.6afb879f.png&w=96&q=75"/></a><div class="wrapper Search_buttons-spacing__iB2NS BookmarkButton_bookmark-wrapper__xJaOg"><button title="Bookmark this paper"><img alt="Bookmark button" id="bookmark-btn" loading="lazy" width="388" height="512" decoding="async" data-nimg="1" class="BookmarkButton_bookmark-btn-image__gkInJ" style="color:transparent" srcSet="/_next/image?url=%2F_next%2Fstatic%2Fmedia%2Fbookmark_outline.3a3e1c2c.png&w=640&q=75 1x, /_next/image?url=%2F_next%2Fstatic%2Fmedia%2Fbookmark_outline.3a3e1c2c.png&w=828&q=75 2x" src="/_next/image?url=%2F_next%2Fstatic%2Fmedia%2Fbookmark_outline.3a3e1c2c.png&w=828&q=75"/></button></div><div class="wrapper Search_buttons-spacing__iB2NS"><button class="AlertButton_alert-btn__pC8cK" title="Get alerts when new code is available for this paper"><img alt="Alert button" id="alert_btn" loading="lazy" width="512" height="512" decoding="async" data-nimg="1" class="alert-btn-image " style="color:transparent" srcSet="/_next/image?url=%2F_next%2Fstatic%2Fmedia%2Falert_light_mode_icon.b8fca154.png&w=640&q=75 1x, /_next/image?url=%2F_next%2Fstatic%2Fmedia%2Falert_light_mode_icon.b8fca154.png&w=1080&q=75 2x" src="/_next/image?url=%2F_next%2Fstatic%2Fmedia%2Falert_light_mode_icon.b8fca154.png&w=1080&q=75"/></button><svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 106 34" style="margin-left:9px"><g class="sparkles"><path style="animation:sparkle 2s 0s infinite ease-in-out" d="M15.5740361 -10.33344622s1.1875777-6.20179466 2.24320232 0c0 0 5.9378885 1.05562462 0 2.11124925 0 0-1.05562463 6.33374774-2.24320233 0-3.5627331-.6597654-3.29882695-1.31953078 0-2.11124925z"></path><path style="animation:sparkle 1.5s 0.9s infinite ease-in-out" d="M33.5173993 75.97263826s1.03464615-5.40315215 1.95433162 0c0 0 5.17323078.91968547 0 1.83937095 0 0-.91968547 5.51811283-1.95433162 0-3.10393847-.57480342-2.8740171-1.14960684 0-1.83937095z"></path><path style="animation:sparkle 1.7s 0.4s infinite ease-in-out" d="M69.03038108 1.71240809s.73779281-3.852918 1.39360864 0c0 0 3.68896404.65581583 0 1.31163166 0 0-.65581583 3.93489497-1.39360864 0-2.21337842-.4098849-2.04942447-.81976979 0-1.31163166z"></path></g></svg></div></div><span class="Search_publication-date__mLvO2">Mar 21, 2025<br/></span><div class="AuthorLinks_authors-container__fAwXT"><span class="descriptor" style="display:none">Authors:</span><span><a data-testid="paper-result-author" href="/author/Peter%20Fratri%C4%8D">Peter Fratri膷</a>, </span><span><a data-testid="paper-result-author" href="/author/Nils%20Holzenberger">Nils Holzenberger</a>, </span><span><a data-testid="paper-result-author" href="/author/David%20Restrepo%20Amariles">David Restrepo Amariles</a></span></div><div class="Search_paper-detail-page-images-container__FPeuN"></div><p class="Search_paper-content__1CSu5 text-with-links"><span class="descriptor" style="display:none">Abstract:</span>The legislative process is the backbone of a state built on solid institutions. Yet, due to the complexity of laws -- particularly tax law -- policies may lead to inequality and social tensions. In this study, we introduce a novel prototype system designed to address the issues of tax loopholes and tax avoidance. Our hybrid solution integrates a natural language interface with a domain-specific language tailored for planning. We demonstrate on a case study how tax loopholes and avoidance schemes can be exposed. We conclude that our prototype can help enhance social welfare by systematically identifying and addressing tax gaps stemming from loopholes.<br/></p><div class="text-with-links"><span></span><span><em>* <!-- -->13 pages, 6 figures<!-- -->聽</em><br/></span></div><div class="Search_search-result-provider__uWcak">Via<img alt="arxiv icon" loading="lazy" width="56" height="25" decoding="async" data-nimg="1" class="Search_arxiv-icon__SXHe4" style="color:transparent" srcSet="/_next/image?url=%2F_next%2Fstatic%2Fmedia%2Farxiv.41e50dc5.png&w=64&q=75 1x, /_next/image?url=%2F_next%2Fstatic%2Fmedia%2Farxiv.41e50dc5.png&w=128&q=75 2x" src="/_next/image?url=%2F_next%2Fstatic%2Fmedia%2Farxiv.41e50dc5.png&w=128&q=75"/></div><div class="Search_paper-link__nVhf_"><svg role="img" height="20" width="24" viewBox="0 0 24 24" xmlns="http://www.w3.org/2000/svg" style="margin-right:5px"><title>Github Icon</title><path d="M12 .297c-6.63 0-12 5.373-12 12 0 5.303 3.438 9.8 8.205 11.385.6.113.82-.258.82-.577 0-.285-.01-1.04-.015-2.04-3.338.724-4.042-1.61-4.042-1.61C4.422 18.07 3.633 17.7 3.633 17.7c-1.087-.744.084-.729.084-.729 1.205.084 1.838 1.236 1.838 1.236 1.07 1.835 2.809 1.305 3.495.998.108-.776.417-1.305.76-1.605-2.665-.3-5.466-1.332-5.466-5.93 0-1.31.465-2.38 1.235-3.22-.135-.303-.54-1.523.105-3.176 0 0 1.005-.322 3.3 1.23.96-.267 1.98-.399 3-.405 1.02.006 2.04.138 3 .405 2.28-1.552 3.285-1.23 3.285-1.23.645 1.653.24 2.873.12 3.176.765.84 1.23 1.91 1.23 3.22 0 4.61-2.805 5.625-5.475 5.92.42.36.81 1.096.81 2.22 0 1.606-.015 2.896-.015 3.286 0 .315.21.69.825.57C20.565 22.092 24 17.592 24 12.297c0-6.627-5.373-12-12-12"></path></svg><svg xmlns="http://www.w3.org/2000/svg" fill="none" viewBox="0 0 24 24" stroke-width="1.5" stroke="currentColor" aria-hidden="true" data-slot="icon" width="22" style="margin-right:10px;margin-top:2px"><path stroke-linecap="round" stroke-linejoin="round" d="M12 6.042A8.967 8.967 0 0 0 6 3.75c-1.052 0-2.062.18-3 .512v14.25A8.987 8.987 0 0 1 6 18c2.305 0 4.408.867 6 2.292m0-14.25a8.966 8.966 0 0 1 6-2.292c1.052 0 2.062.18 3 .512v14.25A8.987 8.987 0 0 0 18 18a8.967 8.967 0 0 0-6 2.292m0-14.25v14.25"></path></svg><a data-testid="paper-result-access-link" href="/paper/can-ai-expose-tax-loopholes-towards-a-new">Access Paper or Ask Questions</a></div></section><div class="Search_seperator-line__4FidS"></div></div><div><section data-testid="paper-details-container" class="Search_paper-details-container__Dou2Q"><h2 class="Search_paper-heading__bq58c"><a data-testid="paper-result-title" href="/paper/position-interactive-generative-video-as-next"><strong>Position: Interactive Generative Video as Next-Generation Game Engine</strong></a></h2><div class="Search_buttons-container__WWw_l"><a href="#" target="_blank" id="request-code-2503.17359" data-testid="view-code-button" class="Search_view-code-link__xOgGF"><button type="button" class="btn Search_view-button__D5D2K Search_buttons-spacing__iB2NS Search_black-button__O7oac Search_view-code-button__8Dk6Z"><svg role="img" height="14" width="24" viewBox="0 0 24 24" xmlns="http://www.w3.org/2000/svg" fill="#fff"><title>Github Icon</title><path d="M12 .297c-6.63 0-12 5.373-12 12 0 5.303 3.438 9.8 8.205 11.385.6.113.82-.258.82-.577 0-.285-.01-1.04-.015-2.04-3.338.724-4.042-1.61-4.042-1.61C4.422 18.07 3.633 17.7 3.633 17.7c-1.087-.744.084-.729.084-.729 1.205.084 1.838 1.236 1.838 1.236 1.07 1.835 2.809 1.305 3.495.998.108-.776.417-1.305.76-1.605-2.665-.3-5.466-1.332-5.466-5.93 0-1.31.465-2.38 1.235-3.22-.135-.303-.54-1.523.105-3.176 0 0 1.005-.322 3.3 1.23.96-.267 1.98-.399 3-.405 1.02.006 2.04.138 3 .405 2.28-1.552 3.285-1.23 3.285-1.23.645 1.653.24 2.873.12 3.176.765.84 1.23 1.91 1.23 3.22 0 4.61-2.805 5.625-5.475 5.92.42.36.81 1.096.81 2.22 0 1.606-.015 2.896-.015 3.286 0 .315.21.69.825.57C20.565 22.092 24 17.592 24 12.297c0-6.627-5.373-12-12-12"></path></svg>Request Code</button></a><button type="button" class="Search_buttons-spacing__iB2NS Search_related-code-btn__F5B3X" data-testid="related-code-button"><span class="descriptor" style="display:none">Code for Similar Papers:</span><img alt="Code for Similar Papers" title="View code for similar papers" loading="lazy" width="37" height="35" decoding="async" data-nimg="1" style="color:transparent" srcSet="/_next/image?url=%2F_next%2Fstatic%2Fmedia%2Frelated_icon_transparent.98f57b13.png&w=48&q=75 1x, /_next/image?url=%2F_next%2Fstatic%2Fmedia%2Frelated_icon_transparent.98f57b13.png&w=96&q=75 2x" src="/_next/image?url=%2F_next%2Fstatic%2Fmedia%2Frelated_icon_transparent.98f57b13.png&w=96&q=75"/></button><a class="Search_buttons-spacing__iB2NS Search_add-code-button__GKwQr" target="_blank" href="/add_code?title=Position: Interactive Generative Video as Next-Generation Game Engine&paper_url=http://arxiv.org/abs/2503.17359" rel="nofollow"><img alt="Add code" title="Contribute your code for this paper to the community" loading="lazy" width="36" height="36" decoding="async" data-nimg="1" style="color:transparent" srcSet="/_next/image?url=%2F_next%2Fstatic%2Fmedia%2Faddcode_white.6afb879f.png&w=48&q=75 1x, /_next/image?url=%2F_next%2Fstatic%2Fmedia%2Faddcode_white.6afb879f.png&w=96&q=75 2x" src="/_next/image?url=%2F_next%2Fstatic%2Fmedia%2Faddcode_white.6afb879f.png&w=96&q=75"/></a><div class="wrapper Search_buttons-spacing__iB2NS BookmarkButton_bookmark-wrapper__xJaOg"><button title="Bookmark this paper"><img alt="Bookmark button" id="bookmark-btn" loading="lazy" width="388" height="512" decoding="async" data-nimg="1" class="BookmarkButton_bookmark-btn-image__gkInJ" style="color:transparent" srcSet="/_next/image?url=%2F_next%2Fstatic%2Fmedia%2Fbookmark_outline.3a3e1c2c.png&w=640&q=75 1x, /_next/image?url=%2F_next%2Fstatic%2Fmedia%2Fbookmark_outline.3a3e1c2c.png&w=828&q=75 2x" src="/_next/image?url=%2F_next%2Fstatic%2Fmedia%2Fbookmark_outline.3a3e1c2c.png&w=828&q=75"/></button></div><div class="wrapper Search_buttons-spacing__iB2NS"><button class="AlertButton_alert-btn__pC8cK" title="Get alerts when new code is available for this paper"><img alt="Alert button" id="alert_btn" loading="lazy" width="512" height="512" decoding="async" data-nimg="1" class="alert-btn-image " style="color:transparent" srcSet="/_next/image?url=%2F_next%2Fstatic%2Fmedia%2Falert_light_mode_icon.b8fca154.png&w=640&q=75 1x, /_next/image?url=%2F_next%2Fstatic%2Fmedia%2Falert_light_mode_icon.b8fca154.png&w=1080&q=75 2x" src="/_next/image?url=%2F_next%2Fstatic%2Fmedia%2Falert_light_mode_icon.b8fca154.png&w=1080&q=75"/></button><svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 106 34" style="margin-left:9px"><g class="sparkles"><path style="animation:sparkle 2s 0s infinite ease-in-out" d="M15.5740361 -10.33344622s1.1875777-6.20179466 2.24320232 0c0 0 5.9378885 1.05562462 0 2.11124925 0 0-1.05562463 6.33374774-2.24320233 0-3.5627331-.6597654-3.29882695-1.31953078 0-2.11124925z"></path><path style="animation:sparkle 1.5s 0.9s infinite ease-in-out" d="M33.5173993 75.97263826s1.03464615-5.40315215 1.95433162 0c0 0 5.17323078.91968547 0 1.83937095 0 0-.91968547 5.51811283-1.95433162 0-3.10393847-.57480342-2.8740171-1.14960684 0-1.83937095z"></path><path style="animation:sparkle 1.7s 0.4s infinite ease-in-out" d="M69.03038108 1.71240809s.73779281-3.852918 1.39360864 0c0 0 3.68896404.65581583 0 1.31163166 0 0-.65581583 3.93489497-1.39360864 0-2.21337842-.4098849-2.04942447-.81976979 0-1.31163166z"></path></g></svg></div></div><span class="Search_publication-date__mLvO2">Mar 21, 2025<br/></span><div class="AuthorLinks_authors-container__fAwXT"><span class="descriptor" style="display:none">Authors:</span><span><a data-testid="paper-result-author" href="/author/Jiwen%20Yu">Jiwen Yu</a>, </span><span><a data-testid="paper-result-author" href="/author/Yiran%20Qin">Yiran Qin</a>, </span><span><a data-testid="paper-result-author" href="/author/Haoxuan%20Che">Haoxuan Che</a>, </span><span><a data-testid="paper-result-author" href="/author/Quande%20Liu">Quande Liu</a>, </span><span><a data-testid="paper-result-author" href="/author/Xintao%20Wang">Xintao Wang</a>, </span><span><a data-testid="paper-result-author" href="/author/Pengfei%20Wan">Pengfei Wan</a>, </span><span><a data-testid="paper-result-author" href="/author/Di%20Zhang">Di Zhang</a>, </span><span><a data-testid="paper-result-author" href="/author/Xihui%20Liu">Xihui Liu</a></span></div><div class="Search_paper-detail-page-images-container__FPeuN"></div><p class="Search_paper-content__1CSu5 text-with-links"><span class="descriptor" style="display:none">Abstract:</span>Modern game development faces significant challenges in creativity and cost due to predetermined content in traditional game engines. Recent breakthroughs in video generation models, capable of synthesizing realistic and interactive virtual environments, present an opportunity to revolutionize game creation. In this position paper, we propose Interactive Generative Video (IGV) as the foundation for Generative Game Engines (GGE), enabling unlimited novel content generation in next-generation gaming. GGE leverages IGV's unique strengths in unlimited high-quality content synthesis, physics-aware world modeling, user-controlled interactivity, long-term memory capabilities, and causal reasoning. We present a comprehensive framework detailing GGE's core modules and a hierarchical maturity roadmap (L0-L4) to guide its evolution. Our work charts a new course for game development in the AI era, envisioning a future where AI-powered generative systems fundamentally reshape how games are created and experienced.<br/></p><div class="text-with-links"><span></span><span></span></div><div class="Search_search-result-provider__uWcak">Via<img alt="arxiv icon" loading="lazy" width="56" height="25" decoding="async" data-nimg="1" class="Search_arxiv-icon__SXHe4" style="color:transparent" srcSet="/_next/image?url=%2F_next%2Fstatic%2Fmedia%2Farxiv.41e50dc5.png&w=64&q=75 1x, /_next/image?url=%2F_next%2Fstatic%2Fmedia%2Farxiv.41e50dc5.png&w=128&q=75 2x" src="/_next/image?url=%2F_next%2Fstatic%2Fmedia%2Farxiv.41e50dc5.png&w=128&q=75"/></div><div class="Search_paper-link__nVhf_"><svg role="img" height="20" width="24" viewBox="0 0 24 24" xmlns="http://www.w3.org/2000/svg" style="margin-right:5px"><title>Github Icon</title><path d="M12 .297c-6.63 0-12 5.373-12 12 0 5.303 3.438 9.8 8.205 11.385.6.113.82-.258.82-.577 0-.285-.01-1.04-.015-2.04-3.338.724-4.042-1.61-4.042-1.61C4.422 18.07 3.633 17.7 3.633 17.7c-1.087-.744.084-.729.084-.729 1.205.084 1.838 1.236 1.838 1.236 1.07 1.835 2.809 1.305 3.495.998.108-.776.417-1.305.76-1.605-2.665-.3-5.466-1.332-5.466-5.93 0-1.31.465-2.38 1.235-3.22-.135-.303-.54-1.523.105-3.176 0 0 1.005-.322 3.3 1.23.96-.267 1.98-.399 3-.405 1.02.006 2.04.138 3 .405 2.28-1.552 3.285-1.23 3.285-1.23.645 1.653.24 2.873.12 3.176.765.84 1.23 1.91 1.23 3.22 0 4.61-2.805 5.625-5.475 5.92.42.36.81 1.096.81 2.22 0 1.606-.015 2.896-.015 3.286 0 .315.21.69.825.57C20.565 22.092 24 17.592 24 12.297c0-6.627-5.373-12-12-12"></path></svg><svg xmlns="http://www.w3.org/2000/svg" fill="none" viewBox="0 0 24 24" stroke-width="1.5" stroke="currentColor" aria-hidden="true" data-slot="icon" width="22" style="margin-right:10px;margin-top:2px"><path stroke-linecap="round" stroke-linejoin="round" d="M12 6.042A8.967 8.967 0 0 0 6 3.75c-1.052 0-2.062.18-3 .512v14.25A8.987 8.987 0 0 1 6 18c2.305 0 4.408.867 6 2.292m0-14.25a8.966 8.966 0 0 1 6-2.292c1.052 0 2.062.18 3 .512v14.25A8.987 8.987 0 0 0 18 18a8.967 8.967 0 0 0-6 2.292m0-14.25v14.25"></path></svg><a data-testid="paper-result-access-link" href="/paper/position-interactive-generative-video-as-next">Access Paper or Ask Questions</a></div></section><div class="Search_seperator-line__4FidS"></div></div><div><section data-testid="paper-details-container" class="Search_paper-details-container__Dou2Q"><h2 class="Search_paper-heading__bq58c"><a data-testid="paper-result-title" href="/paper/greeniq-a-deep-search-platform-for"><strong>GreenIQ: A Deep Search Platform for Comprehensive Carbon Market Analysis and Automated Report Generation</strong></a></h2><div class="Search_buttons-container__WWw_l"><a href="#" target="_blank" id="request-code-2503.16041" data-testid="view-code-button" class="Search_view-code-link__xOgGF"><button type="button" class="btn Search_view-button__D5D2K Search_buttons-spacing__iB2NS Search_black-button__O7oac Search_view-code-button__8Dk6Z"><svg role="img" height="14" width="24" viewBox="0 0 24 24" xmlns="http://www.w3.org/2000/svg" fill="#fff"><title>Github Icon</title><path d="M12 .297c-6.63 0-12 5.373-12 12 0 5.303 3.438 9.8 8.205 11.385.6.113.82-.258.82-.577 0-.285-.01-1.04-.015-2.04-3.338.724-4.042-1.61-4.042-1.61C4.422 18.07 3.633 17.7 3.633 17.7c-1.087-.744.084-.729.084-.729 1.205.084 1.838 1.236 1.838 1.236 1.07 1.835 2.809 1.305 3.495.998.108-.776.417-1.305.76-1.605-2.665-.3-5.466-1.332-5.466-5.93 0-1.31.465-2.38 1.235-3.22-.135-.303-.54-1.523.105-3.176 0 0 1.005-.322 3.3 1.23.96-.267 1.98-.399 3-.405 1.02.006 2.04.138 3 .405 2.28-1.552 3.285-1.23 3.285-1.23.645 1.653.24 2.873.12 3.176.765.84 1.23 1.91 1.23 3.22 0 4.61-2.805 5.625-5.475 5.92.42.36.81 1.096.81 2.22 0 1.606-.015 2.896-.015 3.286 0 .315.21.69.825.57C20.565 22.092 24 17.592 24 12.297c0-6.627-5.373-12-12-12"></path></svg>Request Code</button></a><button type="button" class="Search_buttons-spacing__iB2NS Search_related-code-btn__F5B3X" data-testid="related-code-button"><span class="descriptor" style="display:none">Code for Similar Papers:</span><img alt="Code for Similar Papers" title="View code for similar papers" loading="lazy" width="37" height="35" decoding="async" data-nimg="1" style="color:transparent" srcSet="/_next/image?url=%2F_next%2Fstatic%2Fmedia%2Frelated_icon_transparent.98f57b13.png&w=48&q=75 1x, /_next/image?url=%2F_next%2Fstatic%2Fmedia%2Frelated_icon_transparent.98f57b13.png&w=96&q=75 2x" src="/_next/image?url=%2F_next%2Fstatic%2Fmedia%2Frelated_icon_transparent.98f57b13.png&w=96&q=75"/></button><a class="Search_buttons-spacing__iB2NS Search_add-code-button__GKwQr" target="_blank" href="/add_code?title=GreenIQ: A Deep Search Platform for Comprehensive Carbon Market Analysis and Automated Report Generation&paper_url=http://arxiv.org/abs/2503.16041" rel="nofollow"><img alt="Add code" title="Contribute your code for this paper to the community" loading="lazy" width="36" height="36" decoding="async" data-nimg="1" style="color:transparent" srcSet="/_next/image?url=%2F_next%2Fstatic%2Fmedia%2Faddcode_white.6afb879f.png&w=48&q=75 1x, /_next/image?url=%2F_next%2Fstatic%2Fmedia%2Faddcode_white.6afb879f.png&w=96&q=75 2x" src="/_next/image?url=%2F_next%2Fstatic%2Fmedia%2Faddcode_white.6afb879f.png&w=96&q=75"/></a><div class="wrapper Search_buttons-spacing__iB2NS BookmarkButton_bookmark-wrapper__xJaOg"><button title="Bookmark this paper"><img alt="Bookmark button" id="bookmark-btn" loading="lazy" width="388" height="512" decoding="async" data-nimg="1" class="BookmarkButton_bookmark-btn-image__gkInJ" style="color:transparent" srcSet="/_next/image?url=%2F_next%2Fstatic%2Fmedia%2Fbookmark_outline.3a3e1c2c.png&w=640&q=75 1x, /_next/image?url=%2F_next%2Fstatic%2Fmedia%2Fbookmark_outline.3a3e1c2c.png&w=828&q=75 2x" src="/_next/image?url=%2F_next%2Fstatic%2Fmedia%2Fbookmark_outline.3a3e1c2c.png&w=828&q=75"/></button></div><div class="wrapper Search_buttons-spacing__iB2NS"><button class="AlertButton_alert-btn__pC8cK" title="Get alerts when new code is available for this paper"><img alt="Alert button" id="alert_btn" loading="lazy" width="512" height="512" decoding="async" data-nimg="1" class="alert-btn-image " style="color:transparent" srcSet="/_next/image?url=%2F_next%2Fstatic%2Fmedia%2Falert_light_mode_icon.b8fca154.png&w=640&q=75 1x, /_next/image?url=%2F_next%2Fstatic%2Fmedia%2Falert_light_mode_icon.b8fca154.png&w=1080&q=75 2x" src="/_next/image?url=%2F_next%2Fstatic%2Fmedia%2Falert_light_mode_icon.b8fca154.png&w=1080&q=75"/></button><svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 106 34" style="margin-left:9px"><g class="sparkles"><path style="animation:sparkle 2s 0s infinite ease-in-out" d="M15.5740361 -10.33344622s1.1875777-6.20179466 2.24320232 0c0 0 5.9378885 1.05562462 0 2.11124925 0 0-1.05562463 6.33374774-2.24320233 0-3.5627331-.6597654-3.29882695-1.31953078 0-2.11124925z"></path><path style="animation:sparkle 1.5s 0.9s infinite ease-in-out" d="M33.5173993 75.97263826s1.03464615-5.40315215 1.95433162 0c0 0 5.17323078.91968547 0 1.83937095 0 0-.91968547 5.51811283-1.95433162 0-3.10393847-.57480342-2.8740171-1.14960684 0-1.83937095z"></path><path style="animation:sparkle 1.7s 0.4s infinite ease-in-out" d="M69.03038108 1.71240809s.73779281-3.852918 1.39360864 0c0 0 3.68896404.65581583 0 1.31163166 0 0-.65581583 3.93489497-1.39360864 0-2.21337842-.4098849-2.04942447-.81976979 0-1.31163166z"></path></g></svg></div></div><span class="Search_publication-date__mLvO2">Mar 21, 2025<br/></span><div class="AuthorLinks_authors-container__fAwXT"><span class="descriptor" style="display:none">Authors:</span><span><a data-testid="paper-result-author" href="/author/Oluwole%20Fagbohun">Oluwole Fagbohun</a>, </span><span><a data-testid="paper-result-author" href="/author/Sai%20Yashwanth">Sai Yashwanth</a>, </span><span><a data-testid="paper-result-author" href="/author/Akinyemi%20Sadeeq%20Akintola">Akinyemi Sadeeq Akintola</a>, </span><span><a data-testid="paper-result-author" href="/author/Ifeoluwa%20Wurola">Ifeoluwa Wurola</a>, </span><span><a data-testid="paper-result-author" href="/author/Lanre%20Shittu">Lanre Shittu</a>, </span><span><a data-testid="paper-result-author" href="/author/Aniema%20Inyang">Aniema Inyang</a>, </span><span><a data-testid="paper-result-author" href="/author/Oluwatimilehin%20Odubola">Oluwatimilehin Odubola</a>, </span><span><a data-testid="paper-result-author" href="/author/Udodirim%20Offia">Udodirim Offia</a>, </span><span><a data-testid="paper-result-author" href="/author/Said%20Olanrewaju">Said Olanrewaju</a>, </span><span><a data-testid="paper-result-author" href="/author/Ogidan%20Toluwaleke">Ogidan Toluwaleke</a></span><span>(<a href="/s/Generative%20AI#">+<!-- -->2<!-- --> more</a>)</span></div><div class="Search_paper-detail-page-images-container__FPeuN"></div><p class="Search_paper-content__1CSu5 text-with-links"><span class="descriptor" style="display:none">Abstract:</span>This study introduces GreenIQ, an AI-powered deep search platform designed to revolutionise carbon market intelligence through autonomous analysis and automated report generation. Carbon markets operate across diverse regulatory landscapes, generating vast amounts of heterogeneous data from policy documents, industry reports, academic literature, and real-time trading platforms. Traditional research approaches remain labour-intensive, slow, and difficult to scale. GreenIQ addresses these limitations through a multi-agent architecture powered by Large Language Models (LLMs), integrating five specialised AI agents: a Main Researcher Agent for intelligent information retrieval, a Report Writing Agent for structured synthesis, a Final Reviewer Agent for accuracy verification, a Data Visualisation Agent for enhanced interpretability, and a Translator Agent for multilingual adaptation. The system achieves seamless integration of structured and unstructured information with AI-driven citation verification, ensuring high transparency and reliability. GreenIQ delivers a 99.2\% reduction in processing time and a 99.7\% cost reduction compared to traditional research methodologies. A novel AI persona-based evaluation framework involving 16 domain-specific AI personas highlights its superior cross-jurisdictional analytical capabilities and regulatory insight generation. GreenIQ sets new standards in AI-driven research synthesis, policy analysis, and sustainability finance by streamlining carbon market research. It offers an efficient and scalable framework for environmental and financial intelligence, enabling more accurate, timely, and cost-effective decision-making in complex regulatory landscapes<br/></p><div class="text-with-links"><span></span><span><em>* <!-- -->12 Pages, 1 figure<!-- -->聽</em><br/></span></div><div class="Search_search-result-provider__uWcak">Via<img alt="arxiv icon" loading="lazy" width="56" height="25" decoding="async" data-nimg="1" class="Search_arxiv-icon__SXHe4" style="color:transparent" srcSet="/_next/image?url=%2F_next%2Fstatic%2Fmedia%2Farxiv.41e50dc5.png&w=64&q=75 1x, /_next/image?url=%2F_next%2Fstatic%2Fmedia%2Farxiv.41e50dc5.png&w=128&q=75 2x" src="/_next/image?url=%2F_next%2Fstatic%2Fmedia%2Farxiv.41e50dc5.png&w=128&q=75"/></div><div class="Search_paper-link__nVhf_"><svg role="img" height="20" width="24" viewBox="0 0 24 24" xmlns="http://www.w3.org/2000/svg" style="margin-right:5px"><title>Github Icon</title><path d="M12 .297c-6.63 0-12 5.373-12 12 0 5.303 3.438 9.8 8.205 11.385.6.113.82-.258.82-.577 0-.285-.01-1.04-.015-2.04-3.338.724-4.042-1.61-4.042-1.61C4.422 18.07 3.633 17.7 3.633 17.7c-1.087-.744.084-.729.084-.729 1.205.084 1.838 1.236 1.838 1.236 1.07 1.835 2.809 1.305 3.495.998.108-.776.417-1.305.76-1.605-2.665-.3-5.466-1.332-5.466-5.93 0-1.31.465-2.38 1.235-3.22-.135-.303-.54-1.523.105-3.176 0 0 1.005-.322 3.3 1.23.96-.267 1.98-.399 3-.405 1.02.006 2.04.138 3 .405 2.28-1.552 3.285-1.23 3.285-1.23.645 1.653.24 2.873.12 3.176.765.84 1.23 1.91 1.23 3.22 0 4.61-2.805 5.625-5.475 5.92.42.36.81 1.096.81 2.22 0 1.606-.015 2.896-.015 3.286 0 .315.21.69.825.57C20.565 22.092 24 17.592 24 12.297c0-6.627-5.373-12-12-12"></path></svg><svg xmlns="http://www.w3.org/2000/svg" fill="none" viewBox="0 0 24 24" stroke-width="1.5" stroke="currentColor" aria-hidden="true" data-slot="icon" width="22" style="margin-right:10px;margin-top:2px"><path stroke-linecap="round" stroke-linejoin="round" d="M12 6.042A8.967 8.967 0 0 0 6 3.75c-1.052 0-2.062.18-3 .512v14.25A8.987 8.987 0 0 1 6 18c2.305 0 4.408.867 6 2.292m0-14.25a8.966 8.966 0 0 1 6-2.292c1.052 0 2.062.18 3 .512v14.25A8.987 8.987 0 0 0 18 18a8.967 8.967 0 0 0-6 2.292m0-14.25v14.25"></path></svg><a data-testid="paper-result-access-link" href="/paper/greeniq-a-deep-search-platform-for">Access Paper or Ask Questions</a></div></section><div class="Search_seperator-line__4FidS"></div></div><div><section data-testid="paper-details-container" class="Search_paper-details-container__Dou2Q"><h2 class="Search_paper-heading__bq58c"><a data-testid="paper-result-title" href="/paper/advancing-problem-based-learning-in"><strong>Advancing Problem-Based Learning in Biomedical Engineering in the Era of Generative AI</strong></a></h2><div class="Search_buttons-container__WWw_l"><a href="#" target="_blank" id="request-code-2503.16558" data-testid="view-code-button" class="Search_view-code-link__xOgGF"><button type="button" class="btn Search_view-button__D5D2K Search_buttons-spacing__iB2NS Search_black-button__O7oac Search_view-code-button__8Dk6Z"><svg role="img" height="14" width="24" viewBox="0 0 24 24" xmlns="http://www.w3.org/2000/svg" fill="#fff"><title>Github Icon</title><path d="M12 .297c-6.63 0-12 5.373-12 12 0 5.303 3.438 9.8 8.205 11.385.6.113.82-.258.82-.577 0-.285-.01-1.04-.015-2.04-3.338.724-4.042-1.61-4.042-1.61C4.422 18.07 3.633 17.7 3.633 17.7c-1.087-.744.084-.729.084-.729 1.205.084 1.838 1.236 1.838 1.236 1.07 1.835 2.809 1.305 3.495.998.108-.776.417-1.305.76-1.605-2.665-.3-5.466-1.332-5.466-5.93 0-1.31.465-2.38 1.235-3.22-.135-.303-.54-1.523.105-3.176 0 0 1.005-.322 3.3 1.23.96-.267 1.98-.399 3-.405 1.02.006 2.04.138 3 .405 2.28-1.552 3.285-1.23 3.285-1.23.645 1.653.24 2.873.12 3.176.765.84 1.23 1.91 1.23 3.22 0 4.61-2.805 5.625-5.475 5.92.42.36.81 1.096.81 2.22 0 1.606-.015 2.896-.015 3.286 0 .315.21.69.825.57C20.565 22.092 24 17.592 24 12.297c0-6.627-5.373-12-12-12"></path></svg>Request Code</button></a><button type="button" class="Search_buttons-spacing__iB2NS Search_related-code-btn__F5B3X" data-testid="related-code-button"><span class="descriptor" style="display:none">Code for Similar Papers:</span><img alt="Code for Similar Papers" title="View code for similar papers" loading="lazy" width="37" height="35" decoding="async" data-nimg="1" style="color:transparent" srcSet="/_next/image?url=%2F_next%2Fstatic%2Fmedia%2Frelated_icon_transparent.98f57b13.png&w=48&q=75 1x, /_next/image?url=%2F_next%2Fstatic%2Fmedia%2Frelated_icon_transparent.98f57b13.png&w=96&q=75 2x" src="/_next/image?url=%2F_next%2Fstatic%2Fmedia%2Frelated_icon_transparent.98f57b13.png&w=96&q=75"/></button><a class="Search_buttons-spacing__iB2NS Search_add-code-button__GKwQr" target="_blank" href="/add_code?title=Advancing Problem-Based Learning in Biomedical Engineering in the Era of Generative AI&paper_url=http://arxiv.org/abs/2503.16558" rel="nofollow"><img alt="Add code" title="Contribute your code for this paper to the community" loading="lazy" width="36" height="36" decoding="async" data-nimg="1" style="color:transparent" srcSet="/_next/image?url=%2F_next%2Fstatic%2Fmedia%2Faddcode_white.6afb879f.png&w=48&q=75 1x, /_next/image?url=%2F_next%2Fstatic%2Fmedia%2Faddcode_white.6afb879f.png&w=96&q=75 2x" src="/_next/image?url=%2F_next%2Fstatic%2Fmedia%2Faddcode_white.6afb879f.png&w=96&q=75"/></a><div class="wrapper Search_buttons-spacing__iB2NS BookmarkButton_bookmark-wrapper__xJaOg"><button title="Bookmark this paper"><img alt="Bookmark button" id="bookmark-btn" loading="lazy" width="388" height="512" decoding="async" data-nimg="1" class="BookmarkButton_bookmark-btn-image__gkInJ" style="color:transparent" srcSet="/_next/image?url=%2F_next%2Fstatic%2Fmedia%2Fbookmark_outline.3a3e1c2c.png&w=640&q=75 1x, /_next/image?url=%2F_next%2Fstatic%2Fmedia%2Fbookmark_outline.3a3e1c2c.png&w=828&q=75 2x" src="/_next/image?url=%2F_next%2Fstatic%2Fmedia%2Fbookmark_outline.3a3e1c2c.png&w=828&q=75"/></button></div><div class="wrapper Search_buttons-spacing__iB2NS"><button class="AlertButton_alert-btn__pC8cK" title="Get alerts when new code is available for this paper"><img alt="Alert button" id="alert_btn" loading="lazy" width="512" height="512" decoding="async" data-nimg="1" class="alert-btn-image " style="color:transparent" srcSet="/_next/image?url=%2F_next%2Fstatic%2Fmedia%2Falert_light_mode_icon.b8fca154.png&w=640&q=75 1x, /_next/image?url=%2F_next%2Fstatic%2Fmedia%2Falert_light_mode_icon.b8fca154.png&w=1080&q=75 2x" src="/_next/image?url=%2F_next%2Fstatic%2Fmedia%2Falert_light_mode_icon.b8fca154.png&w=1080&q=75"/></button><svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 106 34" style="margin-left:9px"><g class="sparkles"><path style="animation:sparkle 2s 0s infinite ease-in-out" d="M15.5740361 -10.33344622s1.1875777-6.20179466 2.24320232 0c0 0 5.9378885 1.05562462 0 2.11124925 0 0-1.05562463 6.33374774-2.24320233 0-3.5627331-.6597654-3.29882695-1.31953078 0-2.11124925z"></path><path style="animation:sparkle 1.5s 0.9s infinite ease-in-out" d="M33.5173993 75.97263826s1.03464615-5.40315215 1.95433162 0c0 0 5.17323078.91968547 0 1.83937095 0 0-.91968547 5.51811283-1.95433162 0-3.10393847-.57480342-2.8740171-1.14960684 0-1.83937095z"></path><path style="animation:sparkle 1.7s 0.4s infinite ease-in-out" d="M69.03038108 1.71240809s.73779281-3.852918 1.39360864 0c0 0 3.68896404.65581583 0 1.31163166 0 0-.65581583 3.93489497-1.39360864 0-2.21337842-.4098849-2.04942447-.81976979 0-1.31163166z"></path></g></svg></div></div><span class="Search_publication-date__mLvO2">Mar 20, 2025<br/></span><div class="AuthorLinks_authors-container__fAwXT"><span class="descriptor" style="display:none">Authors:</span><span><a data-testid="paper-result-author" href="/author/Micky%20C.%20Nnamdi">Micky C. Nnamdi</a>, </span><span><a data-testid="paper-result-author" href="/author/J.%20Ben%20Tamo">J. Ben Tamo</a>, </span><span><a data-testid="paper-result-author" href="/author/Wenqi%20Shi">Wenqi Shi</a>, </span><span><a data-testid="paper-result-author" href="/author/May%20D.%20Wang">May D. Wang</a></span></div><div class="Search_paper-detail-page-images-container__FPeuN"></div><p class="Search_paper-content__1CSu5 text-with-links"><span class="descriptor" style="display:none">Abstract:</span>Problem-Based Learning (PBL) has significantly impacted biomedical engineering (BME) education since its introduction in the early 2000s, effectively enhancing critical thinking and real-world knowledge application among students. With biomedical engineering rapidly converging with artificial intelligence (AI), integrating effective AI education into established curricula has become challenging yet increasingly necessary. Recent advancements, including AI's recognition by the 2024 Nobel Prize, have highlighted the importance of training students comprehensively in biomedical AI. However, effective biomedical AI education faces substantial obstacles, such as diverse student backgrounds, limited personalized mentoring, constrained computational resources, and difficulties in safely scaling hands-on practical experiments due to privacy and ethical concerns associated with biomedical data. To overcome these issues, we conducted a three-year (2021-2023) case study implementing an advanced PBL framework tailored specifically for biomedical AI education, involving 92 undergraduate and 156 graduate students from the joint Biomedical Engineering program of Georgia Institute of Technology and Emory University. Our approach emphasizes collaborative, interdisciplinary problem-solving through authentic biomedical AI challenges. The implementation led to measurable improvements in learning outcomes, evidenced by high research productivity (16 student-authored publications), consistently positive peer evaluations, and successful development of innovative computational methods addressing real biomedical challenges. Additionally, we examined the role of generative AI both as a teaching subject and an educational support tool within the PBL framework. Our study presents a practical and scalable roadmap for biomedical engineering departments aiming to integrate robust AI education into their curricula.<br/></p><div class="text-with-links"><span></span><span></span></div><div class="Search_search-result-provider__uWcak">Via<img alt="arxiv icon" loading="lazy" width="56" height="25" decoding="async" data-nimg="1" class="Search_arxiv-icon__SXHe4" style="color:transparent" srcSet="/_next/image?url=%2F_next%2Fstatic%2Fmedia%2Farxiv.41e50dc5.png&w=64&q=75 1x, /_next/image?url=%2F_next%2Fstatic%2Fmedia%2Farxiv.41e50dc5.png&w=128&q=75 2x" src="/_next/image?url=%2F_next%2Fstatic%2Fmedia%2Farxiv.41e50dc5.png&w=128&q=75"/></div><div class="Search_paper-link__nVhf_"><svg role="img" height="20" width="24" viewBox="0 0 24 24" xmlns="http://www.w3.org/2000/svg" style="margin-right:5px"><title>Github Icon</title><path d="M12 .297c-6.63 0-12 5.373-12 12 0 5.303 3.438 9.8 8.205 11.385.6.113.82-.258.82-.577 0-.285-.01-1.04-.015-2.04-3.338.724-4.042-1.61-4.042-1.61C4.422 18.07 3.633 17.7 3.633 17.7c-1.087-.744.084-.729.084-.729 1.205.084 1.838 1.236 1.838 1.236 1.07 1.835 2.809 1.305 3.495.998.108-.776.417-1.305.76-1.605-2.665-.3-5.466-1.332-5.466-5.93 0-1.31.465-2.38 1.235-3.22-.135-.303-.54-1.523.105-3.176 0 0 1.005-.322 3.3 1.23.96-.267 1.98-.399 3-.405 1.02.006 2.04.138 3 .405 2.28-1.552 3.285-1.23 3.285-1.23.645 1.653.24 2.873.12 3.176.765.84 1.23 1.91 1.23 3.22 0 4.61-2.805 5.625-5.475 5.92.42.36.81 1.096.81 2.22 0 1.606-.015 2.896-.015 3.286 0 .315.21.69.825.57C20.565 22.092 24 17.592 24 12.297c0-6.627-5.373-12-12-12"></path></svg><svg xmlns="http://www.w3.org/2000/svg" fill="none" viewBox="0 0 24 24" stroke-width="1.5" stroke="currentColor" aria-hidden="true" data-slot="icon" width="22" style="margin-right:10px;margin-top:2px"><path stroke-linecap="round" stroke-linejoin="round" d="M12 6.042A8.967 8.967 0 0 0 6 3.75c-1.052 0-2.062.18-3 .512v14.25A8.987 8.987 0 0 1 6 18c2.305 0 4.408.867 6 2.292m0-14.25a8.966 8.966 0 0 1 6-2.292c1.052 0 2.062.18 3 .512v14.25A8.987 8.987 0 0 0 18 18a8.967 8.967 0 0 0-6 2.292m0-14.25v14.25"></path></svg><a data-testid="paper-result-access-link" href="/paper/advancing-problem-based-learning-in">Access Paper or Ask Questions</a></div></section><div class="Search_seperator-line__4FidS"></div></div><div><section data-testid="paper-details-container" class="Search_paper-details-container__Dou2Q"><h2 class="Search_paper-heading__bq58c"><a data-testid="paper-result-title" href="/paper/world-knowledge-from-ai-image-generation-for"><strong>World Knowledge from AI Image Generation for Robot Control</strong></a></h2><div class="Search_buttons-container__WWw_l"><a href="#" target="_blank" id="request-code-2503.16579" data-testid="view-code-button" class="Search_view-code-link__xOgGF"><button type="button" class="btn Search_view-button__D5D2K Search_buttons-spacing__iB2NS Search_black-button__O7oac Search_view-code-button__8Dk6Z"><svg role="img" height="14" width="24" viewBox="0 0 24 24" xmlns="http://www.w3.org/2000/svg" fill="#fff"><title>Github Icon</title><path d="M12 .297c-6.63 0-12 5.373-12 12 0 5.303 3.438 9.8 8.205 11.385.6.113.82-.258.82-.577 0-.285-.01-1.04-.015-2.04-3.338.724-4.042-1.61-4.042-1.61C4.422 18.07 3.633 17.7 3.633 17.7c-1.087-.744.084-.729.084-.729 1.205.084 1.838 1.236 1.838 1.236 1.07 1.835 2.809 1.305 3.495.998.108-.776.417-1.305.76-1.605-2.665-.3-5.466-1.332-5.466-5.93 0-1.31.465-2.38 1.235-3.22-.135-.303-.54-1.523.105-3.176 0 0 1.005-.322 3.3 1.23.96-.267 1.98-.399 3-.405 1.02.006 2.04.138 3 .405 2.28-1.552 3.285-1.23 3.285-1.23.645 1.653.24 2.873.12 3.176.765.84 1.23 1.91 1.23 3.22 0 4.61-2.805 5.625-5.475 5.92.42.36.81 1.096.81 2.22 0 1.606-.015 2.896-.015 3.286 0 .315.21.69.825.57C20.565 22.092 24 17.592 24 12.297c0-6.627-5.373-12-12-12"></path></svg>Request Code</button></a><button type="button" class="Search_buttons-spacing__iB2NS Search_related-code-btn__F5B3X" data-testid="related-code-button"><span class="descriptor" style="display:none">Code for Similar Papers:</span><img alt="Code for Similar Papers" title="View code for similar papers" loading="lazy" width="37" height="35" decoding="async" data-nimg="1" style="color:transparent" srcSet="/_next/image?url=%2F_next%2Fstatic%2Fmedia%2Frelated_icon_transparent.98f57b13.png&w=48&q=75 1x, /_next/image?url=%2F_next%2Fstatic%2Fmedia%2Frelated_icon_transparent.98f57b13.png&w=96&q=75 2x" src="/_next/image?url=%2F_next%2Fstatic%2Fmedia%2Frelated_icon_transparent.98f57b13.png&w=96&q=75"/></button><a class="Search_buttons-spacing__iB2NS Search_add-code-button__GKwQr" target="_blank" href="/add_code?title=World Knowledge from AI Image Generation for Robot Control&paper_url=http://arxiv.org/abs/2503.16579" rel="nofollow"><img alt="Add code" title="Contribute your code for this paper to the community" loading="lazy" width="36" height="36" decoding="async" data-nimg="1" style="color:transparent" srcSet="/_next/image?url=%2F_next%2Fstatic%2Fmedia%2Faddcode_white.6afb879f.png&w=48&q=75 1x, /_next/image?url=%2F_next%2Fstatic%2Fmedia%2Faddcode_white.6afb879f.png&w=96&q=75 2x" src="/_next/image?url=%2F_next%2Fstatic%2Fmedia%2Faddcode_white.6afb879f.png&w=96&q=75"/></a><div class="wrapper Search_buttons-spacing__iB2NS BookmarkButton_bookmark-wrapper__xJaOg"><button title="Bookmark this paper"><img alt="Bookmark button" id="bookmark-btn" loading="lazy" width="388" height="512" decoding="async" data-nimg="1" class="BookmarkButton_bookmark-btn-image__gkInJ" style="color:transparent" srcSet="/_next/image?url=%2F_next%2Fstatic%2Fmedia%2Fbookmark_outline.3a3e1c2c.png&w=640&q=75 1x, /_next/image?url=%2F_next%2Fstatic%2Fmedia%2Fbookmark_outline.3a3e1c2c.png&w=828&q=75 2x" src="/_next/image?url=%2F_next%2Fstatic%2Fmedia%2Fbookmark_outline.3a3e1c2c.png&w=828&q=75"/></button></div><div class="wrapper Search_buttons-spacing__iB2NS"><button class="AlertButton_alert-btn__pC8cK" title="Get alerts when new code is available for this paper"><img alt="Alert button" id="alert_btn" loading="lazy" width="512" height="512" decoding="async" data-nimg="1" class="alert-btn-image " style="color:transparent" srcSet="/_next/image?url=%2F_next%2Fstatic%2Fmedia%2Falert_light_mode_icon.b8fca154.png&w=640&q=75 1x, /_next/image?url=%2F_next%2Fstatic%2Fmedia%2Falert_light_mode_icon.b8fca154.png&w=1080&q=75 2x" src="/_next/image?url=%2F_next%2Fstatic%2Fmedia%2Falert_light_mode_icon.b8fca154.png&w=1080&q=75"/></button><svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 106 34" style="margin-left:9px"><g class="sparkles"><path style="animation:sparkle 2s 0s infinite ease-in-out" d="M15.5740361 -10.33344622s1.1875777-6.20179466 2.24320232 0c0 0 5.9378885 1.05562462 0 2.11124925 0 0-1.05562463 6.33374774-2.24320233 0-3.5627331-.6597654-3.29882695-1.31953078 0-2.11124925z"></path><path style="animation:sparkle 1.5s 0.9s infinite ease-in-out" d="M33.5173993 75.97263826s1.03464615-5.40315215 1.95433162 0c0 0 5.17323078.91968547 0 1.83937095 0 0-.91968547 5.51811283-1.95433162 0-3.10393847-.57480342-2.8740171-1.14960684 0-1.83937095z"></path><path style="animation:sparkle 1.7s 0.4s infinite ease-in-out" d="M69.03038108 1.71240809s.73779281-3.852918 1.39360864 0c0 0 3.68896404.65581583 0 1.31163166 0 0-.65581583 3.93489497-1.39360864 0-2.21337842-.4098849-2.04942447-.81976979 0-1.31163166z"></path></g></svg></div></div><span class="Search_publication-date__mLvO2">Mar 20, 2025<br/></span><div class="AuthorLinks_authors-container__fAwXT"><span class="descriptor" style="display:none">Authors:</span><span><a data-testid="paper-result-author" href="/author/Jonas%20Krumme">Jonas Krumme</a>, </span><span><a data-testid="paper-result-author" href="/author/Christoph%20Zetzsche">Christoph Zetzsche</a></span></div><div class="Search_paper-detail-page-images-container__FPeuN"></div><p class="Search_paper-content__1CSu5 text-with-links"><span class="descriptor" style="display:none">Abstract:</span>When interacting with the world robots face a number of difficult questions, having to make decisions when given under-specified tasks where they need to make choices, often without clearly defined right and wrong answers. Humans, on the other hand, can often rely on their knowledge and experience to fill in the gaps. For example, the simple task of organizing newly bought produce into the fridge involves deciding where to put each thing individually, how to arrange them together meaningfully, e.g. putting related things together, all while there is no clear right and wrong way to accomplish this task. We could encode all this information on how to do such things explicitly into the robots' knowledge base, but this can quickly become overwhelming, considering the number of potential tasks and circumstances the robot could encounter. However, images of the real world often implicitly encode answers to such questions and can show which configurations of objects are meaningful or are usually used by humans. An image of a full fridge can give a lot of information about how things are usually arranged in relation to each other and the full fridge at large. Modern generative systems are capable of generating plausible images of the real world and can be conditioned on the environment in which the robot operates. Here we investigate the idea of using the implicit knowledge about the world of modern generative AI systems given by their ability to generate convincing images of the real world to solve under-specified tasks.<br/></p><div class="text-with-links"><span></span><span><em>* <!-- -->9 pages, 10 figures<!-- -->聽</em><br/></span></div><div class="Search_search-result-provider__uWcak">Via<img alt="arxiv icon" loading="lazy" width="56" height="25" decoding="async" data-nimg="1" class="Search_arxiv-icon__SXHe4" style="color:transparent" srcSet="/_next/image?url=%2F_next%2Fstatic%2Fmedia%2Farxiv.41e50dc5.png&w=64&q=75 1x, /_next/image?url=%2F_next%2Fstatic%2Fmedia%2Farxiv.41e50dc5.png&w=128&q=75 2x" src="/_next/image?url=%2F_next%2Fstatic%2Fmedia%2Farxiv.41e50dc5.png&w=128&q=75"/></div><div class="Search_paper-link__nVhf_"><svg role="img" height="20" width="24" viewBox="0 0 24 24" xmlns="http://www.w3.org/2000/svg" style="margin-right:5px"><title>Github Icon</title><path d="M12 .297c-6.63 0-12 5.373-12 12 0 5.303 3.438 9.8 8.205 11.385.6.113.82-.258.82-.577 0-.285-.01-1.04-.015-2.04-3.338.724-4.042-1.61-4.042-1.61C4.422 18.07 3.633 17.7 3.633 17.7c-1.087-.744.084-.729.084-.729 1.205.084 1.838 1.236 1.838 1.236 1.07 1.835 2.809 1.305 3.495.998.108-.776.417-1.305.76-1.605-2.665-.3-5.466-1.332-5.466-5.93 0-1.31.465-2.38 1.235-3.22-.135-.303-.54-1.523.105-3.176 0 0 1.005-.322 3.3 1.23.96-.267 1.98-.399 3-.405 1.02.006 2.04.138 3 .405 2.28-1.552 3.285-1.23 3.285-1.23.645 1.653.24 2.873.12 3.176.765.84 1.23 1.91 1.23 3.22 0 4.61-2.805 5.625-5.475 5.92.42.36.81 1.096.81 2.22 0 1.606-.015 2.896-.015 3.286 0 .315.21.69.825.57C20.565 22.092 24 17.592 24 12.297c0-6.627-5.373-12-12-12"></path></svg><svg xmlns="http://www.w3.org/2000/svg" fill="none" viewBox="0 0 24 24" stroke-width="1.5" stroke="currentColor" aria-hidden="true" data-slot="icon" width="22" style="margin-right:10px;margin-top:2px"><path stroke-linecap="round" stroke-linejoin="round" d="M12 6.042A8.967 8.967 0 0 0 6 3.75c-1.052 0-2.062.18-3 .512v14.25A8.987 8.987 0 0 1 6 18c2.305 0 4.408.867 6 2.292m0-14.25a8.966 8.966 0 0 1 6-2.292c1.052 0 2.062.18 3 .512v14.25A8.987 8.987 0 0 0 18 18a8.967 8.967 0 0 0-6 2.292m0-14.25v14.25"></path></svg><a data-testid="paper-result-access-link" href="/paper/world-knowledge-from-ai-image-generation-for">Access Paper or Ask Questions</a></div></section><div class="Search_seperator-line__4FidS"></div></div><div><section data-testid="paper-details-container" class="Search_paper-details-container__Dou2Q"><h2 class="Search_paper-heading__bq58c"><a data-testid="paper-result-title" href="/paper/conversational-user-ai-intervention-a-study"><strong>Conversational User-AI Intervention: A Study on Prompt Rewriting for Improved LLM Response Generation</strong></a></h2><div class="Search_buttons-container__WWw_l"><a href="#" target="_blank" id="request-code-2503.16789" data-testid="view-code-button" class="Search_view-code-link__xOgGF"><button type="button" class="btn Search_view-button__D5D2K Search_buttons-spacing__iB2NS Search_black-button__O7oac Search_view-code-button__8Dk6Z"><svg role="img" height="14" width="24" viewBox="0 0 24 24" xmlns="http://www.w3.org/2000/svg" fill="#fff"><title>Github Icon</title><path d="M12 .297c-6.63 0-12 5.373-12 12 0 5.303 3.438 9.8 8.205 11.385.6.113.82-.258.82-.577 0-.285-.01-1.04-.015-2.04-3.338.724-4.042-1.61-4.042-1.61C4.422 18.07 3.633 17.7 3.633 17.7c-1.087-.744.084-.729.084-.729 1.205.084 1.838 1.236 1.838 1.236 1.07 1.835 2.809 1.305 3.495.998.108-.776.417-1.305.76-1.605-2.665-.3-5.466-1.332-5.466-5.93 0-1.31.465-2.38 1.235-3.22-.135-.303-.54-1.523.105-3.176 0 0 1.005-.322 3.3 1.23.96-.267 1.98-.399 3-.405 1.02.006 2.04.138 3 .405 2.28-1.552 3.285-1.23 3.285-1.23.645 1.653.24 2.873.12 3.176.765.84 1.23 1.91 1.23 3.22 0 4.61-2.805 5.625-5.475 5.92.42.36.81 1.096.81 2.22 0 1.606-.015 2.896-.015 3.286 0 .315.21.69.825.57C20.565 22.092 24 17.592 24 12.297c0-6.627-5.373-12-12-12"></path></svg>Request Code</button></a><button type="button" class="Search_buttons-spacing__iB2NS Search_related-code-btn__F5B3X" data-testid="related-code-button"><span class="descriptor" style="display:none">Code for Similar Papers:</span><img alt="Code for Similar Papers" title="View code for similar papers" loading="lazy" width="37" height="35" decoding="async" data-nimg="1" style="color:transparent" srcSet="/_next/image?url=%2F_next%2Fstatic%2Fmedia%2Frelated_icon_transparent.98f57b13.png&w=48&q=75 1x, /_next/image?url=%2F_next%2Fstatic%2Fmedia%2Frelated_icon_transparent.98f57b13.png&w=96&q=75 2x" src="/_next/image?url=%2F_next%2Fstatic%2Fmedia%2Frelated_icon_transparent.98f57b13.png&w=96&q=75"/></button><a class="Search_buttons-spacing__iB2NS Search_add-code-button__GKwQr" target="_blank" href="/add_code?title=Conversational User-AI Intervention: A Study on Prompt Rewriting for Improved LLM Response Generation&paper_url=http://arxiv.org/abs/2503.16789" rel="nofollow"><img alt="Add code" title="Contribute your code for this paper to the community" loading="lazy" width="36" height="36" decoding="async" data-nimg="1" style="color:transparent" srcSet="/_next/image?url=%2F_next%2Fstatic%2Fmedia%2Faddcode_white.6afb879f.png&w=48&q=75 1x, /_next/image?url=%2F_next%2Fstatic%2Fmedia%2Faddcode_white.6afb879f.png&w=96&q=75 2x" src="/_next/image?url=%2F_next%2Fstatic%2Fmedia%2Faddcode_white.6afb879f.png&w=96&q=75"/></a><div class="wrapper Search_buttons-spacing__iB2NS BookmarkButton_bookmark-wrapper__xJaOg"><button title="Bookmark this paper"><img alt="Bookmark button" id="bookmark-btn" loading="lazy" width="388" height="512" decoding="async" data-nimg="1" class="BookmarkButton_bookmark-btn-image__gkInJ" style="color:transparent" srcSet="/_next/image?url=%2F_next%2Fstatic%2Fmedia%2Fbookmark_outline.3a3e1c2c.png&w=640&q=75 1x, /_next/image?url=%2F_next%2Fstatic%2Fmedia%2Fbookmark_outline.3a3e1c2c.png&w=828&q=75 2x" src="/_next/image?url=%2F_next%2Fstatic%2Fmedia%2Fbookmark_outline.3a3e1c2c.png&w=828&q=75"/></button></div><div class="wrapper Search_buttons-spacing__iB2NS"><button class="AlertButton_alert-btn__pC8cK" title="Get alerts when new code is available for this paper"><img alt="Alert button" id="alert_btn" loading="lazy" width="512" height="512" decoding="async" data-nimg="1" class="alert-btn-image " style="color:transparent" srcSet="/_next/image?url=%2F_next%2Fstatic%2Fmedia%2Falert_light_mode_icon.b8fca154.png&w=640&q=75 1x, /_next/image?url=%2F_next%2Fstatic%2Fmedia%2Falert_light_mode_icon.b8fca154.png&w=1080&q=75 2x" src="/_next/image?url=%2F_next%2Fstatic%2Fmedia%2Falert_light_mode_icon.b8fca154.png&w=1080&q=75"/></button><svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 106 34" style="margin-left:9px"><g class="sparkles"><path style="animation:sparkle 2s 0s infinite ease-in-out" d="M15.5740361 -10.33344622s1.1875777-6.20179466 2.24320232 0c0 0 5.9378885 1.05562462 0 2.11124925 0 0-1.05562463 6.33374774-2.24320233 0-3.5627331-.6597654-3.29882695-1.31953078 0-2.11124925z"></path><path style="animation:sparkle 1.5s 0.9s infinite ease-in-out" d="M33.5173993 75.97263826s1.03464615-5.40315215 1.95433162 0c0 0 5.17323078.91968547 0 1.83937095 0 0-.91968547 5.51811283-1.95433162 0-3.10393847-.57480342-2.8740171-1.14960684 0-1.83937095z"></path><path style="animation:sparkle 1.7s 0.4s infinite ease-in-out" d="M69.03038108 1.71240809s.73779281-3.852918 1.39360864 0c0 0 3.68896404.65581583 0 1.31163166 0 0-.65581583 3.93489497-1.39360864 0-2.21337842-.4098849-2.04942447-.81976979 0-1.31163166z"></path></g></svg></div></div><span class="Search_publication-date__mLvO2">Mar 21, 2025<br/></span><div class="AuthorLinks_authors-container__fAwXT"><span class="descriptor" style="display:none">Authors:</span><span><a data-testid="paper-result-author" href="/author/Rupak%20Sarkar">Rupak Sarkar</a>, </span><span><a data-testid="paper-result-author" href="/author/Bahareh%20Sarrafzadeh">Bahareh Sarrafzadeh</a>, </span><span><a data-testid="paper-result-author" href="/author/Nirupama%20Chandrasekaran">Nirupama Chandrasekaran</a>, </span><span><a data-testid="paper-result-author" href="/author/Nagu%20Rangan">Nagu Rangan</a>, </span><span><a data-testid="paper-result-author" href="/author/Philip%20Resnik">Philip Resnik</a>, </span><span><a data-testid="paper-result-author" href="/author/Longqi%20Yang">Longqi Yang</a>, </span><span><a data-testid="paper-result-author" href="/author/Sujay%20Kumar%20Jauhar">Sujay Kumar Jauhar</a></span></div><div class="Search_paper-detail-page-images-container__FPeuN"></div><p class="Search_paper-content__1CSu5 text-with-links"><span class="descriptor" style="display:none">Abstract:</span>Human-LLM conversations are increasingly becoming more pervasive in peoples' professional and personal lives, yet many users still struggle to elicit helpful responses from LLM Chatbots. One of the reasons for this issue is users' lack of understanding in crafting effective prompts that accurately convey their information needs. Meanwhile, the existence of real-world conversational datasets on the one hand, and the text understanding faculties of LLMs on the other, present a unique opportunity to study this problem, and its potential solutions at scale. Thus, in this paper we present the first LLM-centric study of real human-AI chatbot conversations, focused on investigating aspects in which user queries fall short of expressing information needs, and the potential of using LLMs to rewrite suboptimal user prompts. Our findings demonstrate that rephrasing ineffective prompts can elicit better responses from a conversational system, while preserving the user's original intent. Notably, the performance of rewrites improves in longer conversations, where contextual inferences about user needs can be made more accurately. Additionally, we observe that LLMs often need to -- and inherently do -- make \emph{plausible} assumptions about a user's intentions and goals when interpreting prompts. Our findings largely hold true across conversational domains, user intents, and LLMs of varying sizes and families, indicating the promise of using prompt rewriting as a solution for better human-AI interactions.<br/></p><div class="text-with-links"><span></span><span><em>* <!-- -->8 pages, ACL style<!-- -->聽</em><br/></span></div><div class="Search_search-result-provider__uWcak">Via<img alt="arxiv icon" loading="lazy" width="56" height="25" decoding="async" data-nimg="1" class="Search_arxiv-icon__SXHe4" style="color:transparent" srcSet="/_next/image?url=%2F_next%2Fstatic%2Fmedia%2Farxiv.41e50dc5.png&w=64&q=75 1x, /_next/image?url=%2F_next%2Fstatic%2Fmedia%2Farxiv.41e50dc5.png&w=128&q=75 2x" src="/_next/image?url=%2F_next%2Fstatic%2Fmedia%2Farxiv.41e50dc5.png&w=128&q=75"/></div><div class="Search_paper-link__nVhf_"><svg role="img" height="20" width="24" viewBox="0 0 24 24" xmlns="http://www.w3.org/2000/svg" style="margin-right:5px"><title>Github Icon</title><path d="M12 .297c-6.63 0-12 5.373-12 12 0 5.303 3.438 9.8 8.205 11.385.6.113.82-.258.82-.577 0-.285-.01-1.04-.015-2.04-3.338.724-4.042-1.61-4.042-1.61C4.422 18.07 3.633 17.7 3.633 17.7c-1.087-.744.084-.729.084-.729 1.205.084 1.838 1.236 1.838 1.236 1.07 1.835 2.809 1.305 3.495.998.108-.776.417-1.305.76-1.605-2.665-.3-5.466-1.332-5.466-5.93 0-1.31.465-2.38 1.235-3.22-.135-.303-.54-1.523.105-3.176 0 0 1.005-.322 3.3 1.23.96-.267 1.98-.399 3-.405 1.02.006 2.04.138 3 .405 2.28-1.552 3.285-1.23 3.285-1.23.645 1.653.24 2.873.12 3.176.765.84 1.23 1.91 1.23 3.22 0 4.61-2.805 5.625-5.475 5.92.42.36.81 1.096.81 2.22 0 1.606-.015 2.896-.015 3.286 0 .315.21.69.825.57C20.565 22.092 24 17.592 24 12.297c0-6.627-5.373-12-12-12"></path></svg><svg xmlns="http://www.w3.org/2000/svg" fill="none" viewBox="0 0 24 24" stroke-width="1.5" stroke="currentColor" aria-hidden="true" data-slot="icon" width="22" style="margin-right:10px;margin-top:2px"><path stroke-linecap="round" stroke-linejoin="round" d="M12 6.042A8.967 8.967 0 0 0 6 3.75c-1.052 0-2.062.18-3 .512v14.25A8.987 8.987 0 0 1 6 18c2.305 0 4.408.867 6 2.292m0-14.25a8.966 8.966 0 0 1 6-2.292c1.052 0 2.062.18 3 .512v14.25A8.987 8.987 0 0 0 18 18a8.967 8.967 0 0 0-6 2.292m0-14.25v14.25"></path></svg><a data-testid="paper-result-access-link" href="/paper/conversational-user-ai-intervention-a-study">Access Paper or Ask Questions</a></div></section><div class="Search_seperator-line__4FidS"></div></div><div><section data-testid="paper-details-container" class="Search_paper-details-container__Dou2Q"><h2 class="Search_paper-heading__bq58c"><a data-testid="paper-result-title" href="/paper/truthlens-explainable-deepfake-detection-for"><strong>TruthLens: Explainable DeepFake Detection for Face Manipulated and Fully Synthetic Data</strong></a></h2><div class="Search_buttons-container__WWw_l"><a href="#" target="_blank" id="request-code-2503.15867" data-testid="view-code-button" class="Search_view-code-link__xOgGF"><button type="button" class="btn Search_view-button__D5D2K Search_buttons-spacing__iB2NS Search_black-button__O7oac Search_view-code-button__8Dk6Z"><svg role="img" height="14" width="24" viewBox="0 0 24 24" xmlns="http://www.w3.org/2000/svg" fill="#fff"><title>Github Icon</title><path d="M12 .297c-6.63 0-12 5.373-12 12 0 5.303 3.438 9.8 8.205 11.385.6.113.82-.258.82-.577 0-.285-.01-1.04-.015-2.04-3.338.724-4.042-1.61-4.042-1.61C4.422 18.07 3.633 17.7 3.633 17.7c-1.087-.744.084-.729.084-.729 1.205.084 1.838 1.236 1.838 1.236 1.07 1.835 2.809 1.305 3.495.998.108-.776.417-1.305.76-1.605-2.665-.3-5.466-1.332-5.466-5.93 0-1.31.465-2.38 1.235-3.22-.135-.303-.54-1.523.105-3.176 0 0 1.005-.322 3.3 1.23.96-.267 1.98-.399 3-.405 1.02.006 2.04.138 3 .405 2.28-1.552 3.285-1.23 3.285-1.23.645 1.653.24 2.873.12 3.176.765.84 1.23 1.91 1.23 3.22 0 4.61-2.805 5.625-5.475 5.92.42.36.81 1.096.81 2.22 0 1.606-.015 2.896-.015 3.286 0 .315.21.69.825.57C20.565 22.092 24 17.592 24 12.297c0-6.627-5.373-12-12-12"></path></svg>Request Code</button></a><button type="button" class="Search_buttons-spacing__iB2NS Search_related-code-btn__F5B3X" data-testid="related-code-button"><span class="descriptor" style="display:none">Code for Similar Papers:</span><img alt="Code for Similar Papers" title="View code for similar papers" loading="lazy" width="37" height="35" decoding="async" data-nimg="1" style="color:transparent" srcSet="/_next/image?url=%2F_next%2Fstatic%2Fmedia%2Frelated_icon_transparent.98f57b13.png&w=48&q=75 1x, /_next/image?url=%2F_next%2Fstatic%2Fmedia%2Frelated_icon_transparent.98f57b13.png&w=96&q=75 2x" src="/_next/image?url=%2F_next%2Fstatic%2Fmedia%2Frelated_icon_transparent.98f57b13.png&w=96&q=75"/></button><a class="Search_buttons-spacing__iB2NS Search_add-code-button__GKwQr" target="_blank" href="/add_code?title=TruthLens: Explainable DeepFake Detection for Face Manipulated and Fully Synthetic Data&paper_url=http://arxiv.org/abs/2503.15867" rel="nofollow"><img alt="Add code" title="Contribute your code for this paper to the community" loading="lazy" width="36" height="36" decoding="async" data-nimg="1" style="color:transparent" srcSet="/_next/image?url=%2F_next%2Fstatic%2Fmedia%2Faddcode_white.6afb879f.png&w=48&q=75 1x, /_next/image?url=%2F_next%2Fstatic%2Fmedia%2Faddcode_white.6afb879f.png&w=96&q=75 2x" src="/_next/image?url=%2F_next%2Fstatic%2Fmedia%2Faddcode_white.6afb879f.png&w=96&q=75"/></a><div class="wrapper Search_buttons-spacing__iB2NS BookmarkButton_bookmark-wrapper__xJaOg"><button title="Bookmark this paper"><img alt="Bookmark button" id="bookmark-btn" loading="lazy" width="388" height="512" decoding="async" data-nimg="1" class="BookmarkButton_bookmark-btn-image__gkInJ" style="color:transparent" srcSet="/_next/image?url=%2F_next%2Fstatic%2Fmedia%2Fbookmark_outline.3a3e1c2c.png&w=640&q=75 1x, /_next/image?url=%2F_next%2Fstatic%2Fmedia%2Fbookmark_outline.3a3e1c2c.png&w=828&q=75 2x" src="/_next/image?url=%2F_next%2Fstatic%2Fmedia%2Fbookmark_outline.3a3e1c2c.png&w=828&q=75"/></button></div><div class="wrapper Search_buttons-spacing__iB2NS"><button class="AlertButton_alert-btn__pC8cK" title="Get alerts when new code is available for this paper"><img alt="Alert button" id="alert_btn" loading="lazy" width="512" height="512" decoding="async" data-nimg="1" class="alert-btn-image " style="color:transparent" srcSet="/_next/image?url=%2F_next%2Fstatic%2Fmedia%2Falert_light_mode_icon.b8fca154.png&w=640&q=75 1x, /_next/image?url=%2F_next%2Fstatic%2Fmedia%2Falert_light_mode_icon.b8fca154.png&w=1080&q=75 2x" src="/_next/image?url=%2F_next%2Fstatic%2Fmedia%2Falert_light_mode_icon.b8fca154.png&w=1080&q=75"/></button><svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 106 34" style="margin-left:9px"><g class="sparkles"><path style="animation:sparkle 2s 0s infinite ease-in-out" d="M15.5740361 -10.33344622s1.1875777-6.20179466 2.24320232 0c0 0 5.9378885 1.05562462 0 2.11124925 0 0-1.05562463 6.33374774-2.24320233 0-3.5627331-.6597654-3.29882695-1.31953078 0-2.11124925z"></path><path style="animation:sparkle 1.5s 0.9s infinite ease-in-out" d="M33.5173993 75.97263826s1.03464615-5.40315215 1.95433162 0c0 0 5.17323078.91968547 0 1.83937095 0 0-.91968547 5.51811283-1.95433162 0-3.10393847-.57480342-2.8740171-1.14960684 0-1.83937095z"></path><path style="animation:sparkle 1.7s 0.4s infinite ease-in-out" d="M69.03038108 1.71240809s.73779281-3.852918 1.39360864 0c0 0 3.68896404.65581583 0 1.31163166 0 0-.65581583 3.93489497-1.39360864 0-2.21337842-.4098849-2.04942447-.81976979 0-1.31163166z"></path></g></svg></div></div><span class="Search_publication-date__mLvO2">Mar 20, 2025<br/></span><div class="AuthorLinks_authors-container__fAwXT"><span class="descriptor" style="display:none">Authors:</span><span><a data-testid="paper-result-author" href="/author/Rohit%20Kundu">Rohit Kundu</a>, </span><span><a data-testid="paper-result-author" href="/author/Athula%20Balachandran">Athula Balachandran</a>, </span><span><a data-testid="paper-result-author" href="/author/Amit%20K.%20Roy-Chowdhury">Amit K. Roy-Chowdhury</a></span></div><div class="Search_paper-detail-page-images-container__FPeuN"></div><p class="Search_paper-content__1CSu5 text-with-links"><span class="descriptor" style="display:none">Abstract:</span>Detecting DeepFakes has become a crucial research area as the widespread use of AI image generators enables the effortless creation of face-manipulated and fully synthetic content, yet existing methods are often limited to binary classification (real vs. fake) and lack interpretability. To address these challenges, we propose TruthLens, a novel and highly generalizable framework for DeepFake detection that not only determines whether an image is real or fake but also provides detailed textual reasoning for its predictions. Unlike traditional methods, TruthLens effectively handles both face-manipulated DeepFakes and fully AI-generated content while addressing fine-grained queries such as "Does the eyes/nose/mouth look real or fake?" The architecture of TruthLens combines the global contextual understanding of multimodal large language models like PaliGemma2 with the localized feature extraction capabilities of vision-only models like DINOv2. This hybrid design leverages the complementary strengths of both models, enabling robust detection of subtle manipulations while maintaining interpretability. Extensive experiments on diverse datasets demonstrate that TruthLens outperforms state-of-the-art methods in detection accuracy (by 2-14%) and explainability, in both in-domain and cross-data settings, generalizing effectively across traditional and emerging manipulation techniques.<br/></p><div class="text-with-links"><span></span><span></span></div><div class="Search_search-result-provider__uWcak">Via<img alt="arxiv icon" loading="lazy" width="56" height="25" decoding="async" data-nimg="1" class="Search_arxiv-icon__SXHe4" style="color:transparent" srcSet="/_next/image?url=%2F_next%2Fstatic%2Fmedia%2Farxiv.41e50dc5.png&w=64&q=75 1x, /_next/image?url=%2F_next%2Fstatic%2Fmedia%2Farxiv.41e50dc5.png&w=128&q=75 2x" src="/_next/image?url=%2F_next%2Fstatic%2Fmedia%2Farxiv.41e50dc5.png&w=128&q=75"/></div><div class="Search_paper-link__nVhf_"><svg role="img" height="20" width="24" viewBox="0 0 24 24" xmlns="http://www.w3.org/2000/svg" style="margin-right:5px"><title>Github Icon</title><path d="M12 .297c-6.63 0-12 5.373-12 12 0 5.303 3.438 9.8 8.205 11.385.6.113.82-.258.82-.577 0-.285-.01-1.04-.015-2.04-3.338.724-4.042-1.61-4.042-1.61C4.422 18.07 3.633 17.7 3.633 17.7c-1.087-.744.084-.729.084-.729 1.205.084 1.838 1.236 1.838 1.236 1.07 1.835 2.809 1.305 3.495.998.108-.776.417-1.305.76-1.605-2.665-.3-5.466-1.332-5.466-5.93 0-1.31.465-2.38 1.235-3.22-.135-.303-.54-1.523.105-3.176 0 0 1.005-.322 3.3 1.23.96-.267 1.98-.399 3-.405 1.02.006 2.04.138 3 .405 2.28-1.552 3.285-1.23 3.285-1.23.645 1.653.24 2.873.12 3.176.765.84 1.23 1.91 1.23 3.22 0 4.61-2.805 5.625-5.475 5.92.42.36.81 1.096.81 2.22 0 1.606-.015 2.896-.015 3.286 0 .315.21.69.825.57C20.565 22.092 24 17.592 24 12.297c0-6.627-5.373-12-12-12"></path></svg><svg xmlns="http://www.w3.org/2000/svg" fill="none" viewBox="0 0 24 24" stroke-width="1.5" stroke="currentColor" aria-hidden="true" data-slot="icon" width="22" style="margin-right:10px;margin-top:2px"><path stroke-linecap="round" stroke-linejoin="round" d="M12 6.042A8.967 8.967 0 0 0 6 3.75c-1.052 0-2.062.18-3 .512v14.25A8.987 8.987 0 0 1 6 18c2.305 0 4.408.867 6 2.292m0-14.25a8.966 8.966 0 0 1 6-2.292c1.052 0 2.062.18 3 .512v14.25A8.987 8.987 0 0 0 18 18a8.967 8.967 0 0 0-6 2.292m0-14.25v14.25"></path></svg><a data-testid="paper-result-access-link" href="/paper/truthlens-explainable-deepfake-detection-for">Access Paper or Ask Questions</a></div></section><div class="Search_seperator-line__4FidS"></div></div></section><section data-hydration-on-demand="true"></section></div><section data-hydration-on-demand="true"></section></div></div><script id="__NEXT_DATA__" type="application/json">{"props":{"pageProps":{"results":[{"title":"Autonomous AI imitators increase diversity in homogeneous information ecosystems","content":"Recent breakthroughs in large language models (LLMs) have facilitated autonomous AI agents capable of imitating human-generated content. This technological advancement raises fundamental questions about AI's impact on the diversity and democratic value of information ecosystems. We introduce a large-scale simulation framework to examine AI-based imitation within news, a context crucial for public discourse. By systematically testing two distinct imitation strategies across a range of information environments varying in initial diversity, we demonstrate that AI-generated articles do not uniformly homogenize content. Instead, AI's influence is strongly context-dependent: AI-generated content can introduce valuable diversity in originally homogeneous news environments but diminish diversity in initially heterogeneous contexts. These results illustrate that the initial diversity of an information environment critically shapes AI's impact, challenging assumptions that AI-driven imitation uniformly threatens diversity. Instead, when information is initially homogeneous, AI-driven imitation can expand perspectives, styles, and topics. This is especially important in news contexts, where information diversity fosters richer public debate by exposing citizens to alternative viewpoints, challenging biases, and preventing narrative monopolies, which is essential for a resilient democracy.","authors":["Emil Bakkensen Johansen","Oliver Baumann"],"pdf_url":"http://arxiv.org/abs/2503.16021","paper_id":"2503.16021","link":"/paper/autonomous-ai-imitators-increase-diversity-in","publication_date":"Mar 21, 2025","raw_publication_date":"2025-03-21","submission_date":"Mar 21, 2025","images":[],"arxiv_comment":"35 pages, 10 figures, 4 tables; v2: corrected typographical errors,\n streamlined language, updated abstract, added supplementary information","journal_ref":null,"code_available":false,"slug":"autonomous-ai-imitators-increase-diversity-in"},{"title":"\"The Diagram is like Guardrails\": Structuring GenAI-assisted Hypotheses Exploration with an Interactive Shared Representation","content":"Data analysis encompasses a spectrum of tasks, from high-level conceptual reasoning to lower-level execution. While AI-powered tools increasingly support execution tasks, there remains a need for intelligent assistance in conceptual tasks. This paper investigates the design of an ordered node-link tree interface augmented with AI-generated information hints and visualizations, as a potential shared representation for hypothesis exploration. Through a design probe (n=22), participants generated diagrams averaging 21.82 hypotheses. Our findings showed that the node-link diagram acts as \"guardrails\" for hypothesis exploration, facilitating structured workflows, providing comprehensive overviews, and enabling efficient backtracking. The AI-generated information hints, particularly visualizations, aided users in transforming abstract ideas into data-backed concepts while reducing cognitive load. We further discuss how node-link diagrams can support both parallel exploration and iterative refinement in hypothesis formulation, potentially enhancing the breadth and depth of human-AI collaborative data analysis.","authors":["Zijian Ding","Michelle Brachman","Joel Chan","Werner Geyer"],"pdf_url":"http://arxiv.org/abs/2503.16791","paper_id":"2503.16791","link":"/paper/the-diagram-is-like-guardrails-structuring","publication_date":"Mar 21, 2025","raw_publication_date":"2025-03-21","submission_date":"Mar 21, 2025","images":[],"arxiv_comment":null,"journal_ref":null,"code_available":false,"slug":"the-diagram-is-like-guardrails-structuring"},{"title":"Automating Adjudication of Cardiovascular Events Using Large Language Models","content":"Cardiovascular events, such as heart attacks and strokes, remain a leading cause of mortality globally, necessitating meticulous monitoring and adjudication in clinical trials. This process, traditionally performed manually by clinical experts, is time-consuming, resource-intensive, and prone to inter-reviewer variability, potentially introducing bias and hindering trial progress. This study addresses these critical limitations by presenting a novel framework for automating the adjudication of cardiovascular events in clinical trials using Large Language Models (LLMs). We developed a two-stage approach: first, employing an LLM-based pipeline for event information extraction from unstructured clinical data and second, using an LLM-based adjudication process guided by a Tree of Thoughts approach and clinical endpoint committee (CEC) guidelines. Using cardiovascular event-specific clinical trial data, the framework achieved an F1-score of 0.82 for event extraction and an accuracy of 0.68 for adjudication. Furthermore, we introduce the CLEART score, a novel, automated metric specifically designed for evaluating the quality of AI-generated clinical reasoning in adjudicating cardiovascular events. This approach demonstrates significant potential for substantially reducing adjudication time and costs while maintaining high-quality, consistent, and auditable outcomes in clinical trials. The reduced variability and enhanced standardization also allow for faster identification and mitigation of risks associated with cardiovascular therapies.","authors":["Sonish Sivarajkumar","Kimia Ameri","Chuqin Li","Yanshan Wang","Min Jiang"],"pdf_url":"http://arxiv.org/abs/2503.17222","paper_id":"2503.17222","link":"/paper/automating-adjudication-of-cardiovascular","publication_date":"Mar 21, 2025","raw_publication_date":"2025-03-21","submission_date":"Mar 21, 2025","images":[],"arxiv_comment":null,"journal_ref":null,"code_available":false,"slug":"automating-adjudication-of-cardiovascular"},{"title":"Can AI expose tax loopholes? Towards a new generation of legal policy assistants","content":"The legislative process is the backbone of a state built on solid institutions. Yet, due to the complexity of laws -- particularly tax law -- policies may lead to inequality and social tensions. In this study, we introduce a novel prototype system designed to address the issues of tax loopholes and tax avoidance. Our hybrid solution integrates a natural language interface with a domain-specific language tailored for planning. We demonstrate on a case study how tax loopholes and avoidance schemes can be exposed. We conclude that our prototype can help enhance social welfare by systematically identifying and addressing tax gaps stemming from loopholes.","authors":["Peter Fratri膷","Nils Holzenberger","David Restrepo Amariles"],"pdf_url":"http://arxiv.org/abs/2503.17339","paper_id":"2503.17339","link":"/paper/can-ai-expose-tax-loopholes-towards-a-new","publication_date":"Mar 21, 2025","raw_publication_date":"2025-03-21","submission_date":"Mar 21, 2025","images":[],"arxiv_comment":"13 pages, 6 figures","journal_ref":null,"code_available":false,"slug":"can-ai-expose-tax-loopholes-towards-a-new"},{"title":"Position: Interactive Generative Video as Next-Generation Game Engine","content":"Modern game development faces significant challenges in creativity and cost due to predetermined content in traditional game engines. Recent breakthroughs in video generation models, capable of synthesizing realistic and interactive virtual environments, present an opportunity to revolutionize game creation. In this position paper, we propose Interactive Generative Video (IGV) as the foundation for Generative Game Engines (GGE), enabling unlimited novel content generation in next-generation gaming. GGE leverages IGV's unique strengths in unlimited high-quality content synthesis, physics-aware world modeling, user-controlled interactivity, long-term memory capabilities, and causal reasoning. We present a comprehensive framework detailing GGE's core modules and a hierarchical maturity roadmap (L0-L4) to guide its evolution. Our work charts a new course for game development in the AI era, envisioning a future where AI-powered generative systems fundamentally reshape how games are created and experienced.","authors":["Jiwen Yu","Yiran Qin","Haoxuan Che","Quande Liu","Xintao Wang","Pengfei Wan","Di Zhang","Xihui Liu"],"pdf_url":"http://arxiv.org/abs/2503.17359","paper_id":"2503.17359","link":"/paper/position-interactive-generative-video-as-next","publication_date":"Mar 21, 2025","raw_publication_date":"2025-03-21","submission_date":"Mar 21, 2025","images":[],"arxiv_comment":null,"journal_ref":null,"code_available":false,"slug":"position-interactive-generative-video-as-next"},{"title":"GreenIQ: A Deep Search Platform for Comprehensive Carbon Market Analysis and Automated Report Generation","content":"This study introduces GreenIQ, an AI-powered deep search platform designed to revolutionise carbon market intelligence through autonomous analysis and automated report generation. Carbon markets operate across diverse regulatory landscapes, generating vast amounts of heterogeneous data from policy documents, industry reports, academic literature, and real-time trading platforms. Traditional research approaches remain labour-intensive, slow, and difficult to scale. GreenIQ addresses these limitations through a multi-agent architecture powered by Large Language Models (LLMs), integrating five specialised AI agents: a Main Researcher Agent for intelligent information retrieval, a Report Writing Agent for structured synthesis, a Final Reviewer Agent for accuracy verification, a Data Visualisation Agent for enhanced interpretability, and a Translator Agent for multilingual adaptation. The system achieves seamless integration of structured and unstructured information with AI-driven citation verification, ensuring high transparency and reliability. GreenIQ delivers a 99.2\\% reduction in processing time and a 99.7\\% cost reduction compared to traditional research methodologies. A novel AI persona-based evaluation framework involving 16 domain-specific AI personas highlights its superior cross-jurisdictional analytical capabilities and regulatory insight generation. GreenIQ sets new standards in AI-driven research synthesis, policy analysis, and sustainability finance by streamlining carbon market research. It offers an efficient and scalable framework for environmental and financial intelligence, enabling more accurate, timely, and cost-effective decision-making in complex regulatory landscapes","authors":["Oluwole Fagbohun","Sai Yashwanth","Akinyemi Sadeeq Akintola","Ifeoluwa Wurola","Lanre Shittu","Aniema Inyang","Oluwatimilehin Odubola","Udodirim Offia","Said Olanrewaju","Ogidan Toluwaleke","Ilemona Abutu","Taiwo Akinbolaji"],"pdf_url":"http://arxiv.org/abs/2503.16041","paper_id":"2503.16041","link":"/paper/greeniq-a-deep-search-platform-for","publication_date":"Mar 21, 2025","raw_publication_date":"2025-03-21","submission_date":"Mar 21, 2025","images":[],"arxiv_comment":"12 Pages, 1 figure","journal_ref":null,"code_available":false,"slug":"greeniq-a-deep-search-platform-for"},{"title":"Advancing Problem-Based Learning in Biomedical Engineering in the Era of Generative AI","content":"Problem-Based Learning (PBL) has significantly impacted biomedical engineering (BME) education since its introduction in the early 2000s, effectively enhancing critical thinking and real-world knowledge application among students. With biomedical engineering rapidly converging with artificial intelligence (AI), integrating effective AI education into established curricula has become challenging yet increasingly necessary. Recent advancements, including AI's recognition by the 2024 Nobel Prize, have highlighted the importance of training students comprehensively in biomedical AI. However, effective biomedical AI education faces substantial obstacles, such as diverse student backgrounds, limited personalized mentoring, constrained computational resources, and difficulties in safely scaling hands-on practical experiments due to privacy and ethical concerns associated with biomedical data. To overcome these issues, we conducted a three-year (2021-2023) case study implementing an advanced PBL framework tailored specifically for biomedical AI education, involving 92 undergraduate and 156 graduate students from the joint Biomedical Engineering program of Georgia Institute of Technology and Emory University. Our approach emphasizes collaborative, interdisciplinary problem-solving through authentic biomedical AI challenges. The implementation led to measurable improvements in learning outcomes, evidenced by high research productivity (16 student-authored publications), consistently positive peer evaluations, and successful development of innovative computational methods addressing real biomedical challenges. Additionally, we examined the role of generative AI both as a teaching subject and an educational support tool within the PBL framework. Our study presents a practical and scalable roadmap for biomedical engineering departments aiming to integrate robust AI education into their curricula.","authors":["Micky C. Nnamdi","J. Ben Tamo","Wenqi Shi","May D. Wang"],"pdf_url":"http://arxiv.org/abs/2503.16558","paper_id":"2503.16558","link":"/paper/advancing-problem-based-learning-in","publication_date":"Mar 20, 2025","raw_publication_date":"2025-03-20","submission_date":"Mar 20, 2025","images":[],"arxiv_comment":null,"journal_ref":null,"code_available":false,"slug":"advancing-problem-based-learning-in"},{"title":"World Knowledge from AI Image Generation for Robot Control","content":"When interacting with the world robots face a number of difficult questions, having to make decisions when given under-specified tasks where they need to make choices, often without clearly defined right and wrong answers. Humans, on the other hand, can often rely on their knowledge and experience to fill in the gaps. For example, the simple task of organizing newly bought produce into the fridge involves deciding where to put each thing individually, how to arrange them together meaningfully, e.g. putting related things together, all while there is no clear right and wrong way to accomplish this task. We could encode all this information on how to do such things explicitly into the robots' knowledge base, but this can quickly become overwhelming, considering the number of potential tasks and circumstances the robot could encounter. However, images of the real world often implicitly encode answers to such questions and can show which configurations of objects are meaningful or are usually used by humans. An image of a full fridge can give a lot of information about how things are usually arranged in relation to each other and the full fridge at large. Modern generative systems are capable of generating plausible images of the real world and can be conditioned on the environment in which the robot operates. Here we investigate the idea of using the implicit knowledge about the world of modern generative AI systems given by their ability to generate convincing images of the real world to solve under-specified tasks.","authors":["Jonas Krumme","Christoph Zetzsche"],"pdf_url":"http://arxiv.org/abs/2503.16579","paper_id":"2503.16579","link":"/paper/world-knowledge-from-ai-image-generation-for","publication_date":"Mar 20, 2025","raw_publication_date":"2025-03-20","submission_date":"Mar 20, 2025","images":[],"arxiv_comment":"9 pages, 10 figures","journal_ref":null,"code_available":false,"slug":"world-knowledge-from-ai-image-generation-for"},{"title":"Conversational User-AI Intervention: A Study on Prompt Rewriting for Improved LLM Response Generation","content":"Human-LLM conversations are increasingly becoming more pervasive in peoples' professional and personal lives, yet many users still struggle to elicit helpful responses from LLM Chatbots. One of the reasons for this issue is users' lack of understanding in crafting effective prompts that accurately convey their information needs. Meanwhile, the existence of real-world conversational datasets on the one hand, and the text understanding faculties of LLMs on the other, present a unique opportunity to study this problem, and its potential solutions at scale. Thus, in this paper we present the first LLM-centric study of real human-AI chatbot conversations, focused on investigating aspects in which user queries fall short of expressing information needs, and the potential of using LLMs to rewrite suboptimal user prompts. Our findings demonstrate that rephrasing ineffective prompts can elicit better responses from a conversational system, while preserving the user's original intent. Notably, the performance of rewrites improves in longer conversations, where contextual inferences about user needs can be made more accurately. Additionally, we observe that LLMs often need to -- and inherently do -- make \\emph{plausible} assumptions about a user's intentions and goals when interpreting prompts. Our findings largely hold true across conversational domains, user intents, and LLMs of varying sizes and families, indicating the promise of using prompt rewriting as a solution for better human-AI interactions.","authors":["Rupak Sarkar","Bahareh Sarrafzadeh","Nirupama Chandrasekaran","Nagu Rangan","Philip Resnik","Longqi Yang","Sujay Kumar Jauhar"],"pdf_url":"http://arxiv.org/abs/2503.16789","paper_id":"2503.16789","link":"/paper/conversational-user-ai-intervention-a-study","publication_date":"Mar 21, 2025","raw_publication_date":"2025-03-21","submission_date":"Mar 21, 2025","images":[],"arxiv_comment":"8 pages, ACL style","journal_ref":null,"code_available":false,"slug":"conversational-user-ai-intervention-a-study"},{"title":"TruthLens: Explainable DeepFake Detection for Face Manipulated and Fully Synthetic Data","content":"Detecting DeepFakes has become a crucial research area as the widespread use of AI image generators enables the effortless creation of face-manipulated and fully synthetic content, yet existing methods are often limited to binary classification (real vs. fake) and lack interpretability. To address these challenges, we propose TruthLens, a novel and highly generalizable framework for DeepFake detection that not only determines whether an image is real or fake but also provides detailed textual reasoning for its predictions. Unlike traditional methods, TruthLens effectively handles both face-manipulated DeepFakes and fully AI-generated content while addressing fine-grained queries such as \"Does the eyes/nose/mouth look real or fake?\" The architecture of TruthLens combines the global contextual understanding of multimodal large language models like PaliGemma2 with the localized feature extraction capabilities of vision-only models like DINOv2. This hybrid design leverages the complementary strengths of both models, enabling robust detection of subtle manipulations while maintaining interpretability. Extensive experiments on diverse datasets demonstrate that TruthLens outperforms state-of-the-art methods in detection accuracy (by 2-14%) and explainability, in both in-domain and cross-data settings, generalizing effectively across traditional and emerging manipulation techniques.","authors":["Rohit Kundu","Athula Balachandran","Amit K. Roy-Chowdhury"],"pdf_url":"http://arxiv.org/abs/2503.15867","paper_id":"2503.15867","link":"/paper/truthlens-explainable-deepfake-detection-for","publication_date":"Mar 20, 2025","raw_publication_date":"2025-03-20","submission_date":"Mar 20, 2025","images":[],"arxiv_comment":null,"journal_ref":null,"code_available":false,"slug":"truthlens-explainable-deepfake-detection-for"}],"total":200,"topicBlurb":"Generative AI or generative artificial intelligence refers to a type of AI that can create various types of content including text, audio, music, images, videos, and code. This is powered by large models called foundation models that are trained on massive datasets to perform out-of-the-box tasks including classification, summarization, video and audio comprehension, prediction, Q\u0026A, and more.","similarResults":false,"query":"Generative AI","userHasHiddenBanner":false,"isMobile":false,"currentBrowser":"","canonicalUrl":"https://www.catalyzex.com/s/Generative%20AI"},"__N_SSP":true},"page":"/search","query":{"query":"Generative%20AI"},"buildId":"rcP1HS6ompi8ywYpLW-WW","isFallback":false,"isExperimentalCompile":false,"gssp":true,"scriptLoader":[]}</script></body></html>