CINXE.COM

Recommendation

<!DOCTYPE html><html lang="en"><head><meta charSet="utf-8"/><style> html, body, button { font-family: '__Lato_814572', '__Lato_Fallback_814572', Helvetica Neue, Helvetica, Arial, sans-serif !important; } </style><link rel="canonical" href="https://www.catalyzex.com/s/Recommendation"/><style> html, body, button { font-family: '__Lato_814572', '__Lato_Fallback_814572', Helvetica Neue, Helvetica, Arial, sans-serif !important; } </style><meta name="viewport" content="width=device-width, initial-scale=1.0, shrink-to-fit=no"/><title>Recommendation</title><meta name="description" content="Recommendation is the task of providing personalized suggestions to users based on their preferences and behavior. Browse open-source code and papers on Recommendation to catalyze your projects, and easily connect with engineers and experts when you need help."/><meta property="og:title" content="Recommendation"/><meta property="og:description" content="Recommendation is the task of providing personalized suggestions to users based on their preferences and behavior. Browse open-source code and papers on Recommendation to catalyze your projects, and easily connect with engineers and experts when you need help."/><meta name="twitter:title" content="Recommendation"/><meta name="twitter:description" content="Recommendation is the task of providing personalized suggestions to users based on their preferences and behavior. Browse open-source code and papers on Recommendation to catalyze your projects, and easily connect with engineers and experts when you need help."/><meta name="twitter:image" content="https://www.catalyzex.com/favicon.ico"/><meta name="og:image" content="https://www.catalyzex.com/favicon.ico"/><script type="application/ld+json">{"@context":"https://schema.org","@graph":[{"@type":"SearchResultsPage","name":"Recommendation","description":"Recommendation is the task of providing personalized suggestions to users based on their preferences and behavior. Browse open-source code and papers on Recommendation to catalyze your projects, and easily connect with engineers and experts when you need help.","url":"https://www.catalyzex.com/s/Recommendation","mainEntity":[{"@type":"ItemList","name":"Recommendation papers","numberOfItems":200,"itemListElement":[{"@type":"ListItem","position":1,"item":{"@type":"ScholarlyArticle","url":"https://www.catalyzex.com/paper/rec-r1-bridging-generative-large-language","name":"Rec-R1: Bridging Generative Large Language Models and User-Centric Recommendation Systems via Reinforcement Learning","datePublished":"2025-03-31","author":[{"@type":"Person","name":"Jiacheng Lin","url":"https://www.catalyzex.com/author/Jiacheng Lin"},{"@type":"Person","name":"Tian Wang","url":"https://www.catalyzex.com/author/Tian Wang"},{"@type":"Person","name":"Kun Qian","url":"https://www.catalyzex.com/author/Kun Qian"}]}},{"@type":"ListItem","position":2,"item":{"@type":"ScholarlyArticle","url":"https://www.catalyzex.com/paper/text2tracks-prompt-based-music-recommendation","name":"Text2Tracks: Prompt-based Music Recommendation via Generative Retrieval","datePublished":"2025-03-31","author":[{"@type":"Person","name":"Enrico Palumbo","url":"https://www.catalyzex.com/author/Enrico Palumbo"},{"@type":"Person","name":"Gustavo Penha","url":"https://www.catalyzex.com/author/Gustavo Penha"},{"@type":"Person","name":"Andreas Damianou","url":"https://www.catalyzex.com/author/Andreas Damianou"},{"@type":"Person","name":"Jos茅 Luis Redondo Garc铆a","url":"https://www.catalyzex.com/author/Jos茅 Luis Redondo Garc铆a"},{"@type":"Person","name":"Timothy Christopher Heath","url":"https://www.catalyzex.com/author/Timothy Christopher Heath"},{"@type":"Person","name":"Alice Wang","url":"https://www.catalyzex.com/author/Alice Wang"},{"@type":"Person","name":"Hugues Bouchard","url":"https://www.catalyzex.com/author/Hugues Bouchard"},{"@type":"Person","name":"Mounia Lalmas","url":"https://www.catalyzex.com/author/Mounia Lalmas"}]}},{"@type":"ListItem","position":3,"item":{"@type":"ScholarlyArticle","url":"https://www.catalyzex.com/paper/predicting-targeted-therapy-resistance-in-non","name":"Predicting Targeted Therapy Resistance in Non-Small Cell Lung Cancer Using Multimodal Machine Learning","datePublished":"2025-03-31","author":[{"@type":"Person","name":"Peiying Hua","url":"https://www.catalyzex.com/author/Peiying Hua"},{"@type":"Person","name":"Andrea Olofson","url":"https://www.catalyzex.com/author/Andrea Olofson"},{"@type":"Person","name":"Faraz Farhadi","url":"https://www.catalyzex.com/author/Faraz Farhadi"},{"@type":"Person","name":"Liesbeth Hondelink","url":"https://www.catalyzex.com/author/Liesbeth Hondelink"},{"@type":"Person","name":"Gregory Tsongalis","url":"https://www.catalyzex.com/author/Gregory Tsongalis"},{"@type":"Person","name":"Konstantin Dragnev","url":"https://www.catalyzex.com/author/Konstantin Dragnev"},{"@type":"Person","name":"Dagmar Hoegemann Savellano","url":"https://www.catalyzex.com/author/Dagmar Hoegemann Savellano"},{"@type":"Person","name":"Arief Suriawinata","url":"https://www.catalyzex.com/author/Arief Suriawinata"},{"@type":"Person","name":"Laura Tafe","url":"https://www.catalyzex.com/author/Laura Tafe"},{"@type":"Person","name":"Saeed Hassanpour","url":"https://www.catalyzex.com/author/Saeed Hassanpour"}]}},{"@type":"ListItem","position":4,"item":{"@type":"ScholarlyArticle","url":"https://www.catalyzex.com/paper/crossing-boundaries-leveraging-semantic","name":"Crossing Boundaries: Leveraging Semantic Divergences to Explore Cultural Novelty in Cooking Recipes","datePublished":"2025-03-31","author":[{"@type":"Person","name":"Florian Carichon","url":"https://www.catalyzex.com/author/Florian Carichon"},{"@type":"Person","name":"Romain Rampa","url":"https://www.catalyzex.com/author/Romain Rampa"},{"@type":"Person","name":"Golnoosh Farnadi","url":"https://www.catalyzex.com/author/Golnoosh Farnadi"}]}},{"@type":"ListItem","position":5,"item":{"@type":"ScholarlyArticle","url":"https://www.catalyzex.com/paper/amb-fhe-adaptive-multi-biometric-fusion-with","name":"AMB-FHE: Adaptive Multi-biometric Fusion with Fully Homomorphic Encryption","datePublished":"2025-03-31","author":[{"@type":"Person","name":"Florian Bayer","url":"https://www.catalyzex.com/author/Florian Bayer"},{"@type":"Person","name":"Christian Rathgeb","url":"https://www.catalyzex.com/author/Christian Rathgeb"}]}},{"@type":"ListItem","position":6,"item":{"@type":"ScholarlyArticle","url":"https://www.catalyzex.com/paper/get-the-agents-drunk-memory-perturbations-in","name":"Get the Agents Drunk: Memory Perturbations in Autonomous Agent-based Recommender Systems","datePublished":"2025-03-31","author":[{"@type":"Person","name":"Shiyi Yang","url":"https://www.catalyzex.com/author/Shiyi Yang"},{"@type":"Person","name":"Zhibo Hu","url":"https://www.catalyzex.com/author/Zhibo Hu"},{"@type":"Person","name":"Chen Wang","url":"https://www.catalyzex.com/author/Chen Wang"},{"@type":"Person","name":"Tong Yu","url":"https://www.catalyzex.com/author/Tong Yu"},{"@type":"Person","name":"Xiwei Xu","url":"https://www.catalyzex.com/author/Xiwei Xu"},{"@type":"Person","name":"Liming Zhu","url":"https://www.catalyzex.com/author/Liming Zhu"},{"@type":"Person","name":"Lina Yao","url":"https://www.catalyzex.com/author/Lina Yao"}]}},{"@type":"ListItem","position":7,"item":{"@type":"ScholarlyArticle","url":"https://www.catalyzex.com/paper/finding-interest-needle-in-popularity","name":"Finding Interest Needle in Popularity Haystack: Improving Retrieval by Modeling Item Exposure","datePublished":"2025-03-31","author":[{"@type":"Person","name":"Amit Jaspal","url":"https://www.catalyzex.com/author/Amit Jaspal"},{"@type":"Person","name":"Rahul Agarwal","url":"https://www.catalyzex.com/author/Rahul Agarwal"}]}},{"@type":"ListItem","position":8,"item":{"@type":"ScholarlyArticle","url":"https://www.catalyzex.com/paper/a-systematic-decade-review-of-trip-route","name":"A Systematic Decade Review of Trip Route Planning with Travel Time Estimation based on User Preferences and Behavior","datePublished":"2025-03-30","author":[{"@type":"Person","name":"Nikil Jayasuriya","url":"https://www.catalyzex.com/author/Nikil Jayasuriya"},{"@type":"Person","name":"Deshan Sumanathilaka","url":"https://www.catalyzex.com/author/Deshan Sumanathilaka"}]}},{"@type":"ListItem","position":9,"item":{"@type":"ScholarlyArticle","url":"https://www.catalyzex.com/paper/filtering-with-time-frequency-analysis-an","name":"Filtering with Time-frequency Analysis: An Adaptive and Lightweight Model for Sequential Recommender Systems Based on Discrete Wavelet Transform","datePublished":"2025-03-30","author":[{"@type":"Person","name":"Sheng Lu","url":"https://www.catalyzex.com/author/Sheng Lu"},{"@type":"Person","name":"Mingxi Ge","url":"https://www.catalyzex.com/author/Mingxi Ge"},{"@type":"Person","name":"Jiuyi Zhang","url":"https://www.catalyzex.com/author/Jiuyi Zhang"},{"@type":"Person","name":"Wanli Zhu","url":"https://www.catalyzex.com/author/Wanli Zhu"},{"@type":"Person","name":"Guanjin Li","url":"https://www.catalyzex.com/author/Guanjin Li"},{"@type":"Person","name":"Fangming Gu","url":"https://www.catalyzex.com/author/Fangming Gu"}]}},{"@type":"ListItem","position":10,"item":{"@type":"ScholarlyArticle","url":"https://www.catalyzex.com/paper/from-content-creation-to-citation-inflation-a","name":"From Content Creation to Citation Inflation: A GenAI Case Study","datePublished":"2025-03-30","author":[{"@type":"Person","name":"Haitham S. Al-Sinani","url":"https://www.catalyzex.com/author/Haitham S. Al-Sinani"},{"@type":"Person","name":"Chris J. Mitchell","url":"https://www.catalyzex.com/author/Chris J. Mitchell"}]}}]}]}]}</script><link rel="preload" as="image" imageSrcSet="/_next/image?url=%2F_next%2Fstatic%2Fmedia%2Ffilter.cf288982.png&amp;w=640&amp;q=75 1x, /_next/image?url=%2F_next%2Fstatic%2Fmedia%2Ffilter.cf288982.png&amp;w=1080&amp;q=75 2x" fetchpriority="high"/><meta name="next-head-count" content="15"/><link rel="preconnect" href="https://fonts.googleapis.com"/><link rel="preconnect" href="https://fonts.gstatic.com" crossorigin=""/><meta charSet="utf-8"/><meta http-equiv="X-UA-Compatible" content="IE=edge"/><meta name="p:domain_verify" content="7a8c54ff8920a71e909037ac35612f4e"/><meta name="author" content="CatalyzeX"/><meta property="og:type" content="website"/><meta property="og:site_name" content="CatalyzeX"/><meta property="og:url" content="https://www.catalyzex.com/"/><meta property="fb:app_id" content="658945670928778"/><meta property="fb:admins" content="515006233"/><meta name="twitter:card" content="summary_large_image"/><meta name="twitter:domain" content="www.catalyzex.com"/><meta name="twitter:site" content="@catalyzex"/><meta name="twitter:creator" content="@catalyzex"/><script data-partytown-config="true"> partytown = { lib: "/_next/static/~partytown/", forward: [ "gtag", "mixpanel.track", "mixpanel.track_pageview", "mixpanel.identify", "mixpanel.people.set", "mixpanel.reset", "mixpanel.get_distinct_id", "mixpanel.set_config", "manuallySyncMixpanelId" ] }; </script><link rel="preconnect" href="https://fonts.gstatic.com" crossorigin /><link rel="preload" href="/_next/static/media/155cae559bbd1a77-s.p.woff2" as="font" type="font/woff2" crossorigin="anonymous" data-next-font="size-adjust"/><link rel="preload" href="/_next/static/media/4de1fea1a954a5b6-s.p.woff2" as="font" type="font/woff2" crossorigin="anonymous" data-next-font="size-adjust"/><link rel="preload" href="/_next/static/media/6d664cce900333ee-s.p.woff2" as="font" type="font/woff2" crossorigin="anonymous" data-next-font="size-adjust"/><link rel="preload" href="/_next/static/css/21db9cac47ff2c1f.css" as="style"/><link rel="stylesheet" href="/_next/static/css/21db9cac47ff2c1f.css" data-n-g=""/><link rel="preload" href="/_next/static/css/b8053a51356cf568.css" as="style"/><link rel="stylesheet" href="/_next/static/css/b8053a51356cf568.css" data-n-p=""/><noscript data-n-css=""></noscript><script defer="" nomodule="" src="/_next/static/chunks/polyfills-78c92fac7aa8fdd8.js"></script><script data-partytown="">!(function(w,p,f,c){if(!window.crossOriginIsolated && !navigator.serviceWorker) return;c=w[p]=w[p]||{};c[f]=(c[f]||[])})(window,'partytown','forward');/* Partytown 0.10.2 - MIT builder.io */ const t={preserveBehavior:!1},e=e=>{if("string"==typeof e)return[e,t];const[n,r=t]=e;return[n,{...t,...r}]},n=Object.freeze((t=>{const e=new Set;let n=[];do{Object.getOwnPropertyNames(n).forEach((t=>{"function"==typeof n[t]&&e.add(t)}))}while((n=Object.getPrototypeOf(n))!==Object.prototype);return Array.from(e)})());!function(t,r,o,i,a,s,c,d,l,p,u=t,f){function h(){f||(f=1,"/"==(c=(s.lib||"/~partytown/")+(s.debug?"debug/":""))[0]&&(l=r.querySelectorAll('script[type="text/partytown"]'),i!=t?i.dispatchEvent(new CustomEvent("pt1",{detail:t})):(d=setTimeout(v,1e4),r.addEventListener("pt0",w),a?y(1):o.serviceWorker?o.serviceWorker.register(c+(s.swPath||"partytown-sw.js"),{scope:c}).then((function(t){t.active?y():t.installing&&t.installing.addEventListener("statechange",(function(t){"activated"==t.target.state&&y()}))}),console.error):v())))}function y(e){p=r.createElement(e?"script":"iframe"),t._pttab=Date.now(),e||(p.style.display="block",p.style.width="0",p.style.height="0",p.style.border="0",p.style.visibility="hidden",p.setAttribute("aria-hidden",!0)),p.src=c+"partytown-"+(e?"atomics.js?v=0.10.2":"sandbox-sw.html?"+t._pttab),r.querySelector(s.sandboxParent||"body").appendChild(p)}function v(n,o){for(w(),i==t&&(s.forward||[]).map((function(n){const[r]=e(n);delete t[r.split(".")[0]]})),n=0;n<l.length;n++)(o=r.createElement("script")).innerHTML=l[n].innerHTML,o.nonce=s.nonce,r.head.appendChild(o);p&&p.parentNode.removeChild(p)}function w(){clearTimeout(d)}s=t.partytown||{},i==t&&(s.forward||[]).map((function(r){const[o,{preserveBehavior:i}]=e(r);u=t,o.split(".").map((function(e,r,o){var a;u=u[o[r]]=r+1<o.length?u[o[r]]||(a=o[r+1],n.includes(a)?[]:{}):(()=>{let e=null;if(i){const{methodOrProperty:n,thisObject:r}=((t,e)=>{let n=t;for(let t=0;t<e.length-1;t+=1)n=n[e[t]];return{thisObject:n,methodOrProperty:e.length>0?n[e[e.length-1]]:void 0}})(t,o);"function"==typeof n&&(e=(...t)=>n.apply(r,...t))}return function(){let n;return e&&(n=e(arguments)),(t._ptf=t._ptf||[]).push(o,arguments),n}})()}))})),"complete"==r.readyState?h():(t.addEventListener("DOMContentLoaded",h),t.addEventListener("load",h))}(window,document,navigator,top,window.crossOriginIsolated);</script><script src="https://www.googletagmanager.com/gtag/js?id=G-BD14FTHPNC" type="text/partytown" data-nscript="worker"></script><script src="/_next/static/chunks/webpack-74a7c512fa42fc69.js" defer=""></script><script src="/_next/static/chunks/main-819661c54c38eafc.js" defer=""></script><script src="/_next/static/chunks/pages/_app-7f9dc6693ce04520.js" defer=""></script><script src="/_next/static/chunks/117-cbf0dd2a93fca997.js" defer=""></script><script src="/_next/static/chunks/602-80e933e094e77991.js" defer=""></script><script src="/_next/static/chunks/947-ca6cb45655821eab.js" defer=""></script><script src="/_next/static/chunks/403-8b84e5049c16d49f.js" defer=""></script><script src="/_next/static/chunks/460-cfc8c96502458833.js" defer=""></script><script src="/_next/static/chunks/68-8acf76971c46bf47.js" defer=""></script><script src="/_next/static/chunks/pages/search-695fe919c8d5cb9b.js" defer=""></script><script src="/_next/static/rcP1HS6ompi8ywYpLW-WW/_buildManifest.js" defer=""></script><script src="/_next/static/rcP1HS6ompi8ywYpLW-WW/_ssgManifest.js" defer=""></script><style data-href="https://fonts.googleapis.com/css2?family=Lato:wght@300;400;700&display=swap">@font-face{font-family:'Lato';font-style:normal;font-weight:300;font-display:swap;src:url(https://fonts.gstatic.com/s/lato/v24/S6u9w4BMUTPHh7USeww.woff) format('woff')}@font-face{font-family:'Lato';font-style:normal;font-weight:400;font-display:swap;src:url(https://fonts.gstatic.com/s/lato/v24/S6uyw4BMUTPHvxo.woff) format('woff')}@font-face{font-family:'Lato';font-style:normal;font-weight:700;font-display:swap;src:url(https://fonts.gstatic.com/s/lato/v24/S6u9w4BMUTPHh6UVeww.woff) format('woff')}@font-face{font-family:'Lato';font-style:normal;font-weight:300;font-display:swap;src:url(https://fonts.gstatic.com/s/lato/v24/S6u9w4BMUTPHh7USSwaPGQ3q5d0N7w.woff2) format('woff2');unicode-range:U+0100-02AF,U+0304,U+0308,U+0329,U+1E00-1E9F,U+1EF2-1EFF,U+2020,U+20A0-20AB,U+20AD-20C0,U+2113,U+2C60-2C7F,U+A720-A7FF}@font-face{font-family:'Lato';font-style:normal;font-weight:300;font-display:swap;src:url(https://fonts.gstatic.com/s/lato/v24/S6u9w4BMUTPHh7USSwiPGQ3q5d0.woff2) format('woff2');unicode-range:U+0000-00FF,U+0131,U+0152-0153,U+02BB-02BC,U+02C6,U+02DA,U+02DC,U+0304,U+0308,U+0329,U+2000-206F,U+2074,U+20AC,U+2122,U+2191,U+2193,U+2212,U+2215,U+FEFF,U+FFFD}@font-face{font-family:'Lato';font-style:normal;font-weight:400;font-display:swap;src:url(https://fonts.gstatic.com/s/lato/v24/S6uyw4BMUTPHjxAwXiWtFCfQ7A.woff2) format('woff2');unicode-range:U+0100-02AF,U+0304,U+0308,U+0329,U+1E00-1E9F,U+1EF2-1EFF,U+2020,U+20A0-20AB,U+20AD-20C0,U+2113,U+2C60-2C7F,U+A720-A7FF}@font-face{font-family:'Lato';font-style:normal;font-weight:400;font-display:swap;src:url(https://fonts.gstatic.com/s/lato/v24/S6uyw4BMUTPHjx4wXiWtFCc.woff2) format('woff2');unicode-range:U+0000-00FF,U+0131,U+0152-0153,U+02BB-02BC,U+02C6,U+02DA,U+02DC,U+0304,U+0308,U+0329,U+2000-206F,U+2074,U+20AC,U+2122,U+2191,U+2193,U+2212,U+2215,U+FEFF,U+FFFD}@font-face{font-family:'Lato';font-style:normal;font-weight:700;font-display:swap;src:url(https://fonts.gstatic.com/s/lato/v24/S6u9w4BMUTPHh6UVSwaPGQ3q5d0N7w.woff2) format('woff2');unicode-range:U+0100-02AF,U+0304,U+0308,U+0329,U+1E00-1E9F,U+1EF2-1EFF,U+2020,U+20A0-20AB,U+20AD-20C0,U+2113,U+2C60-2C7F,U+A720-A7FF}@font-face{font-family:'Lato';font-style:normal;font-weight:700;font-display:swap;src:url(https://fonts.gstatic.com/s/lato/v24/S6u9w4BMUTPHh6UVSwiPGQ3q5d0.woff2) format('woff2');unicode-range:U+0000-00FF,U+0131,U+0152-0153,U+02BB-02BC,U+02C6,U+02DA,U+02DC,U+0304,U+0308,U+0329,U+2000-206F,U+2074,U+20AC,U+2122,U+2191,U+2193,U+2212,U+2215,U+FEFF,U+FFFD}</style></head><body><div id="__next"><script id="google-analytics" type="text/partytown"> window.dataLayer = window.dataLayer || []; window.gtag = function gtag(){window.dataLayer.push(arguments);} gtag('js', new Date()); gtag('config', 'G-BD14FTHPNC', { page_path: window.location.pathname, }); </script><script type="text/partytown"> const MIXPANEL_CUSTOM_LIB_URL = 'https://www.catalyzex.com/mp-cdn/libs/mixpanel-2-latest.min.js'; (function(f,b){if(!b.__SV){var e,g,i,h;window.mixpanel=b;b._i=[];b.init=function(e,f,c){function g(a,d){var b=d.split(".");2==b.length&&(a=a[b[0]],d=b[1]);a[d]=function(){a.push([d].concat(Array.prototype.slice.call(arguments,0)))}}var a=b;"undefined"!==typeof c?a=b[c]=[]:c="mixpanel";a.people=a.people||[];a.toString=function(a){var d="mixpanel";"mixpanel"!==c&&(d+="."+c);a||(d+=" (stub)");return d};a.people.toString=function(){return a.toString(1)+".people (stub)"};i="disable time_event track track_pageview track_links track_forms track_with_groups add_group set_group remove_group register register_once alias unregister identify name_tag set_config reset opt_in_tracking opt_out_tracking has_opted_in_tracking has_opted_out_tracking clear_opt_in_out_tracking start_batch_senders people.set people.set_once people.unset people.increment people.append people.union people.track_charge people.clear_charges people.delete_user people.remove".split(" "); for(h=0;h<i.length;h++)g(a,i[h]);var j="set set_once union unset remove delete".split(" ");a.get_group=function(){function b(c){d[c]=function(){call2_args=arguments;call2=[c].concat(Array.prototype.slice.call(call2_args,0));a.push([e,call2])}}for(var d={},e=["get_group"].concat(Array.prototype.slice.call(arguments,0)),c=0;c<j.length;c++)b(j[c]);return d};b._i.push([e,f,c])};b.__SV=1.2;e=f.createElement("script");e.type="text/javascript";e.async=!0;e.src="undefined"!==typeof MIXPANEL_CUSTOM_LIB_URL? MIXPANEL_CUSTOM_LIB_URL:"file:"===f.location.protocol&&"//catalyzex.com/mp-cdn/libs/mixpanel-2-latest.min.js".match(/^\/\//)?"https://www.catalyzex.com/mp-cdn/libs/mixpanel-2-latest.min.js":"//catalyzex.com/mp-cdn/libs/mixpanel-2-latest.min.js";g=f.getElementsByTagName("script")[0];g.parentNode.insertBefore(e,g)}})(document,window.mixpanel||[]); mixpanel.init("851392464b60e8cc1948a193642f793b", { api_host: "https://www.catalyzex.com/mp", }) manuallySyncMixpanelId = function(currentMixpanelId) { const inMemoryProps = mixpanel?.persistence?.props if (inMemoryProps) { inMemoryProps['distinct_id'] = currentMixpanelId inMemoryProps['$device_id'] = currentMixpanelId delete inMemoryProps['$user_id'] } } </script><div class="Layout_layout-container__GqQwY"><div><div data-testid="banner-main-container" id="Banner_banner-main-container__DgEOW" class="cx-banner"><span class="Banner_content__a4ws8 Banner_default-content___HRmT">Get our free extension to see links to code for papers anywhere online!</span><span class="Banner_content__a4ws8 Banner_small-content__iQlll">Free add-on: code for papers everywhere!</span><span class="Banner_content__a4ws8 Banner_extra-small-content__qkq9E">Free add-on: See code for papers anywhere!</span><div class="Banner_banner-button-section__kX1fj"><a class="Banner_banner-social-button__b3sZ7 Banner_browser-button__6CbLf" href="https://chrome.google.com/webstore/detail/%F0%9F%92%BB-catalyzex-link-all-aim/aikkeehnlfpamidigaffhfmgbkdeheil" rel="noreferrer" target="_blank"><p><img src="/static/images/google-chrome.svg" alt="Chrome logo"/>Add to <!-- -->Chrome</p></a><a class="Banner_firefox-button__nwnR6 Banner_banner-social-button__b3sZ7 Banner_browser-button__6CbLf" href="https://addons.mozilla.org/en-US/firefox/addon/code-finder-catalyzex" rel="noreferrer" target="_blank"><p><img src="/static/images/firefox.svg" alt="Firefox logo"/>Add to <!-- -->Firefox</p></a><a class="Banner_banner-social-button__b3sZ7 Banner_browser-button__6CbLf" href="https://microsoftedge.microsoft.com/addons/detail/get-papers-with-code-ever/mflbgfojghoglejmalekheopgadjmlkm" rel="noreferrer" target="_blank"><p><img src="/static/images/microsoft-edge.svg" alt="Edge logo"/>Add to <!-- -->Edge</p></a></div><div id="Banner_banner-close-button__68_52" class="banner-close-button" data-testid="banner-close-icon" role="button" tabindex="0" aria-label="Home"><svg xmlns="http://www.w3.org/2000/svg" fill="none" viewBox="0 0 24 24" stroke-width="1.5" stroke="currentColor" aria-hidden="true" data-slot="icon" height="22" width="22" color="black"><path stroke-linecap="round" stroke-linejoin="round" d="m9.75 9.75 4.5 4.5m0-4.5-4.5 4.5M21 12a9 9 0 1 1-18 0 9 9 0 0 1 18 0Z"></path></svg></div></div></div><section data-hydration-on-demand="true"></section><div data-testid="header-main-container" class="Header_navbar__bVRQt"><nav><div><a class="Header_navbar-brand__9oFe_" href="/"><svg version="1.0" xmlns="http://www.w3.org/2000/svg" width="466.000000pt" height="466.000000pt" viewBox="0 0 466.000000 466.000000" preserveAspectRatio="xMidYMid meet" data-testid="catalyzex-header-icon"><title>CatalyzeX Icon</title><g transform="translate(0.000000,466.000000) scale(0.100000,-0.100000)" fill="#000000" stroke="none"><path d="M405 3686 c-42 -18 -83 -69 -92 -114 -4 -20 -8 -482 -8 -1027 l0 -990 25 -44 c16 -28 39 -52 65 -65 38 -20 57 -21 433 -24 444 -3 487 1 538 52 18 18 37 50 43 71 7 25 11 154 11 343 l0 302 -165 0 -165 0 0 -240 0 -240 -225 0 -225 0 0 855 0 855 225 0 225 0 0 -225 0 -225 166 0 165 0 -3 308 c-3 289 -4 309 -24 342 -11 19 -38 45 -60 57 -39 23 -42 23 -469 22 -335 0 -437 -3 -460 -13z"></path><path d="M1795 3686 c-16 -7 -38 -23 -48 -34 -47 -52 -46 -27 -47 -1262 0 -808 3 -1177 11 -1205 14 -50 63 -102 109 -115 19 -5 142 -10 273 -10 l238 0 -3 148 -3 147 -125 2 c-69 0 -135 1 -147 2 l-23 1 0 1025 0 1025 150 0 150 0 0 145 0 145 -252 0 c-188 -1 -261 -4 -283 -14z"></path><path d="M3690 3555 l0 -145 155 0 155 0 0 -1025 0 -1025 -27 0 c-16 -1 -84 -2 -153 -3 l-125 -2 -3 -148 -3 -148 258 3 c296 3 309 7 351 88 l22 45 -2 1202 c-3 1196 -3 1202 -24 1229 -11 15 -33 37 -48 48 -26 20 -43 21 -292 24 l-264 3 0 -146z"></path><path d="M2520 2883 c0 -5 70 -164 156 -356 l157 -347 -177 -374 c-97 -205 -176 -376 -176 -380 0 -3 77 -5 171 -4 l172 3 90 228 c49 125 93 227 97 227 4 0 47 -103 95 -230 l87 -230 174 0 c96 0 174 2 174 3 0 2 -79 172 -175 377 -96 206 -175 378 -175 382 0 8 303 678 317 701 2 4 -70 7 -161 7 l-164 0 -83 -210 c-45 -115 -85 -210 -89 -210 -4 0 -43 95 -86 210 l-79 210 -162 0 c-90 0 -163 -3 -163 -7z"></path></g></svg></a></div></nav></div><div data-testid="search-page-main-container" class="Search_search-page-main-container__1ayN5"><div class="Searchbar_search-bar-container__xIN4L rounded-border Search_searchbar-component__QG9QV" id="searchbar-component"><form class="Searchbar_search-bar-container__xIN4L" data-testid="search-bar-form"><div><svg xmlns="http://www.w3.org/2000/svg" fill="none" viewBox="0 0 24 24" stroke-width="1.5" stroke="currentColor" aria-hidden="true" data-slot="icon" height="22"><title>Search Icon</title><path stroke-linecap="round" stroke-linejoin="round" d="m21 21-5.197-5.197m0 0A7.5 7.5 0 1 0 5.196 5.196a7.5 7.5 0 0 0 10.607 10.607Z"></path></svg><input class="form-control Searchbar_search-field__L9Oaa" type="text" id="search-field" name="search" required="" autoComplete="off" placeholder="What are you working on?" value="Recommendation"/><button class="Searchbar_clear-form__WzDSJ" type="button"><svg xmlns="http://www.w3.org/2000/svg" fill="none" viewBox="0 0 24 24" stroke-width="1.5" stroke="currentColor" aria-hidden="true" data-slot="icon" height="24" width="24"><path stroke-linecap="round" stroke-linejoin="round" d="M6 18 18 6M6 6l12 12"></path></svg></button><button class="Searchbar_filter-icon-container__qAKJN" type="button" title="search by advanced filters like language/framework, computational requirement, dataset, use case, hardware, etc."><div class="Searchbar_pulse1__6sv_E"></div><img alt="Alert button" fetchpriority="high" width="512" height="512" decoding="async" data-nimg="1" class="Searchbar_filter-icon__0rBbt" style="color:transparent" srcSet="/_next/image?url=%2F_next%2Fstatic%2Fmedia%2Ffilter.cf288982.png&amp;w=640&amp;q=75 1x, /_next/image?url=%2F_next%2Fstatic%2Fmedia%2Ffilter.cf288982.png&amp;w=1080&amp;q=75 2x" src="/_next/image?url=%2F_next%2Fstatic%2Fmedia%2Ffilter.cf288982.png&amp;w=1080&amp;q=75"/></button></div></form></div><div class="Search_search-title-container__QvnOo"><h1><span class="descriptor" style="display:none">Topic:</span><b>Recommendation</b></h1><div class="wrapper Search_alert-wrapper__mJrm4"><button class="AlertButton_alert-btn__pC8cK" title="Get latest alerts for these search results"><img alt="Alert button" id="alert_btn" loading="lazy" width="512" height="512" decoding="async" data-nimg="1" class="alert-btn-image " style="color:transparent" srcSet="/_next/image?url=%2F_next%2Fstatic%2Fmedia%2Falert_light_mode_icon.b8fca154.png&amp;w=640&amp;q=75 1x, /_next/image?url=%2F_next%2Fstatic%2Fmedia%2Falert_light_mode_icon.b8fca154.png&amp;w=1080&amp;q=75 2x" src="/_next/image?url=%2F_next%2Fstatic%2Fmedia%2Falert_light_mode_icon.b8fca154.png&amp;w=1080&amp;q=75"/></button><svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 106 34" style="margin-left:9px"><g class="sparkles"><path style="animation:sparkle 2s 0s infinite ease-in-out" d="M15.5740361 -10.33344622s1.1875777-6.20179466 2.24320232 0c0 0 5.9378885 1.05562462 0 2.11124925 0 0-1.05562463 6.33374774-2.24320233 0-3.5627331-.6597654-3.29882695-1.31953078 0-2.11124925z"></path><path style="animation:sparkle 1.5s 0.9s infinite ease-in-out" d="M33.5173993 75.97263826s1.03464615-5.40315215 1.95433162 0c0 0 5.17323078.91968547 0 1.83937095 0 0-.91968547 5.51811283-1.95433162 0-3.10393847-.57480342-2.8740171-1.14960684 0-1.83937095z"></path><path style="animation:sparkle 1.7s 0.4s infinite ease-in-out" d="M69.03038108 1.71240809s.73779281-3.852918 1.39360864 0c0 0 3.68896404.65581583 0 1.31163166 0 0-.65581583 3.93489497-1.39360864 0-2.21337842-.4098849-2.04942447-.81976979 0-1.31163166z"></path></g></svg></div><br/></div><p class="Search_topic-blurb-content__b9CTE"><span class="descriptor" style="display:none">What is Recommendation? </span>Recommendation is the task of providing personalized suggestions to users based on their preferences and behavior.</p><h3 class="descriptor" style="display:none">Papers and Code</h3><div data-testid="toggle-search-bar" id="Search_toggle-search-bar__PbOLK"><span></span><div style="position:relative;display:inline-block;text-align:left;opacity:1;direction:ltr;border-radius:11px;-webkit-transition:opacity 0.25s;-moz-transition:opacity 0.25s;transition:opacity 0.25s;touch-action:none;-webkit-tap-highlight-color:rgba(0, 0, 0, 0);-webkit-user-select:none;-moz-user-select:none;-ms-user-select:none;user-select:none"><div class="react-switch-bg" style="height:22px;width:45px;margin:0;position:relative;background:#888888;border-radius:11px;cursor:pointer;-webkit-transition:background 0.25s;-moz-transition:background 0.25s;transition:background 0.25s"></div><div class="react-switch-handle" style="height:15px;width:15px;background:#ffffff;display:inline-block;cursor:pointer;border-radius:50%;position:absolute;transform:translateX(3.5px);top:3.5px;outline:0;border:0;-webkit-transition:background-color 0.25s, transform 0.25s, box-shadow 0.15s;-moz-transition:background-color 0.25s, transform 0.25s, box-shadow 0.15s;transition:background-color 0.25s, transform 0.25s, box-shadow 0.15s"></div><input type="checkbox" role="switch" aria-checked="false" style="border:0;clip:rect(0 0 0 0);height:1px;margin:-1px;overflow:hidden;padding:0;position:absolute;width:1px" aria-label="Search with code"/></div></div><div><section data-testid="paper-details-container" class="Search_paper-details-container__Dou2Q"><h2 class="Search_paper-heading__bq58c"><a data-testid="paper-result-title" href="/paper/rec-r1-bridging-generative-large-language"><strong>Rec-R1: Bridging Generative Large Language Models and User-Centric Recommendation Systems via Reinforcement Learning</strong></a></h2><div class="Search_buttons-container__WWw_l"><a href="#" target="_blank" id="request-code-2503.24289" data-testid="view-code-button" class="Search_view-code-link__xOgGF"><button type="button" class="btn Search_view-button__D5D2K Search_buttons-spacing__iB2NS Search_black-button__O7oac Search_view-code-button__8Dk6Z"><svg role="img" height="14" width="24" viewBox="0 0 24 24" xmlns="http://www.w3.org/2000/svg" fill="#fff"><title>Github Icon</title><path d="M12 .297c-6.63 0-12 5.373-12 12 0 5.303 3.438 9.8 8.205 11.385.6.113.82-.258.82-.577 0-.285-.01-1.04-.015-2.04-3.338.724-4.042-1.61-4.042-1.61C4.422 18.07 3.633 17.7 3.633 17.7c-1.087-.744.084-.729.084-.729 1.205.084 1.838 1.236 1.838 1.236 1.07 1.835 2.809 1.305 3.495.998.108-.776.417-1.305.76-1.605-2.665-.3-5.466-1.332-5.466-5.93 0-1.31.465-2.38 1.235-3.22-.135-.303-.54-1.523.105-3.176 0 0 1.005-.322 3.3 1.23.96-.267 1.98-.399 3-.405 1.02.006 2.04.138 3 .405 2.28-1.552 3.285-1.23 3.285-1.23.645 1.653.24 2.873.12 3.176.765.84 1.23 1.91 1.23 3.22 0 4.61-2.805 5.625-5.475 5.92.42.36.81 1.096.81 2.22 0 1.606-.015 2.896-.015 3.286 0 .315.21.69.825.57C20.565 22.092 24 17.592 24 12.297c0-6.627-5.373-12-12-12"></path></svg>Request Code</button></a><button type="button" class="Search_buttons-spacing__iB2NS Search_related-code-btn__F5B3X" data-testid="related-code-button"><span class="descriptor" style="display:none">Code for Similar Papers:</span><img alt="Code for Similar Papers" title="View code for similar papers" loading="lazy" width="37" height="35" decoding="async" data-nimg="1" style="color:transparent" srcSet="/_next/image?url=%2F_next%2Fstatic%2Fmedia%2Frelated_icon_transparent.98f57b13.png&amp;w=48&amp;q=75 1x, /_next/image?url=%2F_next%2Fstatic%2Fmedia%2Frelated_icon_transparent.98f57b13.png&amp;w=96&amp;q=75 2x" src="/_next/image?url=%2F_next%2Fstatic%2Fmedia%2Frelated_icon_transparent.98f57b13.png&amp;w=96&amp;q=75"/></button><a class="Search_buttons-spacing__iB2NS Search_add-code-button__GKwQr" target="_blank" href="/add_code?title=Rec-R1: Bridging Generative Large Language Models and User-Centric Recommendation Systems via Reinforcement Learning&amp;paper_url=http://arxiv.org/abs/2503.24289" rel="nofollow"><img alt="Add code" title="Contribute your code for this paper to the community" loading="lazy" width="36" height="36" decoding="async" data-nimg="1" style="color:transparent" srcSet="/_next/image?url=%2F_next%2Fstatic%2Fmedia%2Faddcode_white.6afb879f.png&amp;w=48&amp;q=75 1x, /_next/image?url=%2F_next%2Fstatic%2Fmedia%2Faddcode_white.6afb879f.png&amp;w=96&amp;q=75 2x" src="/_next/image?url=%2F_next%2Fstatic%2Fmedia%2Faddcode_white.6afb879f.png&amp;w=96&amp;q=75"/></a><div class="wrapper Search_buttons-spacing__iB2NS BookmarkButton_bookmark-wrapper__xJaOg"><button title="Bookmark this paper"><img alt="Bookmark button" id="bookmark-btn" loading="lazy" width="388" height="512" decoding="async" data-nimg="1" class="BookmarkButton_bookmark-btn-image__gkInJ" style="color:transparent" srcSet="/_next/image?url=%2F_next%2Fstatic%2Fmedia%2Fbookmark_outline.3a3e1c2c.png&amp;w=640&amp;q=75 1x, /_next/image?url=%2F_next%2Fstatic%2Fmedia%2Fbookmark_outline.3a3e1c2c.png&amp;w=828&amp;q=75 2x" src="/_next/image?url=%2F_next%2Fstatic%2Fmedia%2Fbookmark_outline.3a3e1c2c.png&amp;w=828&amp;q=75"/></button></div><div class="wrapper Search_buttons-spacing__iB2NS"><button class="AlertButton_alert-btn__pC8cK" title="Get alerts when new code is available for this paper"><img alt="Alert button" id="alert_btn" loading="lazy" width="512" height="512" decoding="async" data-nimg="1" class="alert-btn-image " style="color:transparent" srcSet="/_next/image?url=%2F_next%2Fstatic%2Fmedia%2Falert_light_mode_icon.b8fca154.png&amp;w=640&amp;q=75 1x, /_next/image?url=%2F_next%2Fstatic%2Fmedia%2Falert_light_mode_icon.b8fca154.png&amp;w=1080&amp;q=75 2x" src="/_next/image?url=%2F_next%2Fstatic%2Fmedia%2Falert_light_mode_icon.b8fca154.png&amp;w=1080&amp;q=75"/></button><svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 106 34" style="margin-left:9px"><g class="sparkles"><path style="animation:sparkle 2s 0s infinite ease-in-out" d="M15.5740361 -10.33344622s1.1875777-6.20179466 2.24320232 0c0 0 5.9378885 1.05562462 0 2.11124925 0 0-1.05562463 6.33374774-2.24320233 0-3.5627331-.6597654-3.29882695-1.31953078 0-2.11124925z"></path><path style="animation:sparkle 1.5s 0.9s infinite ease-in-out" d="M33.5173993 75.97263826s1.03464615-5.40315215 1.95433162 0c0 0 5.17323078.91968547 0 1.83937095 0 0-.91968547 5.51811283-1.95433162 0-3.10393847-.57480342-2.8740171-1.14960684 0-1.83937095z"></path><path style="animation:sparkle 1.7s 0.4s infinite ease-in-out" d="M69.03038108 1.71240809s.73779281-3.852918 1.39360864 0c0 0 3.68896404.65581583 0 1.31163166 0 0-.65581583 3.93489497-1.39360864 0-2.21337842-.4098849-2.04942447-.81976979 0-1.31163166z"></path></g></svg></div></div><span class="Search_publication-date__mLvO2">Mar 31, 2025<br/></span><div class="AuthorLinks_authors-container__fAwXT"><span class="descriptor" style="display:none">Authors:</span><span><a data-testid="paper-result-author" href="/author/Jiacheng%20Lin">Jiacheng Lin</a>, </span><span><a data-testid="paper-result-author" href="/author/Tian%20Wang">Tian Wang</a>, </span><span><a data-testid="paper-result-author" href="/author/Kun%20Qian">Kun Qian</a></span></div><div class="Search_paper-detail-page-images-container__FPeuN"></div><p class="Search_paper-content__1CSu5 text-with-links"><span class="descriptor" style="display:none">Abstract:</span>We propose Rec-R1, a general reinforcement learning framework that bridges large language models (LLMs) with recommendation systems through closed-loop optimization. Unlike prompting and supervised fine-tuning (SFT), Rec-R1 directly optimizes LLM generation using feedback from a fixed black-box recommendation model, without relying on synthetic SFT data from proprietary models such as GPT-4o. This avoids the substantial cost and effort required for data distillation. To verify the effectiveness of Rec-R1, we evaluate it on two representative tasks: product search and sequential recommendation. Experimental results demonstrate that Rec-R1 not only consistently outperforms prompting- and SFT-based methods, but also achieves significant gains over strong discriminative baselines, even when used with simple retrievers such as BM25. Moreover, Rec-R1 preserves the general-purpose capabilities of the LLM, unlike SFT, which often impairs instruction-following and reasoning. These findings suggest Rec-R1 as a promising foundation for continual task-specific adaptation without catastrophic forgetting.<br/></p><div class="text-with-links"><span></span><span></span></div><div class="Search_search-result-provider__uWcak">Via<img alt="arxiv icon" loading="lazy" width="56" height="25" decoding="async" data-nimg="1" class="Search_arxiv-icon__SXHe4" style="color:transparent" srcSet="/_next/image?url=%2F_next%2Fstatic%2Fmedia%2Farxiv.41e50dc5.png&amp;w=64&amp;q=75 1x, /_next/image?url=%2F_next%2Fstatic%2Fmedia%2Farxiv.41e50dc5.png&amp;w=128&amp;q=75 2x" src="/_next/image?url=%2F_next%2Fstatic%2Fmedia%2Farxiv.41e50dc5.png&amp;w=128&amp;q=75"/></div><div class="Search_paper-link__nVhf_"><svg role="img" height="20" width="24" viewBox="0 0 24 24" xmlns="http://www.w3.org/2000/svg" style="margin-right:5px"><title>Github Icon</title><path d="M12 .297c-6.63 0-12 5.373-12 12 0 5.303 3.438 9.8 8.205 11.385.6.113.82-.258.82-.577 0-.285-.01-1.04-.015-2.04-3.338.724-4.042-1.61-4.042-1.61C4.422 18.07 3.633 17.7 3.633 17.7c-1.087-.744.084-.729.084-.729 1.205.084 1.838 1.236 1.838 1.236 1.07 1.835 2.809 1.305 3.495.998.108-.776.417-1.305.76-1.605-2.665-.3-5.466-1.332-5.466-5.93 0-1.31.465-2.38 1.235-3.22-.135-.303-.54-1.523.105-3.176 0 0 1.005-.322 3.3 1.23.96-.267 1.98-.399 3-.405 1.02.006 2.04.138 3 .405 2.28-1.552 3.285-1.23 3.285-1.23.645 1.653.24 2.873.12 3.176.765.84 1.23 1.91 1.23 3.22 0 4.61-2.805 5.625-5.475 5.92.42.36.81 1.096.81 2.22 0 1.606-.015 2.896-.015 3.286 0 .315.21.69.825.57C20.565 22.092 24 17.592 24 12.297c0-6.627-5.373-12-12-12"></path></svg><svg xmlns="http://www.w3.org/2000/svg" fill="none" viewBox="0 0 24 24" stroke-width="1.5" stroke="currentColor" aria-hidden="true" data-slot="icon" width="22" style="margin-right:10px;margin-top:2px"><path stroke-linecap="round" stroke-linejoin="round" d="M12 6.042A8.967 8.967 0 0 0 6 3.75c-1.052 0-2.062.18-3 .512v14.25A8.987 8.987 0 0 1 6 18c2.305 0 4.408.867 6 2.292m0-14.25a8.966 8.966 0 0 1 6-2.292c1.052 0 2.062.18 3 .512v14.25A8.987 8.987 0 0 0 18 18a8.967 8.967 0 0 0-6 2.292m0-14.25v14.25"></path></svg><a data-testid="paper-result-access-link" href="/paper/rec-r1-bridging-generative-large-language">Access Paper or Ask Questions</a></div></section><div class="Search_seperator-line__4FidS"></div></div><div><section data-testid="paper-details-container" class="Search_paper-details-container__Dou2Q"><h2 class="Search_paper-heading__bq58c"><a data-testid="paper-result-title" href="/paper/text2tracks-prompt-based-music-recommendation"><strong>Text2Tracks: Prompt-based Music Recommendation via Generative Retrieval</strong></a></h2><div class="Search_buttons-container__WWw_l"><a href="#" target="_blank" id="request-code-2503.24193" data-testid="view-code-button" class="Search_view-code-link__xOgGF"><button type="button" class="btn Search_view-button__D5D2K Search_buttons-spacing__iB2NS Search_black-button__O7oac Search_view-code-button__8Dk6Z"><svg role="img" height="14" width="24" viewBox="0 0 24 24" xmlns="http://www.w3.org/2000/svg" fill="#fff"><title>Github Icon</title><path d="M12 .297c-6.63 0-12 5.373-12 12 0 5.303 3.438 9.8 8.205 11.385.6.113.82-.258.82-.577 0-.285-.01-1.04-.015-2.04-3.338.724-4.042-1.61-4.042-1.61C4.422 18.07 3.633 17.7 3.633 17.7c-1.087-.744.084-.729.084-.729 1.205.084 1.838 1.236 1.838 1.236 1.07 1.835 2.809 1.305 3.495.998.108-.776.417-1.305.76-1.605-2.665-.3-5.466-1.332-5.466-5.93 0-1.31.465-2.38 1.235-3.22-.135-.303-.54-1.523.105-3.176 0 0 1.005-.322 3.3 1.23.96-.267 1.98-.399 3-.405 1.02.006 2.04.138 3 .405 2.28-1.552 3.285-1.23 3.285-1.23.645 1.653.24 2.873.12 3.176.765.84 1.23 1.91 1.23 3.22 0 4.61-2.805 5.625-5.475 5.92.42.36.81 1.096.81 2.22 0 1.606-.015 2.896-.015 3.286 0 .315.21.69.825.57C20.565 22.092 24 17.592 24 12.297c0-6.627-5.373-12-12-12"></path></svg>Request Code</button></a><button type="button" class="Search_buttons-spacing__iB2NS Search_related-code-btn__F5B3X" data-testid="related-code-button"><span class="descriptor" style="display:none">Code for Similar Papers:</span><img alt="Code for Similar Papers" title="View code for similar papers" loading="lazy" width="37" height="35" decoding="async" data-nimg="1" style="color:transparent" srcSet="/_next/image?url=%2F_next%2Fstatic%2Fmedia%2Frelated_icon_transparent.98f57b13.png&amp;w=48&amp;q=75 1x, /_next/image?url=%2F_next%2Fstatic%2Fmedia%2Frelated_icon_transparent.98f57b13.png&amp;w=96&amp;q=75 2x" src="/_next/image?url=%2F_next%2Fstatic%2Fmedia%2Frelated_icon_transparent.98f57b13.png&amp;w=96&amp;q=75"/></button><a class="Search_buttons-spacing__iB2NS Search_add-code-button__GKwQr" target="_blank" href="/add_code?title=Text2Tracks: Prompt-based Music Recommendation via Generative Retrieval&amp;paper_url=http://arxiv.org/abs/2503.24193" rel="nofollow"><img alt="Add code" title="Contribute your code for this paper to the community" loading="lazy" width="36" height="36" decoding="async" data-nimg="1" style="color:transparent" srcSet="/_next/image?url=%2F_next%2Fstatic%2Fmedia%2Faddcode_white.6afb879f.png&amp;w=48&amp;q=75 1x, /_next/image?url=%2F_next%2Fstatic%2Fmedia%2Faddcode_white.6afb879f.png&amp;w=96&amp;q=75 2x" src="/_next/image?url=%2F_next%2Fstatic%2Fmedia%2Faddcode_white.6afb879f.png&amp;w=96&amp;q=75"/></a><div class="wrapper Search_buttons-spacing__iB2NS BookmarkButton_bookmark-wrapper__xJaOg"><button title="Bookmark this paper"><img alt="Bookmark button" id="bookmark-btn" loading="lazy" width="388" height="512" decoding="async" data-nimg="1" class="BookmarkButton_bookmark-btn-image__gkInJ" style="color:transparent" srcSet="/_next/image?url=%2F_next%2Fstatic%2Fmedia%2Fbookmark_outline.3a3e1c2c.png&amp;w=640&amp;q=75 1x, /_next/image?url=%2F_next%2Fstatic%2Fmedia%2Fbookmark_outline.3a3e1c2c.png&amp;w=828&amp;q=75 2x" src="/_next/image?url=%2F_next%2Fstatic%2Fmedia%2Fbookmark_outline.3a3e1c2c.png&amp;w=828&amp;q=75"/></button></div><div class="wrapper Search_buttons-spacing__iB2NS"><button class="AlertButton_alert-btn__pC8cK" title="Get alerts when new code is available for this paper"><img alt="Alert button" id="alert_btn" loading="lazy" width="512" height="512" decoding="async" data-nimg="1" class="alert-btn-image " style="color:transparent" srcSet="/_next/image?url=%2F_next%2Fstatic%2Fmedia%2Falert_light_mode_icon.b8fca154.png&amp;w=640&amp;q=75 1x, /_next/image?url=%2F_next%2Fstatic%2Fmedia%2Falert_light_mode_icon.b8fca154.png&amp;w=1080&amp;q=75 2x" src="/_next/image?url=%2F_next%2Fstatic%2Fmedia%2Falert_light_mode_icon.b8fca154.png&amp;w=1080&amp;q=75"/></button><svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 106 34" style="margin-left:9px"><g class="sparkles"><path style="animation:sparkle 2s 0s infinite ease-in-out" d="M15.5740361 -10.33344622s1.1875777-6.20179466 2.24320232 0c0 0 5.9378885 1.05562462 0 2.11124925 0 0-1.05562463 6.33374774-2.24320233 0-3.5627331-.6597654-3.29882695-1.31953078 0-2.11124925z"></path><path style="animation:sparkle 1.5s 0.9s infinite ease-in-out" d="M33.5173993 75.97263826s1.03464615-5.40315215 1.95433162 0c0 0 5.17323078.91968547 0 1.83937095 0 0-.91968547 5.51811283-1.95433162 0-3.10393847-.57480342-2.8740171-1.14960684 0-1.83937095z"></path><path style="animation:sparkle 1.7s 0.4s infinite ease-in-out" d="M69.03038108 1.71240809s.73779281-3.852918 1.39360864 0c0 0 3.68896404.65581583 0 1.31163166 0 0-.65581583 3.93489497-1.39360864 0-2.21337842-.4098849-2.04942447-.81976979 0-1.31163166z"></path></g></svg></div></div><span class="Search_publication-date__mLvO2">Mar 31, 2025<br/></span><div class="AuthorLinks_authors-container__fAwXT"><span class="descriptor" style="display:none">Authors:</span><span><a data-testid="paper-result-author" href="/author/Enrico%20Palumbo">Enrico Palumbo</a>, </span><span><a data-testid="paper-result-author" href="/author/Gustavo%20Penha">Gustavo Penha</a>, </span><span><a data-testid="paper-result-author" href="/author/Andreas%20Damianou">Andreas Damianou</a>, </span><span><a data-testid="paper-result-author" href="/author/Jos%C3%A9%20Luis%20Redondo%20Garc%C3%ADa">Jos茅 Luis Redondo Garc铆a</a>, </span><span><a data-testid="paper-result-author" href="/author/Timothy%20Christopher%20Heath">Timothy Christopher Heath</a>, </span><span><a data-testid="paper-result-author" href="/author/Alice%20Wang">Alice Wang</a>, </span><span><a data-testid="paper-result-author" href="/author/Hugues%20Bouchard">Hugues Bouchard</a>, </span><span><a data-testid="paper-result-author" href="/author/Mounia%20Lalmas">Mounia Lalmas</a></span></div><div class="Search_paper-detail-page-images-container__FPeuN"></div><p class="Search_paper-content__1CSu5 text-with-links"><span class="descriptor" style="display:none">Abstract:</span>In recent years, Large Language Models (LLMs) have enabled users to provide highly specific music recommendation requests using natural language prompts (e.g. &quot;Can you recommend some old classics for slow dancing?&quot;). In this setup, the recommended tracks are predicted by the LLM in an autoregressive way, i.e. the LLM generates the track titles one token at a time. While intuitive, this approach has several limitation. First, it is based on a general purpose tokenization that is optimized for words rather than for track titles. Second, it necessitates an additional entity resolution layer that matches the track title to the actual track identifier. Third, the number of decoding steps scales linearly with the length of the track title, slowing down inference. In this paper, we propose to address the task of prompt-based music recommendation as a generative retrieval task. Within this setting, we introduce novel, effective, and efficient representations of track identifiers that significantly outperform commonly used strategies. We introduce Text2Tracks, a generative retrieval model that learns a mapping from a user&#x27;s music recommendation prompt to the relevant track IDs directly. Through an offline evaluation on a dataset of playlists with language inputs, we find that (1) the strategy to create IDs for music tracks is the most important factor for the effectiveness of Text2Tracks and semantic IDs significantly outperform commonly used strategies that rely on song titles as identifiers (2) provided with the right choice of track identifiers, Text2Tracks outperforms sparse and dense retrieval solutions trained to retrieve tracks from language prompts.<br/></p><div class="text-with-links"><span></span><span></span></div><div class="Search_search-result-provider__uWcak">Via<img alt="arxiv icon" loading="lazy" width="56" height="25" decoding="async" data-nimg="1" class="Search_arxiv-icon__SXHe4" style="color:transparent" srcSet="/_next/image?url=%2F_next%2Fstatic%2Fmedia%2Farxiv.41e50dc5.png&amp;w=64&amp;q=75 1x, /_next/image?url=%2F_next%2Fstatic%2Fmedia%2Farxiv.41e50dc5.png&amp;w=128&amp;q=75 2x" src="/_next/image?url=%2F_next%2Fstatic%2Fmedia%2Farxiv.41e50dc5.png&amp;w=128&amp;q=75"/></div><div class="Search_paper-link__nVhf_"><svg role="img" height="20" width="24" viewBox="0 0 24 24" xmlns="http://www.w3.org/2000/svg" style="margin-right:5px"><title>Github Icon</title><path d="M12 .297c-6.63 0-12 5.373-12 12 0 5.303 3.438 9.8 8.205 11.385.6.113.82-.258.82-.577 0-.285-.01-1.04-.015-2.04-3.338.724-4.042-1.61-4.042-1.61C4.422 18.07 3.633 17.7 3.633 17.7c-1.087-.744.084-.729.084-.729 1.205.084 1.838 1.236 1.838 1.236 1.07 1.835 2.809 1.305 3.495.998.108-.776.417-1.305.76-1.605-2.665-.3-5.466-1.332-5.466-5.93 0-1.31.465-2.38 1.235-3.22-.135-.303-.54-1.523.105-3.176 0 0 1.005-.322 3.3 1.23.96-.267 1.98-.399 3-.405 1.02.006 2.04.138 3 .405 2.28-1.552 3.285-1.23 3.285-1.23.645 1.653.24 2.873.12 3.176.765.84 1.23 1.91 1.23 3.22 0 4.61-2.805 5.625-5.475 5.92.42.36.81 1.096.81 2.22 0 1.606-.015 2.896-.015 3.286 0 .315.21.69.825.57C20.565 22.092 24 17.592 24 12.297c0-6.627-5.373-12-12-12"></path></svg><svg xmlns="http://www.w3.org/2000/svg" fill="none" viewBox="0 0 24 24" stroke-width="1.5" stroke="currentColor" aria-hidden="true" data-slot="icon" width="22" style="margin-right:10px;margin-top:2px"><path stroke-linecap="round" stroke-linejoin="round" d="M12 6.042A8.967 8.967 0 0 0 6 3.75c-1.052 0-2.062.18-3 .512v14.25A8.987 8.987 0 0 1 6 18c2.305 0 4.408.867 6 2.292m0-14.25a8.966 8.966 0 0 1 6-2.292c1.052 0 2.062.18 3 .512v14.25A8.987 8.987 0 0 0 18 18a8.967 8.967 0 0 0-6 2.292m0-14.25v14.25"></path></svg><a data-testid="paper-result-access-link" href="/paper/text2tracks-prompt-based-music-recommendation">Access Paper or Ask Questions</a></div></section><div class="Search_seperator-line__4FidS"></div></div><section data-hydration-on-demand="true"><div><section data-testid="paper-details-container" class="Search_paper-details-container__Dou2Q"><h2 class="Search_paper-heading__bq58c"><a data-testid="paper-result-title" href="/paper/predicting-targeted-therapy-resistance-in-non"><strong>Predicting Targeted Therapy Resistance in Non-Small Cell Lung Cancer Using Multimodal Machine Learning</strong></a></h2><div class="Search_buttons-container__WWw_l"><a href="#" target="_blank" id="request-code-2503.24165" data-testid="view-code-button" class="Search_view-code-link__xOgGF"><button type="button" class="btn Search_view-button__D5D2K Search_buttons-spacing__iB2NS Search_black-button__O7oac Search_view-code-button__8Dk6Z"><svg role="img" height="14" width="24" viewBox="0 0 24 24" xmlns="http://www.w3.org/2000/svg" fill="#fff"><title>Github Icon</title><path d="M12 .297c-6.63 0-12 5.373-12 12 0 5.303 3.438 9.8 8.205 11.385.6.113.82-.258.82-.577 0-.285-.01-1.04-.015-2.04-3.338.724-4.042-1.61-4.042-1.61C4.422 18.07 3.633 17.7 3.633 17.7c-1.087-.744.084-.729.084-.729 1.205.084 1.838 1.236 1.838 1.236 1.07 1.835 2.809 1.305 3.495.998.108-.776.417-1.305.76-1.605-2.665-.3-5.466-1.332-5.466-5.93 0-1.31.465-2.38 1.235-3.22-.135-.303-.54-1.523.105-3.176 0 0 1.005-.322 3.3 1.23.96-.267 1.98-.399 3-.405 1.02.006 2.04.138 3 .405 2.28-1.552 3.285-1.23 3.285-1.23.645 1.653.24 2.873.12 3.176.765.84 1.23 1.91 1.23 3.22 0 4.61-2.805 5.625-5.475 5.92.42.36.81 1.096.81 2.22 0 1.606-.015 2.896-.015 3.286 0 .315.21.69.825.57C20.565 22.092 24 17.592 24 12.297c0-6.627-5.373-12-12-12"></path></svg>Request Code</button></a><button type="button" class="Search_buttons-spacing__iB2NS Search_related-code-btn__F5B3X" data-testid="related-code-button"><span class="descriptor" style="display:none">Code for Similar Papers:</span><img alt="Code for Similar Papers" title="View code for similar papers" loading="lazy" width="37" height="35" decoding="async" data-nimg="1" style="color:transparent" srcSet="/_next/image?url=%2F_next%2Fstatic%2Fmedia%2Frelated_icon_transparent.98f57b13.png&amp;w=48&amp;q=75 1x, /_next/image?url=%2F_next%2Fstatic%2Fmedia%2Frelated_icon_transparent.98f57b13.png&amp;w=96&amp;q=75 2x" src="/_next/image?url=%2F_next%2Fstatic%2Fmedia%2Frelated_icon_transparent.98f57b13.png&amp;w=96&amp;q=75"/></button><a class="Search_buttons-spacing__iB2NS Search_add-code-button__GKwQr" target="_blank" href="/add_code?title=Predicting Targeted Therapy Resistance in Non-Small Cell Lung Cancer Using Multimodal Machine Learning&amp;paper_url=http://arxiv.org/abs/2503.24165" rel="nofollow"><img alt="Add code" title="Contribute your code for this paper to the community" loading="lazy" width="36" height="36" decoding="async" data-nimg="1" style="color:transparent" srcSet="/_next/image?url=%2F_next%2Fstatic%2Fmedia%2Faddcode_white.6afb879f.png&amp;w=48&amp;q=75 1x, /_next/image?url=%2F_next%2Fstatic%2Fmedia%2Faddcode_white.6afb879f.png&amp;w=96&amp;q=75 2x" src="/_next/image?url=%2F_next%2Fstatic%2Fmedia%2Faddcode_white.6afb879f.png&amp;w=96&amp;q=75"/></a><div class="wrapper Search_buttons-spacing__iB2NS BookmarkButton_bookmark-wrapper__xJaOg"><button title="Bookmark this paper"><img alt="Bookmark button" id="bookmark-btn" loading="lazy" width="388" height="512" decoding="async" data-nimg="1" class="BookmarkButton_bookmark-btn-image__gkInJ" style="color:transparent" srcSet="/_next/image?url=%2F_next%2Fstatic%2Fmedia%2Fbookmark_outline.3a3e1c2c.png&amp;w=640&amp;q=75 1x, /_next/image?url=%2F_next%2Fstatic%2Fmedia%2Fbookmark_outline.3a3e1c2c.png&amp;w=828&amp;q=75 2x" src="/_next/image?url=%2F_next%2Fstatic%2Fmedia%2Fbookmark_outline.3a3e1c2c.png&amp;w=828&amp;q=75"/></button></div><div class="wrapper Search_buttons-spacing__iB2NS"><button class="AlertButton_alert-btn__pC8cK" title="Get alerts when new code is available for this paper"><img alt="Alert button" id="alert_btn" loading="lazy" width="512" height="512" decoding="async" data-nimg="1" class="alert-btn-image " style="color:transparent" srcSet="/_next/image?url=%2F_next%2Fstatic%2Fmedia%2Falert_light_mode_icon.b8fca154.png&amp;w=640&amp;q=75 1x, /_next/image?url=%2F_next%2Fstatic%2Fmedia%2Falert_light_mode_icon.b8fca154.png&amp;w=1080&amp;q=75 2x" src="/_next/image?url=%2F_next%2Fstatic%2Fmedia%2Falert_light_mode_icon.b8fca154.png&amp;w=1080&amp;q=75"/></button><svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 106 34" style="margin-left:9px"><g class="sparkles"><path style="animation:sparkle 2s 0s infinite ease-in-out" d="M15.5740361 -10.33344622s1.1875777-6.20179466 2.24320232 0c0 0 5.9378885 1.05562462 0 2.11124925 0 0-1.05562463 6.33374774-2.24320233 0-3.5627331-.6597654-3.29882695-1.31953078 0-2.11124925z"></path><path style="animation:sparkle 1.5s 0.9s infinite ease-in-out" d="M33.5173993 75.97263826s1.03464615-5.40315215 1.95433162 0c0 0 5.17323078.91968547 0 1.83937095 0 0-.91968547 5.51811283-1.95433162 0-3.10393847-.57480342-2.8740171-1.14960684 0-1.83937095z"></path><path style="animation:sparkle 1.7s 0.4s infinite ease-in-out" d="M69.03038108 1.71240809s.73779281-3.852918 1.39360864 0c0 0 3.68896404.65581583 0 1.31163166 0 0-.65581583 3.93489497-1.39360864 0-2.21337842-.4098849-2.04942447-.81976979 0-1.31163166z"></path></g></svg></div></div><span class="Search_publication-date__mLvO2">Mar 31, 2025<br/></span><div class="AuthorLinks_authors-container__fAwXT"><span class="descriptor" style="display:none">Authors:</span><span><a data-testid="paper-result-author" href="/author/Peiying%20Hua">Peiying Hua</a>, </span><span><a data-testid="paper-result-author" href="/author/Andrea%20Olofson">Andrea Olofson</a>, </span><span><a data-testid="paper-result-author" href="/author/Faraz%20Farhadi">Faraz Farhadi</a>, </span><span><a data-testid="paper-result-author" href="/author/Liesbeth%20Hondelink">Liesbeth Hondelink</a>, </span><span><a data-testid="paper-result-author" href="/author/Gregory%20Tsongalis">Gregory Tsongalis</a>, </span><span><a data-testid="paper-result-author" href="/author/Konstantin%20Dragnev">Konstantin Dragnev</a>, </span><span><a data-testid="paper-result-author" href="/author/Dagmar%20Hoegemann%20Savellano">Dagmar Hoegemann Savellano</a>, </span><span><a data-testid="paper-result-author" href="/author/Arief%20Suriawinata">Arief Suriawinata</a>, </span><span><a data-testid="paper-result-author" href="/author/Laura%20Tafe">Laura Tafe</a>, </span><span><a data-testid="paper-result-author" href="/author/Saeed%20Hassanpour">Saeed Hassanpour</a></span></div><div class="Search_paper-detail-page-images-container__FPeuN"></div><p class="Search_paper-content__1CSu5 text-with-links"><span class="descriptor" style="display:none">Abstract:</span>Lung cancer is the primary cause of cancer death globally, with non-small cell lung cancer (NSCLC) emerging as its most prevalent subtype. Among NSCLC patients, approximately 32.3% have mutations in the epidermal growth factor receptor (EGFR) gene. Osimertinib, a third-generation EGFR-tyrosine kinase inhibitor (TKI), has demonstrated remarkable efficacy in the treatment of NSCLC patients with activating and T790M resistance EGFR mutations. Despite its established efficacy, drug resistance poses a significant challenge for patients to fully benefit from osimertinib. The absence of a standard tool to accurately predict TKI resistance, including that of osimertinib, remains a critical obstacle. To bridge this gap, in this study, we developed an interpretable multimodal machine learning model designed to predict patient resistance to osimertinib among late-stage NSCLC patients with activating EGFR mutations, achieving a c-index of 0.82 on a multi-institutional dataset. This machine learning model harnesses readily available data routinely collected during patient visits and medical assessments to facilitate precision lung cancer management and informed treatment decisions. By integrating various data types such as histology images, next generation sequencing (NGS) data, demographics data, and clinical records, our multimodal model can generate well-informed recommendations. Our experiment results also demonstrated the superior performance of the multimodal model over single modality models (c-index 0.82 compared with 0.75 and 0.77), thus underscoring the benefit of combining multiple modalities in patient outcome prediction.<br/></p><div class="text-with-links"><span></span><span></span></div><div class="Search_search-result-provider__uWcak">Via<img alt="arxiv icon" loading="lazy" width="56" height="25" decoding="async" data-nimg="1" class="Search_arxiv-icon__SXHe4" style="color:transparent" srcSet="/_next/image?url=%2F_next%2Fstatic%2Fmedia%2Farxiv.41e50dc5.png&amp;w=64&amp;q=75 1x, /_next/image?url=%2F_next%2Fstatic%2Fmedia%2Farxiv.41e50dc5.png&amp;w=128&amp;q=75 2x" src="/_next/image?url=%2F_next%2Fstatic%2Fmedia%2Farxiv.41e50dc5.png&amp;w=128&amp;q=75"/></div><div class="Search_paper-link__nVhf_"><svg role="img" height="20" width="24" viewBox="0 0 24 24" xmlns="http://www.w3.org/2000/svg" style="margin-right:5px"><title>Github Icon</title><path d="M12 .297c-6.63 0-12 5.373-12 12 0 5.303 3.438 9.8 8.205 11.385.6.113.82-.258.82-.577 0-.285-.01-1.04-.015-2.04-3.338.724-4.042-1.61-4.042-1.61C4.422 18.07 3.633 17.7 3.633 17.7c-1.087-.744.084-.729.084-.729 1.205.084 1.838 1.236 1.838 1.236 1.07 1.835 2.809 1.305 3.495.998.108-.776.417-1.305.76-1.605-2.665-.3-5.466-1.332-5.466-5.93 0-1.31.465-2.38 1.235-3.22-.135-.303-.54-1.523.105-3.176 0 0 1.005-.322 3.3 1.23.96-.267 1.98-.399 3-.405 1.02.006 2.04.138 3 .405 2.28-1.552 3.285-1.23 3.285-1.23.645 1.653.24 2.873.12 3.176.765.84 1.23 1.91 1.23 3.22 0 4.61-2.805 5.625-5.475 5.92.42.36.81 1.096.81 2.22 0 1.606-.015 2.896-.015 3.286 0 .315.21.69.825.57C20.565 22.092 24 17.592 24 12.297c0-6.627-5.373-12-12-12"></path></svg><svg xmlns="http://www.w3.org/2000/svg" fill="none" viewBox="0 0 24 24" stroke-width="1.5" stroke="currentColor" aria-hidden="true" data-slot="icon" width="22" style="margin-right:10px;margin-top:2px"><path stroke-linecap="round" stroke-linejoin="round" d="M12 6.042A8.967 8.967 0 0 0 6 3.75c-1.052 0-2.062.18-3 .512v14.25A8.987 8.987 0 0 1 6 18c2.305 0 4.408.867 6 2.292m0-14.25a8.966 8.966 0 0 1 6-2.292c1.052 0 2.062.18 3 .512v14.25A8.987 8.987 0 0 0 18 18a8.967 8.967 0 0 0-6 2.292m0-14.25v14.25"></path></svg><a data-testid="paper-result-access-link" href="/paper/predicting-targeted-therapy-resistance-in-non">Access Paper or Ask Questions</a></div></section><div class="Search_seperator-line__4FidS"></div></div><div><section data-testid="paper-details-container" class="Search_paper-details-container__Dou2Q"><h2 class="Search_paper-heading__bq58c"><a data-testid="paper-result-title" href="/paper/crossing-boundaries-leveraging-semantic"><strong>Crossing Boundaries: Leveraging Semantic Divergences to Explore Cultural Novelty in Cooking Recipes</strong></a></h2><div class="Search_buttons-container__WWw_l"><a href="#" target="_blank" id="request-code-2503.24027" data-testid="view-code-button" class="Search_view-code-link__xOgGF"><button type="button" class="btn Search_view-button__D5D2K Search_buttons-spacing__iB2NS Search_black-button__O7oac Search_view-code-button__8Dk6Z"><svg role="img" height="14" width="24" viewBox="0 0 24 24" xmlns="http://www.w3.org/2000/svg" fill="#fff"><title>Github Icon</title><path d="M12 .297c-6.63 0-12 5.373-12 12 0 5.303 3.438 9.8 8.205 11.385.6.113.82-.258.82-.577 0-.285-.01-1.04-.015-2.04-3.338.724-4.042-1.61-4.042-1.61C4.422 18.07 3.633 17.7 3.633 17.7c-1.087-.744.084-.729.084-.729 1.205.084 1.838 1.236 1.838 1.236 1.07 1.835 2.809 1.305 3.495.998.108-.776.417-1.305.76-1.605-2.665-.3-5.466-1.332-5.466-5.93 0-1.31.465-2.38 1.235-3.22-.135-.303-.54-1.523.105-3.176 0 0 1.005-.322 3.3 1.23.96-.267 1.98-.399 3-.405 1.02.006 2.04.138 3 .405 2.28-1.552 3.285-1.23 3.285-1.23.645 1.653.24 2.873.12 3.176.765.84 1.23 1.91 1.23 3.22 0 4.61-2.805 5.625-5.475 5.92.42.36.81 1.096.81 2.22 0 1.606-.015 2.896-.015 3.286 0 .315.21.69.825.57C20.565 22.092 24 17.592 24 12.297c0-6.627-5.373-12-12-12"></path></svg>Request Code</button></a><button type="button" class="Search_buttons-spacing__iB2NS Search_related-code-btn__F5B3X" data-testid="related-code-button"><span class="descriptor" style="display:none">Code for Similar Papers:</span><img alt="Code for Similar Papers" title="View code for similar papers" loading="lazy" width="37" height="35" decoding="async" data-nimg="1" style="color:transparent" srcSet="/_next/image?url=%2F_next%2Fstatic%2Fmedia%2Frelated_icon_transparent.98f57b13.png&amp;w=48&amp;q=75 1x, /_next/image?url=%2F_next%2Fstatic%2Fmedia%2Frelated_icon_transparent.98f57b13.png&amp;w=96&amp;q=75 2x" src="/_next/image?url=%2F_next%2Fstatic%2Fmedia%2Frelated_icon_transparent.98f57b13.png&amp;w=96&amp;q=75"/></button><a class="Search_buttons-spacing__iB2NS Search_add-code-button__GKwQr" target="_blank" href="/add_code?title=Crossing Boundaries: Leveraging Semantic Divergences to Explore Cultural Novelty in Cooking Recipes&amp;paper_url=http://arxiv.org/abs/2503.24027" rel="nofollow"><img alt="Add code" title="Contribute your code for this paper to the community" loading="lazy" width="36" height="36" decoding="async" data-nimg="1" style="color:transparent" srcSet="/_next/image?url=%2F_next%2Fstatic%2Fmedia%2Faddcode_white.6afb879f.png&amp;w=48&amp;q=75 1x, /_next/image?url=%2F_next%2Fstatic%2Fmedia%2Faddcode_white.6afb879f.png&amp;w=96&amp;q=75 2x" src="/_next/image?url=%2F_next%2Fstatic%2Fmedia%2Faddcode_white.6afb879f.png&amp;w=96&amp;q=75"/></a><div class="wrapper Search_buttons-spacing__iB2NS BookmarkButton_bookmark-wrapper__xJaOg"><button title="Bookmark this paper"><img alt="Bookmark button" id="bookmark-btn" loading="lazy" width="388" height="512" decoding="async" data-nimg="1" class="BookmarkButton_bookmark-btn-image__gkInJ" style="color:transparent" srcSet="/_next/image?url=%2F_next%2Fstatic%2Fmedia%2Fbookmark_outline.3a3e1c2c.png&amp;w=640&amp;q=75 1x, /_next/image?url=%2F_next%2Fstatic%2Fmedia%2Fbookmark_outline.3a3e1c2c.png&amp;w=828&amp;q=75 2x" src="/_next/image?url=%2F_next%2Fstatic%2Fmedia%2Fbookmark_outline.3a3e1c2c.png&amp;w=828&amp;q=75"/></button></div><div class="wrapper Search_buttons-spacing__iB2NS"><button class="AlertButton_alert-btn__pC8cK" title="Get alerts when new code is available for this paper"><img alt="Alert button" id="alert_btn" loading="lazy" width="512" height="512" decoding="async" data-nimg="1" class="alert-btn-image " style="color:transparent" srcSet="/_next/image?url=%2F_next%2Fstatic%2Fmedia%2Falert_light_mode_icon.b8fca154.png&amp;w=640&amp;q=75 1x, /_next/image?url=%2F_next%2Fstatic%2Fmedia%2Falert_light_mode_icon.b8fca154.png&amp;w=1080&amp;q=75 2x" src="/_next/image?url=%2F_next%2Fstatic%2Fmedia%2Falert_light_mode_icon.b8fca154.png&amp;w=1080&amp;q=75"/></button><svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 106 34" style="margin-left:9px"><g class="sparkles"><path style="animation:sparkle 2s 0s infinite ease-in-out" d="M15.5740361 -10.33344622s1.1875777-6.20179466 2.24320232 0c0 0 5.9378885 1.05562462 0 2.11124925 0 0-1.05562463 6.33374774-2.24320233 0-3.5627331-.6597654-3.29882695-1.31953078 0-2.11124925z"></path><path style="animation:sparkle 1.5s 0.9s infinite ease-in-out" d="M33.5173993 75.97263826s1.03464615-5.40315215 1.95433162 0c0 0 5.17323078.91968547 0 1.83937095 0 0-.91968547 5.51811283-1.95433162 0-3.10393847-.57480342-2.8740171-1.14960684 0-1.83937095z"></path><path style="animation:sparkle 1.7s 0.4s infinite ease-in-out" d="M69.03038108 1.71240809s.73779281-3.852918 1.39360864 0c0 0 3.68896404.65581583 0 1.31163166 0 0-.65581583 3.93489497-1.39360864 0-2.21337842-.4098849-2.04942447-.81976979 0-1.31163166z"></path></g></svg></div></div><span class="Search_publication-date__mLvO2">Mar 31, 2025<br/></span><div class="AuthorLinks_authors-container__fAwXT"><span class="descriptor" style="display:none">Authors:</span><span><a data-testid="paper-result-author" href="/author/Florian%20Carichon">Florian Carichon</a>, </span><span><a data-testid="paper-result-author" href="/author/Romain%20Rampa">Romain Rampa</a>, </span><span><a data-testid="paper-result-author" href="/author/Golnoosh%20Farnadi">Golnoosh Farnadi</a></span></div><div class="Search_paper-detail-page-images-container__FPeuN"></div><p class="Search_paper-content__1CSu5 text-with-links"><span class="descriptor" style="display:none">Abstract:</span>Novelty modeling and detection is a core topic in Natural Language Processing (NLP), central to numerous tasks such as recommender systems and automatic summarization. It involves identifying pieces of text that deviate in some way from previously known information. However, novelty is also a crucial determinant of the unique perception of relevance and quality of an experience, as it rests upon each individual&#x27;s understanding of the world. Social factors, particularly cultural background, profoundly influence perceptions of novelty and innovation. Cultural novelty arises from differences in salience and novelty as shaped by the distance between distinct communities. While cultural diversity has garnered increasing attention in artificial intelligence (AI), the lack of robust metrics for quantifying cultural novelty hinders a deeper understanding of these divergences. This gap limits quantifying and understanding cultural differences within computational frameworks. To address this, we propose an interdisciplinary framework that integrates knowledge from sociology and management. Central to our approach is GlobalFusion, a novel dataset comprising 500 dishes and approximately 100,000 cooking recipes capturing cultural adaptation from over 150 countries. By introducing a set of Jensen-Shannon Divergence metrics for novelty, we leverage this dataset to analyze textual divergences when recipes from one community are modified by another with a different cultural background. The results reveal significant correlations between our cultural novelty metrics and established cultural measures based on linguistic, religious, and geographical distances. Our findings highlight the potential of our framework to advance the understanding and measurement of cultural diversity in AI.<br/></p><div class="text-with-links"><span></span><span></span></div><div class="Search_search-result-provider__uWcak">Via<img alt="arxiv icon" loading="lazy" width="56" height="25" decoding="async" data-nimg="1" class="Search_arxiv-icon__SXHe4" style="color:transparent" srcSet="/_next/image?url=%2F_next%2Fstatic%2Fmedia%2Farxiv.41e50dc5.png&amp;w=64&amp;q=75 1x, /_next/image?url=%2F_next%2Fstatic%2Fmedia%2Farxiv.41e50dc5.png&amp;w=128&amp;q=75 2x" src="/_next/image?url=%2F_next%2Fstatic%2Fmedia%2Farxiv.41e50dc5.png&amp;w=128&amp;q=75"/></div><div class="Search_paper-link__nVhf_"><svg role="img" height="20" width="24" viewBox="0 0 24 24" xmlns="http://www.w3.org/2000/svg" style="margin-right:5px"><title>Github Icon</title><path d="M12 .297c-6.63 0-12 5.373-12 12 0 5.303 3.438 9.8 8.205 11.385.6.113.82-.258.82-.577 0-.285-.01-1.04-.015-2.04-3.338.724-4.042-1.61-4.042-1.61C4.422 18.07 3.633 17.7 3.633 17.7c-1.087-.744.084-.729.084-.729 1.205.084 1.838 1.236 1.838 1.236 1.07 1.835 2.809 1.305 3.495.998.108-.776.417-1.305.76-1.605-2.665-.3-5.466-1.332-5.466-5.93 0-1.31.465-2.38 1.235-3.22-.135-.303-.54-1.523.105-3.176 0 0 1.005-.322 3.3 1.23.96-.267 1.98-.399 3-.405 1.02.006 2.04.138 3 .405 2.28-1.552 3.285-1.23 3.285-1.23.645 1.653.24 2.873.12 3.176.765.84 1.23 1.91 1.23 3.22 0 4.61-2.805 5.625-5.475 5.92.42.36.81 1.096.81 2.22 0 1.606-.015 2.896-.015 3.286 0 .315.21.69.825.57C20.565 22.092 24 17.592 24 12.297c0-6.627-5.373-12-12-12"></path></svg><svg xmlns="http://www.w3.org/2000/svg" fill="none" viewBox="0 0 24 24" stroke-width="1.5" stroke="currentColor" aria-hidden="true" data-slot="icon" width="22" style="margin-right:10px;margin-top:2px"><path stroke-linecap="round" stroke-linejoin="round" d="M12 6.042A8.967 8.967 0 0 0 6 3.75c-1.052 0-2.062.18-3 .512v14.25A8.987 8.987 0 0 1 6 18c2.305 0 4.408.867 6 2.292m0-14.25a8.966 8.966 0 0 1 6-2.292c1.052 0 2.062.18 3 .512v14.25A8.987 8.987 0 0 0 18 18a8.967 8.967 0 0 0-6 2.292m0-14.25v14.25"></path></svg><a data-testid="paper-result-access-link" href="/paper/crossing-boundaries-leveraging-semantic">Access Paper or Ask Questions</a></div></section><div class="Search_seperator-line__4FidS"></div></div><div><section data-testid="paper-details-container" class="Search_paper-details-container__Dou2Q"><h2 class="Search_paper-heading__bq58c"><a data-testid="paper-result-title" href="/paper/amb-fhe-adaptive-multi-biometric-fusion-with"><strong>AMB-FHE: Adaptive Multi-biometric Fusion with Fully Homomorphic Encryption</strong></a></h2><div class="Search_buttons-container__WWw_l"><a href="#" target="_blank" id="request-code-2503.23949" data-testid="view-code-button" class="Search_view-code-link__xOgGF"><button type="button" class="btn Search_view-button__D5D2K Search_buttons-spacing__iB2NS Search_black-button__O7oac Search_view-code-button__8Dk6Z"><svg role="img" height="14" width="24" viewBox="0 0 24 24" xmlns="http://www.w3.org/2000/svg" fill="#fff"><title>Github Icon</title><path d="M12 .297c-6.63 0-12 5.373-12 12 0 5.303 3.438 9.8 8.205 11.385.6.113.82-.258.82-.577 0-.285-.01-1.04-.015-2.04-3.338.724-4.042-1.61-4.042-1.61C4.422 18.07 3.633 17.7 3.633 17.7c-1.087-.744.084-.729.084-.729 1.205.084 1.838 1.236 1.838 1.236 1.07 1.835 2.809 1.305 3.495.998.108-.776.417-1.305.76-1.605-2.665-.3-5.466-1.332-5.466-5.93 0-1.31.465-2.38 1.235-3.22-.135-.303-.54-1.523.105-3.176 0 0 1.005-.322 3.3 1.23.96-.267 1.98-.399 3-.405 1.02.006 2.04.138 3 .405 2.28-1.552 3.285-1.23 3.285-1.23.645 1.653.24 2.873.12 3.176.765.84 1.23 1.91 1.23 3.22 0 4.61-2.805 5.625-5.475 5.92.42.36.81 1.096.81 2.22 0 1.606-.015 2.896-.015 3.286 0 .315.21.69.825.57C20.565 22.092 24 17.592 24 12.297c0-6.627-5.373-12-12-12"></path></svg>Request Code</button></a><button type="button" class="Search_buttons-spacing__iB2NS Search_related-code-btn__F5B3X" data-testid="related-code-button"><span class="descriptor" style="display:none">Code for Similar Papers:</span><img alt="Code for Similar Papers" title="View code for similar papers" loading="lazy" width="37" height="35" decoding="async" data-nimg="1" style="color:transparent" srcSet="/_next/image?url=%2F_next%2Fstatic%2Fmedia%2Frelated_icon_transparent.98f57b13.png&amp;w=48&amp;q=75 1x, /_next/image?url=%2F_next%2Fstatic%2Fmedia%2Frelated_icon_transparent.98f57b13.png&amp;w=96&amp;q=75 2x" src="/_next/image?url=%2F_next%2Fstatic%2Fmedia%2Frelated_icon_transparent.98f57b13.png&amp;w=96&amp;q=75"/></button><a class="Search_buttons-spacing__iB2NS Search_add-code-button__GKwQr" target="_blank" href="/add_code?title=AMB-FHE: Adaptive Multi-biometric Fusion with Fully Homomorphic Encryption&amp;paper_url=http://arxiv.org/abs/2503.23949" rel="nofollow"><img alt="Add code" title="Contribute your code for this paper to the community" loading="lazy" width="36" height="36" decoding="async" data-nimg="1" style="color:transparent" srcSet="/_next/image?url=%2F_next%2Fstatic%2Fmedia%2Faddcode_white.6afb879f.png&amp;w=48&amp;q=75 1x, /_next/image?url=%2F_next%2Fstatic%2Fmedia%2Faddcode_white.6afb879f.png&amp;w=96&amp;q=75 2x" src="/_next/image?url=%2F_next%2Fstatic%2Fmedia%2Faddcode_white.6afb879f.png&amp;w=96&amp;q=75"/></a><div class="wrapper Search_buttons-spacing__iB2NS BookmarkButton_bookmark-wrapper__xJaOg"><button title="Bookmark this paper"><img alt="Bookmark button" id="bookmark-btn" loading="lazy" width="388" height="512" decoding="async" data-nimg="1" class="BookmarkButton_bookmark-btn-image__gkInJ" style="color:transparent" srcSet="/_next/image?url=%2F_next%2Fstatic%2Fmedia%2Fbookmark_outline.3a3e1c2c.png&amp;w=640&amp;q=75 1x, /_next/image?url=%2F_next%2Fstatic%2Fmedia%2Fbookmark_outline.3a3e1c2c.png&amp;w=828&amp;q=75 2x" src="/_next/image?url=%2F_next%2Fstatic%2Fmedia%2Fbookmark_outline.3a3e1c2c.png&amp;w=828&amp;q=75"/></button></div><div class="wrapper Search_buttons-spacing__iB2NS"><button class="AlertButton_alert-btn__pC8cK" title="Get alerts when new code is available for this paper"><img alt="Alert button" id="alert_btn" loading="lazy" width="512" height="512" decoding="async" data-nimg="1" class="alert-btn-image " style="color:transparent" srcSet="/_next/image?url=%2F_next%2Fstatic%2Fmedia%2Falert_light_mode_icon.b8fca154.png&amp;w=640&amp;q=75 1x, /_next/image?url=%2F_next%2Fstatic%2Fmedia%2Falert_light_mode_icon.b8fca154.png&amp;w=1080&amp;q=75 2x" src="/_next/image?url=%2F_next%2Fstatic%2Fmedia%2Falert_light_mode_icon.b8fca154.png&amp;w=1080&amp;q=75"/></button><svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 106 34" style="margin-left:9px"><g class="sparkles"><path style="animation:sparkle 2s 0s infinite ease-in-out" d="M15.5740361 -10.33344622s1.1875777-6.20179466 2.24320232 0c0 0 5.9378885 1.05562462 0 2.11124925 0 0-1.05562463 6.33374774-2.24320233 0-3.5627331-.6597654-3.29882695-1.31953078 0-2.11124925z"></path><path style="animation:sparkle 1.5s 0.9s infinite ease-in-out" d="M33.5173993 75.97263826s1.03464615-5.40315215 1.95433162 0c0 0 5.17323078.91968547 0 1.83937095 0 0-.91968547 5.51811283-1.95433162 0-3.10393847-.57480342-2.8740171-1.14960684 0-1.83937095z"></path><path style="animation:sparkle 1.7s 0.4s infinite ease-in-out" d="M69.03038108 1.71240809s.73779281-3.852918 1.39360864 0c0 0 3.68896404.65581583 0 1.31163166 0 0-.65581583 3.93489497-1.39360864 0-2.21337842-.4098849-2.04942447-.81976979 0-1.31163166z"></path></g></svg></div></div><span class="Search_publication-date__mLvO2">Mar 31, 2025<br/></span><div class="AuthorLinks_authors-container__fAwXT"><span class="descriptor" style="display:none">Authors:</span><span><a data-testid="paper-result-author" href="/author/Florian%20Bayer">Florian Bayer</a>, </span><span><a data-testid="paper-result-author" href="/author/Christian%20Rathgeb">Christian Rathgeb</a></span></div><div class="Search_paper-detail-page-images-container__FPeuN"></div><p class="Search_paper-content__1CSu5 text-with-links"><span class="descriptor" style="display:none">Abstract:</span>Biometric systems strive to balance security and usability. The use of multi-biometric systems combining multiple biometric modalities is usually recommended for high-security applications. However, the presentation of multiple biometric modalities can impair the user-friendliness of the overall system and might not be necessary in all cases. In this work, we present a simple but flexible approach to increase the privacy protection of homomorphically encrypted multi-biometric reference templates while enabling adaptation to security requirements at run-time: An adaptive multi-biometric fusion with fully homomorphic encryption (AMB-FHE). AMB-FHE is benchmarked against a bimodal biometric database consisting of the CASIA iris and MCYT fingerprint datasets using deep neural networks for feature extraction. Our contribution is easy to implement and increases the flexibility of biometric authentication while offering increased privacy protection through joint encryption of templates from multiple modalities.<br/></p><div class="text-with-links"><span></span><span></span></div><div class="Search_search-result-provider__uWcak">Via<img alt="arxiv icon" loading="lazy" width="56" height="25" decoding="async" data-nimg="1" class="Search_arxiv-icon__SXHe4" style="color:transparent" srcSet="/_next/image?url=%2F_next%2Fstatic%2Fmedia%2Farxiv.41e50dc5.png&amp;w=64&amp;q=75 1x, /_next/image?url=%2F_next%2Fstatic%2Fmedia%2Farxiv.41e50dc5.png&amp;w=128&amp;q=75 2x" src="/_next/image?url=%2F_next%2Fstatic%2Fmedia%2Farxiv.41e50dc5.png&amp;w=128&amp;q=75"/></div><div class="Search_paper-link__nVhf_"><svg role="img" height="20" width="24" viewBox="0 0 24 24" xmlns="http://www.w3.org/2000/svg" style="margin-right:5px"><title>Github Icon</title><path d="M12 .297c-6.63 0-12 5.373-12 12 0 5.303 3.438 9.8 8.205 11.385.6.113.82-.258.82-.577 0-.285-.01-1.04-.015-2.04-3.338.724-4.042-1.61-4.042-1.61C4.422 18.07 3.633 17.7 3.633 17.7c-1.087-.744.084-.729.084-.729 1.205.084 1.838 1.236 1.838 1.236 1.07 1.835 2.809 1.305 3.495.998.108-.776.417-1.305.76-1.605-2.665-.3-5.466-1.332-5.466-5.93 0-1.31.465-2.38 1.235-3.22-.135-.303-.54-1.523.105-3.176 0 0 1.005-.322 3.3 1.23.96-.267 1.98-.399 3-.405 1.02.006 2.04.138 3 .405 2.28-1.552 3.285-1.23 3.285-1.23.645 1.653.24 2.873.12 3.176.765.84 1.23 1.91 1.23 3.22 0 4.61-2.805 5.625-5.475 5.92.42.36.81 1.096.81 2.22 0 1.606-.015 2.896-.015 3.286 0 .315.21.69.825.57C20.565 22.092 24 17.592 24 12.297c0-6.627-5.373-12-12-12"></path></svg><svg xmlns="http://www.w3.org/2000/svg" fill="none" viewBox="0 0 24 24" stroke-width="1.5" stroke="currentColor" aria-hidden="true" data-slot="icon" width="22" style="margin-right:10px;margin-top:2px"><path stroke-linecap="round" stroke-linejoin="round" d="M12 6.042A8.967 8.967 0 0 0 6 3.75c-1.052 0-2.062.18-3 .512v14.25A8.987 8.987 0 0 1 6 18c2.305 0 4.408.867 6 2.292m0-14.25a8.966 8.966 0 0 1 6-2.292c1.052 0 2.062.18 3 .512v14.25A8.987 8.987 0 0 0 18 18a8.967 8.967 0 0 0-6 2.292m0-14.25v14.25"></path></svg><a data-testid="paper-result-access-link" href="/paper/amb-fhe-adaptive-multi-biometric-fusion-with">Access Paper or Ask Questions</a></div></section><div class="Search_seperator-line__4FidS"></div></div><div><section data-testid="paper-details-container" class="Search_paper-details-container__Dou2Q"><h2 class="Search_paper-heading__bq58c"><a data-testid="paper-result-title" href="/paper/get-the-agents-drunk-memory-perturbations-in"><strong>Get the Agents Drunk: Memory Perturbations in Autonomous Agent-based Recommender Systems</strong></a></h2><div class="Search_buttons-container__WWw_l"><a href="#" target="_blank" id="request-code-2503.23804" data-testid="view-code-button" class="Search_view-code-link__xOgGF"><button type="button" class="btn Search_view-button__D5D2K Search_buttons-spacing__iB2NS Search_black-button__O7oac Search_view-code-button__8Dk6Z"><svg role="img" height="14" width="24" viewBox="0 0 24 24" xmlns="http://www.w3.org/2000/svg" fill="#fff"><title>Github Icon</title><path d="M12 .297c-6.63 0-12 5.373-12 12 0 5.303 3.438 9.8 8.205 11.385.6.113.82-.258.82-.577 0-.285-.01-1.04-.015-2.04-3.338.724-4.042-1.61-4.042-1.61C4.422 18.07 3.633 17.7 3.633 17.7c-1.087-.744.084-.729.084-.729 1.205.084 1.838 1.236 1.838 1.236 1.07 1.835 2.809 1.305 3.495.998.108-.776.417-1.305.76-1.605-2.665-.3-5.466-1.332-5.466-5.93 0-1.31.465-2.38 1.235-3.22-.135-.303-.54-1.523.105-3.176 0 0 1.005-.322 3.3 1.23.96-.267 1.98-.399 3-.405 1.02.006 2.04.138 3 .405 2.28-1.552 3.285-1.23 3.285-1.23.645 1.653.24 2.873.12 3.176.765.84 1.23 1.91 1.23 3.22 0 4.61-2.805 5.625-5.475 5.92.42.36.81 1.096.81 2.22 0 1.606-.015 2.896-.015 3.286 0 .315.21.69.825.57C20.565 22.092 24 17.592 24 12.297c0-6.627-5.373-12-12-12"></path></svg>Request Code</button></a><button type="button" class="Search_buttons-spacing__iB2NS Search_related-code-btn__F5B3X" data-testid="related-code-button"><span class="descriptor" style="display:none">Code for Similar Papers:</span><img alt="Code for Similar Papers" title="View code for similar papers" loading="lazy" width="37" height="35" decoding="async" data-nimg="1" style="color:transparent" srcSet="/_next/image?url=%2F_next%2Fstatic%2Fmedia%2Frelated_icon_transparent.98f57b13.png&amp;w=48&amp;q=75 1x, /_next/image?url=%2F_next%2Fstatic%2Fmedia%2Frelated_icon_transparent.98f57b13.png&amp;w=96&amp;q=75 2x" src="/_next/image?url=%2F_next%2Fstatic%2Fmedia%2Frelated_icon_transparent.98f57b13.png&amp;w=96&amp;q=75"/></button><a class="Search_buttons-spacing__iB2NS Search_add-code-button__GKwQr" target="_blank" href="/add_code?title=Get the Agents Drunk: Memory Perturbations in Autonomous Agent-based Recommender Systems&amp;paper_url=http://arxiv.org/abs/2503.23804" rel="nofollow"><img alt="Add code" title="Contribute your code for this paper to the community" loading="lazy" width="36" height="36" decoding="async" data-nimg="1" style="color:transparent" srcSet="/_next/image?url=%2F_next%2Fstatic%2Fmedia%2Faddcode_white.6afb879f.png&amp;w=48&amp;q=75 1x, /_next/image?url=%2F_next%2Fstatic%2Fmedia%2Faddcode_white.6afb879f.png&amp;w=96&amp;q=75 2x" src="/_next/image?url=%2F_next%2Fstatic%2Fmedia%2Faddcode_white.6afb879f.png&amp;w=96&amp;q=75"/></a><div class="wrapper Search_buttons-spacing__iB2NS BookmarkButton_bookmark-wrapper__xJaOg"><button title="Bookmark this paper"><img alt="Bookmark button" id="bookmark-btn" loading="lazy" width="388" height="512" decoding="async" data-nimg="1" class="BookmarkButton_bookmark-btn-image__gkInJ" style="color:transparent" srcSet="/_next/image?url=%2F_next%2Fstatic%2Fmedia%2Fbookmark_outline.3a3e1c2c.png&amp;w=640&amp;q=75 1x, /_next/image?url=%2F_next%2Fstatic%2Fmedia%2Fbookmark_outline.3a3e1c2c.png&amp;w=828&amp;q=75 2x" src="/_next/image?url=%2F_next%2Fstatic%2Fmedia%2Fbookmark_outline.3a3e1c2c.png&amp;w=828&amp;q=75"/></button></div><div class="wrapper Search_buttons-spacing__iB2NS"><button class="AlertButton_alert-btn__pC8cK" title="Get alerts when new code is available for this paper"><img alt="Alert button" id="alert_btn" loading="lazy" width="512" height="512" decoding="async" data-nimg="1" class="alert-btn-image " style="color:transparent" srcSet="/_next/image?url=%2F_next%2Fstatic%2Fmedia%2Falert_light_mode_icon.b8fca154.png&amp;w=640&amp;q=75 1x, /_next/image?url=%2F_next%2Fstatic%2Fmedia%2Falert_light_mode_icon.b8fca154.png&amp;w=1080&amp;q=75 2x" src="/_next/image?url=%2F_next%2Fstatic%2Fmedia%2Falert_light_mode_icon.b8fca154.png&amp;w=1080&amp;q=75"/></button><svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 106 34" style="margin-left:9px"><g class="sparkles"><path style="animation:sparkle 2s 0s infinite ease-in-out" d="M15.5740361 -10.33344622s1.1875777-6.20179466 2.24320232 0c0 0 5.9378885 1.05562462 0 2.11124925 0 0-1.05562463 6.33374774-2.24320233 0-3.5627331-.6597654-3.29882695-1.31953078 0-2.11124925z"></path><path style="animation:sparkle 1.5s 0.9s infinite ease-in-out" d="M33.5173993 75.97263826s1.03464615-5.40315215 1.95433162 0c0 0 5.17323078.91968547 0 1.83937095 0 0-.91968547 5.51811283-1.95433162 0-3.10393847-.57480342-2.8740171-1.14960684 0-1.83937095z"></path><path style="animation:sparkle 1.7s 0.4s infinite ease-in-out" d="M69.03038108 1.71240809s.73779281-3.852918 1.39360864 0c0 0 3.68896404.65581583 0 1.31163166 0 0-.65581583 3.93489497-1.39360864 0-2.21337842-.4098849-2.04942447-.81976979 0-1.31163166z"></path></g></svg></div></div><span class="Search_publication-date__mLvO2">Mar 31, 2025<br/></span><div class="AuthorLinks_authors-container__fAwXT"><span class="descriptor" style="display:none">Authors:</span><span><a data-testid="paper-result-author" href="/author/Shiyi%20Yang">Shiyi Yang</a>, </span><span><a data-testid="paper-result-author" href="/author/Zhibo%20Hu">Zhibo Hu</a>, </span><span><a data-testid="paper-result-author" href="/author/Chen%20Wang">Chen Wang</a>, </span><span><a data-testid="paper-result-author" href="/author/Tong%20Yu">Tong Yu</a>, </span><span><a data-testid="paper-result-author" href="/author/Xiwei%20Xu">Xiwei Xu</a>, </span><span><a data-testid="paper-result-author" href="/author/Liming%20Zhu">Liming Zhu</a>, </span><span><a data-testid="paper-result-author" href="/author/Lina%20Yao">Lina Yao</a></span></div><div class="Search_paper-detail-page-images-container__FPeuN"></div><p class="Search_paper-content__1CSu5 text-with-links"><span class="descriptor" style="display:none">Abstract:</span>Large language model-based agents are increasingly used in recommender systems (Agent4RSs) to achieve personalized behavior modeling. Specifically, Agent4RSs introduces memory mechanisms that enable the agents to autonomously learn and self-evolve from real-world interactions. However, to the best of our knowledge, how robust Agent4RSs are remains unexplored. As such, in this paper, we propose the first work to attack Agent4RSs by perturbing agents&#x27; memories, not only to uncover their limitations but also to enhance their security and robustness, ensuring the development of safer and more reliable AI agents. Given the security and privacy concerns, it is more practical to launch attacks under a black-box setting, where the accurate knowledge of the victim models cannot be easily obtained. Moreover, the practical attacks are often stealthy to maximize the impact. To this end, we propose a novel practical attack framework named DrunkAgent. DrunkAgent consists of a generation module, a strategy module, and a surrogate module. The generation module aims to produce effective and coherent adversarial textual triggers, which can be used to achieve attack objectives such as promoting the target items. The strategy module is designed to `get the target agents drunk&#x27; so that their memories cannot be effectively updated during the interaction process. As such, the triggers can play the best role. Both of the modules are optimized on the surrogate module to improve the transferability and imperceptibility of the attacks. By identifying and analyzing the vulnerabilities, our work provides critical insights that pave the way for building safer and more resilient Agent4RSs. Extensive experiments across various real-world datasets demonstrate the effectiveness of DrunkAgent.<br/></p><div class="text-with-links"><span></span><span></span></div><div class="Search_search-result-provider__uWcak">Via<img alt="arxiv icon" loading="lazy" width="56" height="25" decoding="async" data-nimg="1" class="Search_arxiv-icon__SXHe4" style="color:transparent" srcSet="/_next/image?url=%2F_next%2Fstatic%2Fmedia%2Farxiv.41e50dc5.png&amp;w=64&amp;q=75 1x, /_next/image?url=%2F_next%2Fstatic%2Fmedia%2Farxiv.41e50dc5.png&amp;w=128&amp;q=75 2x" src="/_next/image?url=%2F_next%2Fstatic%2Fmedia%2Farxiv.41e50dc5.png&amp;w=128&amp;q=75"/></div><div class="Search_paper-link__nVhf_"><svg role="img" height="20" width="24" viewBox="0 0 24 24" xmlns="http://www.w3.org/2000/svg" style="margin-right:5px"><title>Github Icon</title><path d="M12 .297c-6.63 0-12 5.373-12 12 0 5.303 3.438 9.8 8.205 11.385.6.113.82-.258.82-.577 0-.285-.01-1.04-.015-2.04-3.338.724-4.042-1.61-4.042-1.61C4.422 18.07 3.633 17.7 3.633 17.7c-1.087-.744.084-.729.084-.729 1.205.084 1.838 1.236 1.838 1.236 1.07 1.835 2.809 1.305 3.495.998.108-.776.417-1.305.76-1.605-2.665-.3-5.466-1.332-5.466-5.93 0-1.31.465-2.38 1.235-3.22-.135-.303-.54-1.523.105-3.176 0 0 1.005-.322 3.3 1.23.96-.267 1.98-.399 3-.405 1.02.006 2.04.138 3 .405 2.28-1.552 3.285-1.23 3.285-1.23.645 1.653.24 2.873.12 3.176.765.84 1.23 1.91 1.23 3.22 0 4.61-2.805 5.625-5.475 5.92.42.36.81 1.096.81 2.22 0 1.606-.015 2.896-.015 3.286 0 .315.21.69.825.57C20.565 22.092 24 17.592 24 12.297c0-6.627-5.373-12-12-12"></path></svg><svg xmlns="http://www.w3.org/2000/svg" fill="none" viewBox="0 0 24 24" stroke-width="1.5" stroke="currentColor" aria-hidden="true" data-slot="icon" width="22" style="margin-right:10px;margin-top:2px"><path stroke-linecap="round" stroke-linejoin="round" d="M12 6.042A8.967 8.967 0 0 0 6 3.75c-1.052 0-2.062.18-3 .512v14.25A8.987 8.987 0 0 1 6 18c2.305 0 4.408.867 6 2.292m0-14.25a8.966 8.966 0 0 1 6-2.292c1.052 0 2.062.18 3 .512v14.25A8.987 8.987 0 0 0 18 18a8.967 8.967 0 0 0-6 2.292m0-14.25v14.25"></path></svg><a data-testid="paper-result-access-link" href="/paper/get-the-agents-drunk-memory-perturbations-in">Access Paper or Ask Questions</a></div></section><div class="Search_seperator-line__4FidS"></div></div><div><section data-testid="paper-details-container" class="Search_paper-details-container__Dou2Q"><h2 class="Search_paper-heading__bq58c"><a data-testid="paper-result-title" href="/paper/finding-interest-needle-in-popularity"><strong>Finding Interest Needle in Popularity Haystack: Improving Retrieval by Modeling Item Exposure</strong></a></h2><div class="Search_buttons-container__WWw_l"><a href="#" target="_blank" id="request-code-2503.23630" data-testid="view-code-button" class="Search_view-code-link__xOgGF"><button type="button" class="btn Search_view-button__D5D2K Search_buttons-spacing__iB2NS Search_black-button__O7oac Search_view-code-button__8Dk6Z"><svg role="img" height="14" width="24" viewBox="0 0 24 24" xmlns="http://www.w3.org/2000/svg" fill="#fff"><title>Github Icon</title><path d="M12 .297c-6.63 0-12 5.373-12 12 0 5.303 3.438 9.8 8.205 11.385.6.113.82-.258.82-.577 0-.285-.01-1.04-.015-2.04-3.338.724-4.042-1.61-4.042-1.61C4.422 18.07 3.633 17.7 3.633 17.7c-1.087-.744.084-.729.084-.729 1.205.084 1.838 1.236 1.838 1.236 1.07 1.835 2.809 1.305 3.495.998.108-.776.417-1.305.76-1.605-2.665-.3-5.466-1.332-5.466-5.93 0-1.31.465-2.38 1.235-3.22-.135-.303-.54-1.523.105-3.176 0 0 1.005-.322 3.3 1.23.96-.267 1.98-.399 3-.405 1.02.006 2.04.138 3 .405 2.28-1.552 3.285-1.23 3.285-1.23.645 1.653.24 2.873.12 3.176.765.84 1.23 1.91 1.23 3.22 0 4.61-2.805 5.625-5.475 5.92.42.36.81 1.096.81 2.22 0 1.606-.015 2.896-.015 3.286 0 .315.21.69.825.57C20.565 22.092 24 17.592 24 12.297c0-6.627-5.373-12-12-12"></path></svg>Request Code</button></a><button type="button" class="Search_buttons-spacing__iB2NS Search_related-code-btn__F5B3X" data-testid="related-code-button"><span class="descriptor" style="display:none">Code for Similar Papers:</span><img alt="Code for Similar Papers" title="View code for similar papers" loading="lazy" width="37" height="35" decoding="async" data-nimg="1" style="color:transparent" srcSet="/_next/image?url=%2F_next%2Fstatic%2Fmedia%2Frelated_icon_transparent.98f57b13.png&amp;w=48&amp;q=75 1x, /_next/image?url=%2F_next%2Fstatic%2Fmedia%2Frelated_icon_transparent.98f57b13.png&amp;w=96&amp;q=75 2x" src="/_next/image?url=%2F_next%2Fstatic%2Fmedia%2Frelated_icon_transparent.98f57b13.png&amp;w=96&amp;q=75"/></button><a class="Search_buttons-spacing__iB2NS Search_add-code-button__GKwQr" target="_blank" href="/add_code?title=Finding Interest Needle in Popularity Haystack: Improving Retrieval by Modeling Item Exposure&amp;paper_url=http://arxiv.org/abs/2503.23630" rel="nofollow"><img alt="Add code" title="Contribute your code for this paper to the community" loading="lazy" width="36" height="36" decoding="async" data-nimg="1" style="color:transparent" srcSet="/_next/image?url=%2F_next%2Fstatic%2Fmedia%2Faddcode_white.6afb879f.png&amp;w=48&amp;q=75 1x, /_next/image?url=%2F_next%2Fstatic%2Fmedia%2Faddcode_white.6afb879f.png&amp;w=96&amp;q=75 2x" src="/_next/image?url=%2F_next%2Fstatic%2Fmedia%2Faddcode_white.6afb879f.png&amp;w=96&amp;q=75"/></a><div class="wrapper Search_buttons-spacing__iB2NS BookmarkButton_bookmark-wrapper__xJaOg"><button title="Bookmark this paper"><img alt="Bookmark button" id="bookmark-btn" loading="lazy" width="388" height="512" decoding="async" data-nimg="1" class="BookmarkButton_bookmark-btn-image__gkInJ" style="color:transparent" srcSet="/_next/image?url=%2F_next%2Fstatic%2Fmedia%2Fbookmark_outline.3a3e1c2c.png&amp;w=640&amp;q=75 1x, /_next/image?url=%2F_next%2Fstatic%2Fmedia%2Fbookmark_outline.3a3e1c2c.png&amp;w=828&amp;q=75 2x" src="/_next/image?url=%2F_next%2Fstatic%2Fmedia%2Fbookmark_outline.3a3e1c2c.png&amp;w=828&amp;q=75"/></button></div><div class="wrapper Search_buttons-spacing__iB2NS"><button class="AlertButton_alert-btn__pC8cK" title="Get alerts when new code is available for this paper"><img alt="Alert button" id="alert_btn" loading="lazy" width="512" height="512" decoding="async" data-nimg="1" class="alert-btn-image " style="color:transparent" srcSet="/_next/image?url=%2F_next%2Fstatic%2Fmedia%2Falert_light_mode_icon.b8fca154.png&amp;w=640&amp;q=75 1x, /_next/image?url=%2F_next%2Fstatic%2Fmedia%2Falert_light_mode_icon.b8fca154.png&amp;w=1080&amp;q=75 2x" src="/_next/image?url=%2F_next%2Fstatic%2Fmedia%2Falert_light_mode_icon.b8fca154.png&amp;w=1080&amp;q=75"/></button><svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 106 34" style="margin-left:9px"><g class="sparkles"><path style="animation:sparkle 2s 0s infinite ease-in-out" d="M15.5740361 -10.33344622s1.1875777-6.20179466 2.24320232 0c0 0 5.9378885 1.05562462 0 2.11124925 0 0-1.05562463 6.33374774-2.24320233 0-3.5627331-.6597654-3.29882695-1.31953078 0-2.11124925z"></path><path style="animation:sparkle 1.5s 0.9s infinite ease-in-out" d="M33.5173993 75.97263826s1.03464615-5.40315215 1.95433162 0c0 0 5.17323078.91968547 0 1.83937095 0 0-.91968547 5.51811283-1.95433162 0-3.10393847-.57480342-2.8740171-1.14960684 0-1.83937095z"></path><path style="animation:sparkle 1.7s 0.4s infinite ease-in-out" d="M69.03038108 1.71240809s.73779281-3.852918 1.39360864 0c0 0 3.68896404.65581583 0 1.31163166 0 0-.65581583 3.93489497-1.39360864 0-2.21337842-.4098849-2.04942447-.81976979 0-1.31163166z"></path></g></svg></div></div><span class="Search_publication-date__mLvO2">Mar 31, 2025<br/></span><div class="AuthorLinks_authors-container__fAwXT"><span class="descriptor" style="display:none">Authors:</span><span><a data-testid="paper-result-author" href="/author/Amit%20Jaspal">Amit Jaspal</a>, </span><span><a data-testid="paper-result-author" href="/author/Rahul%20Agarwal">Rahul Agarwal</a></span></div><div class="Search_paper-detail-page-images-container__FPeuN"></div><p class="Search_paper-content__1CSu5 text-with-links"><span class="descriptor" style="display:none">Abstract:</span>Recommender systems operate in closed feedback loops, where user interactions reinforce popularity bias, leading to over-recommendation of already popular items while under-exposing niche or novel content. Existing bias mitigation methods, such as Inverse Propensity Scoring (IPS) and Off- Policy Correction (OPC), primarily operate at the ranking stage or during training, lacking explicit real-time control over exposure dynamics. In this work, we introduce an exposure- aware retrieval scoring approach, which explicitly models item exposure probability and adjusts retrieval-stage ranking at inference time. Unlike prior work, this method decouples exposure effects from engagement likelihood, enabling controlled trade-offs between fairness and engagement in large-scale recommendation platforms. We validate our approach through online A/B experiments in a real-world video recommendation system, demonstrating a 25% increase in uniquely retrieved items and a 40% reduction in the dominance of over-popular content, all while maintaining overall user engagement levels. Our results establish a scalable, deployable solution for mitigating popularity bias at the retrieval stage, offering a new paradigm for bias-aware personalization.<br/></p><div class="text-with-links"><span></span><span><em>* <!-- -->2 pages<!-- -->聽</em><br/></span></div><div class="Search_search-result-provider__uWcak">Via<img alt="arxiv icon" loading="lazy" width="56" height="25" decoding="async" data-nimg="1" class="Search_arxiv-icon__SXHe4" style="color:transparent" srcSet="/_next/image?url=%2F_next%2Fstatic%2Fmedia%2Farxiv.41e50dc5.png&amp;w=64&amp;q=75 1x, /_next/image?url=%2F_next%2Fstatic%2Fmedia%2Farxiv.41e50dc5.png&amp;w=128&amp;q=75 2x" src="/_next/image?url=%2F_next%2Fstatic%2Fmedia%2Farxiv.41e50dc5.png&amp;w=128&amp;q=75"/></div><div class="Search_paper-link__nVhf_"><svg role="img" height="20" width="24" viewBox="0 0 24 24" xmlns="http://www.w3.org/2000/svg" style="margin-right:5px"><title>Github Icon</title><path d="M12 .297c-6.63 0-12 5.373-12 12 0 5.303 3.438 9.8 8.205 11.385.6.113.82-.258.82-.577 0-.285-.01-1.04-.015-2.04-3.338.724-4.042-1.61-4.042-1.61C4.422 18.07 3.633 17.7 3.633 17.7c-1.087-.744.084-.729.084-.729 1.205.084 1.838 1.236 1.838 1.236 1.07 1.835 2.809 1.305 3.495.998.108-.776.417-1.305.76-1.605-2.665-.3-5.466-1.332-5.466-5.93 0-1.31.465-2.38 1.235-3.22-.135-.303-.54-1.523.105-3.176 0 0 1.005-.322 3.3 1.23.96-.267 1.98-.399 3-.405 1.02.006 2.04.138 3 .405 2.28-1.552 3.285-1.23 3.285-1.23.645 1.653.24 2.873.12 3.176.765.84 1.23 1.91 1.23 3.22 0 4.61-2.805 5.625-5.475 5.92.42.36.81 1.096.81 2.22 0 1.606-.015 2.896-.015 3.286 0 .315.21.69.825.57C20.565 22.092 24 17.592 24 12.297c0-6.627-5.373-12-12-12"></path></svg><svg xmlns="http://www.w3.org/2000/svg" fill="none" viewBox="0 0 24 24" stroke-width="1.5" stroke="currentColor" aria-hidden="true" data-slot="icon" width="22" style="margin-right:10px;margin-top:2px"><path stroke-linecap="round" stroke-linejoin="round" d="M12 6.042A8.967 8.967 0 0 0 6 3.75c-1.052 0-2.062.18-3 .512v14.25A8.987 8.987 0 0 1 6 18c2.305 0 4.408.867 6 2.292m0-14.25a8.966 8.966 0 0 1 6-2.292c1.052 0 2.062.18 3 .512v14.25A8.987 8.987 0 0 0 18 18a8.967 8.967 0 0 0-6 2.292m0-14.25v14.25"></path></svg><a data-testid="paper-result-access-link" href="/paper/finding-interest-needle-in-popularity">Access Paper or Ask Questions</a></div></section><div class="Search_seperator-line__4FidS"></div></div><div><section data-testid="paper-details-container" class="Search_paper-details-container__Dou2Q"><h2 class="Search_paper-heading__bq58c"><a data-testid="paper-result-title" href="/paper/a-systematic-decade-review-of-trip-route"><strong>A Systematic Decade Review of Trip Route Planning with Travel Time Estimation based on User Preferences and Behavior</strong></a></h2><div class="Search_buttons-container__WWw_l"><a href="#" target="_blank" id="request-code-2503.23486" data-testid="view-code-button" class="Search_view-code-link__xOgGF"><button type="button" class="btn Search_view-button__D5D2K Search_buttons-spacing__iB2NS Search_black-button__O7oac Search_view-code-button__8Dk6Z"><svg role="img" height="14" width="24" viewBox="0 0 24 24" xmlns="http://www.w3.org/2000/svg" fill="#fff"><title>Github Icon</title><path d="M12 .297c-6.63 0-12 5.373-12 12 0 5.303 3.438 9.8 8.205 11.385.6.113.82-.258.82-.577 0-.285-.01-1.04-.015-2.04-3.338.724-4.042-1.61-4.042-1.61C4.422 18.07 3.633 17.7 3.633 17.7c-1.087-.744.084-.729.084-.729 1.205.084 1.838 1.236 1.838 1.236 1.07 1.835 2.809 1.305 3.495.998.108-.776.417-1.305.76-1.605-2.665-.3-5.466-1.332-5.466-5.93 0-1.31.465-2.38 1.235-3.22-.135-.303-.54-1.523.105-3.176 0 0 1.005-.322 3.3 1.23.96-.267 1.98-.399 3-.405 1.02.006 2.04.138 3 .405 2.28-1.552 3.285-1.23 3.285-1.23.645 1.653.24 2.873.12 3.176.765.84 1.23 1.91 1.23 3.22 0 4.61-2.805 5.625-5.475 5.92.42.36.81 1.096.81 2.22 0 1.606-.015 2.896-.015 3.286 0 .315.21.69.825.57C20.565 22.092 24 17.592 24 12.297c0-6.627-5.373-12-12-12"></path></svg>Request Code</button></a><button type="button" class="Search_buttons-spacing__iB2NS Search_related-code-btn__F5B3X" data-testid="related-code-button"><span class="descriptor" style="display:none">Code for Similar Papers:</span><img alt="Code for Similar Papers" title="View code for similar papers" loading="lazy" width="37" height="35" decoding="async" data-nimg="1" style="color:transparent" srcSet="/_next/image?url=%2F_next%2Fstatic%2Fmedia%2Frelated_icon_transparent.98f57b13.png&amp;w=48&amp;q=75 1x, /_next/image?url=%2F_next%2Fstatic%2Fmedia%2Frelated_icon_transparent.98f57b13.png&amp;w=96&amp;q=75 2x" src="/_next/image?url=%2F_next%2Fstatic%2Fmedia%2Frelated_icon_transparent.98f57b13.png&amp;w=96&amp;q=75"/></button><a class="Search_buttons-spacing__iB2NS Search_add-code-button__GKwQr" target="_blank" href="/add_code?title=A Systematic Decade Review of Trip Route Planning with Travel Time Estimation based on User Preferences and Behavior&amp;paper_url=http://arxiv.org/abs/2503.23486" rel="nofollow"><img alt="Add code" title="Contribute your code for this paper to the community" loading="lazy" width="36" height="36" decoding="async" data-nimg="1" style="color:transparent" srcSet="/_next/image?url=%2F_next%2Fstatic%2Fmedia%2Faddcode_white.6afb879f.png&amp;w=48&amp;q=75 1x, /_next/image?url=%2F_next%2Fstatic%2Fmedia%2Faddcode_white.6afb879f.png&amp;w=96&amp;q=75 2x" src="/_next/image?url=%2F_next%2Fstatic%2Fmedia%2Faddcode_white.6afb879f.png&amp;w=96&amp;q=75"/></a><div class="wrapper Search_buttons-spacing__iB2NS BookmarkButton_bookmark-wrapper__xJaOg"><button title="Bookmark this paper"><img alt="Bookmark button" id="bookmark-btn" loading="lazy" width="388" height="512" decoding="async" data-nimg="1" class="BookmarkButton_bookmark-btn-image__gkInJ" style="color:transparent" srcSet="/_next/image?url=%2F_next%2Fstatic%2Fmedia%2Fbookmark_outline.3a3e1c2c.png&amp;w=640&amp;q=75 1x, /_next/image?url=%2F_next%2Fstatic%2Fmedia%2Fbookmark_outline.3a3e1c2c.png&amp;w=828&amp;q=75 2x" src="/_next/image?url=%2F_next%2Fstatic%2Fmedia%2Fbookmark_outline.3a3e1c2c.png&amp;w=828&amp;q=75"/></button></div><div class="wrapper Search_buttons-spacing__iB2NS"><button class="AlertButton_alert-btn__pC8cK" title="Get alerts when new code is available for this paper"><img alt="Alert button" id="alert_btn" loading="lazy" width="512" height="512" decoding="async" data-nimg="1" class="alert-btn-image " style="color:transparent" srcSet="/_next/image?url=%2F_next%2Fstatic%2Fmedia%2Falert_light_mode_icon.b8fca154.png&amp;w=640&amp;q=75 1x, /_next/image?url=%2F_next%2Fstatic%2Fmedia%2Falert_light_mode_icon.b8fca154.png&amp;w=1080&amp;q=75 2x" src="/_next/image?url=%2F_next%2Fstatic%2Fmedia%2Falert_light_mode_icon.b8fca154.png&amp;w=1080&amp;q=75"/></button><svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 106 34" style="margin-left:9px"><g class="sparkles"><path style="animation:sparkle 2s 0s infinite ease-in-out" d="M15.5740361 -10.33344622s1.1875777-6.20179466 2.24320232 0c0 0 5.9378885 1.05562462 0 2.11124925 0 0-1.05562463 6.33374774-2.24320233 0-3.5627331-.6597654-3.29882695-1.31953078 0-2.11124925z"></path><path style="animation:sparkle 1.5s 0.9s infinite ease-in-out" d="M33.5173993 75.97263826s1.03464615-5.40315215 1.95433162 0c0 0 5.17323078.91968547 0 1.83937095 0 0-.91968547 5.51811283-1.95433162 0-3.10393847-.57480342-2.8740171-1.14960684 0-1.83937095z"></path><path style="animation:sparkle 1.7s 0.4s infinite ease-in-out" d="M69.03038108 1.71240809s.73779281-3.852918 1.39360864 0c0 0 3.68896404.65581583 0 1.31163166 0 0-.65581583 3.93489497-1.39360864 0-2.21337842-.4098849-2.04942447-.81976979 0-1.31163166z"></path></g></svg></div></div><span class="Search_publication-date__mLvO2">Mar 30, 2025<br/></span><div class="AuthorLinks_authors-container__fAwXT"><span class="descriptor" style="display:none">Authors:</span><span><a data-testid="paper-result-author" href="/author/Nikil%20Jayasuriya">Nikil Jayasuriya</a>, </span><span><a data-testid="paper-result-author" href="/author/Deshan%20Sumanathilaka">Deshan Sumanathilaka</a></span></div><div class="Search_paper-detail-page-images-container__FPeuN"></div><p class="Search_paper-content__1CSu5 text-with-links"><span class="descriptor" style="display:none">Abstract:</span>This paper systematically explores the advancements in adaptive trip route planning and travel time estimation (TTE) through Artificial Intelligence (AI). With the increasing complexity of urban transportation systems, traditional navigation methods often struggle to accommodate dynamic user preferences, real-time traffic conditions, and scalability requirements. This study explores the contributions of established AI techniques, including Machine Learning (ML), Reinforcement Learning (RL), and Graph Neural Networks (GNNs), alongside emerging methodologies like Meta-Learning, Explainable AI (XAI), Generative AI, and Federated Learning. In addition to highlighting these innovations, the paper identifies critical challenges such as ethical concerns, computational scalability, and effective data integration, which must be addressed to advance the field. The paper concludes with recommendations for leveraging AI to build efficient, transparent, and sustainable navigation systems.<br/></p><div class="text-with-links"><span></span><span><em>* <!-- -->6 pages, 2 figures, 1 table<!-- -->聽</em><br/></span></div><div class="Search_search-result-provider__uWcak">Via<img alt="arxiv icon" loading="lazy" width="56" height="25" decoding="async" data-nimg="1" class="Search_arxiv-icon__SXHe4" style="color:transparent" srcSet="/_next/image?url=%2F_next%2Fstatic%2Fmedia%2Farxiv.41e50dc5.png&amp;w=64&amp;q=75 1x, /_next/image?url=%2F_next%2Fstatic%2Fmedia%2Farxiv.41e50dc5.png&amp;w=128&amp;q=75 2x" src="/_next/image?url=%2F_next%2Fstatic%2Fmedia%2Farxiv.41e50dc5.png&amp;w=128&amp;q=75"/></div><div class="Search_paper-link__nVhf_"><svg role="img" height="20" width="24" viewBox="0 0 24 24" xmlns="http://www.w3.org/2000/svg" style="margin-right:5px"><title>Github Icon</title><path d="M12 .297c-6.63 0-12 5.373-12 12 0 5.303 3.438 9.8 8.205 11.385.6.113.82-.258.82-.577 0-.285-.01-1.04-.015-2.04-3.338.724-4.042-1.61-4.042-1.61C4.422 18.07 3.633 17.7 3.633 17.7c-1.087-.744.084-.729.084-.729 1.205.084 1.838 1.236 1.838 1.236 1.07 1.835 2.809 1.305 3.495.998.108-.776.417-1.305.76-1.605-2.665-.3-5.466-1.332-5.466-5.93 0-1.31.465-2.38 1.235-3.22-.135-.303-.54-1.523.105-3.176 0 0 1.005-.322 3.3 1.23.96-.267 1.98-.399 3-.405 1.02.006 2.04.138 3 .405 2.28-1.552 3.285-1.23 3.285-1.23.645 1.653.24 2.873.12 3.176.765.84 1.23 1.91 1.23 3.22 0 4.61-2.805 5.625-5.475 5.92.42.36.81 1.096.81 2.22 0 1.606-.015 2.896-.015 3.286 0 .315.21.69.825.57C20.565 22.092 24 17.592 24 12.297c0-6.627-5.373-12-12-12"></path></svg><svg xmlns="http://www.w3.org/2000/svg" fill="none" viewBox="0 0 24 24" stroke-width="1.5" stroke="currentColor" aria-hidden="true" data-slot="icon" width="22" style="margin-right:10px;margin-top:2px"><path stroke-linecap="round" stroke-linejoin="round" d="M12 6.042A8.967 8.967 0 0 0 6 3.75c-1.052 0-2.062.18-3 .512v14.25A8.987 8.987 0 0 1 6 18c2.305 0 4.408.867 6 2.292m0-14.25a8.966 8.966 0 0 1 6-2.292c1.052 0 2.062.18 3 .512v14.25A8.987 8.987 0 0 0 18 18a8.967 8.967 0 0 0-6 2.292m0-14.25v14.25"></path></svg><a data-testid="paper-result-access-link" href="/paper/a-systematic-decade-review-of-trip-route">Access Paper or Ask Questions</a></div></section><div class="Search_seperator-line__4FidS"></div></div><div><section data-testid="paper-details-container" class="Search_paper-details-container__Dou2Q"><h2 class="Search_paper-heading__bq58c"><a data-testid="paper-result-title" href="/paper/filtering-with-time-frequency-analysis-an"><strong>Filtering with Time-frequency Analysis: An Adaptive and Lightweight Model for Sequential Recommender Systems Based on Discrete Wavelet Transform</strong></a></h2><div class="Search_buttons-container__WWw_l"><a href="#" target="_blank" id="request-code-2503.23436" data-testid="view-code-button" class="Search_view-code-link__xOgGF"><button type="button" class="btn Search_view-button__D5D2K Search_buttons-spacing__iB2NS Search_black-button__O7oac Search_view-code-button__8Dk6Z"><svg role="img" height="14" width="24" viewBox="0 0 24 24" xmlns="http://www.w3.org/2000/svg" fill="#fff"><title>Github Icon</title><path d="M12 .297c-6.63 0-12 5.373-12 12 0 5.303 3.438 9.8 8.205 11.385.6.113.82-.258.82-.577 0-.285-.01-1.04-.015-2.04-3.338.724-4.042-1.61-4.042-1.61C4.422 18.07 3.633 17.7 3.633 17.7c-1.087-.744.084-.729.084-.729 1.205.084 1.838 1.236 1.838 1.236 1.07 1.835 2.809 1.305 3.495.998.108-.776.417-1.305.76-1.605-2.665-.3-5.466-1.332-5.466-5.93 0-1.31.465-2.38 1.235-3.22-.135-.303-.54-1.523.105-3.176 0 0 1.005-.322 3.3 1.23.96-.267 1.98-.399 3-.405 1.02.006 2.04.138 3 .405 2.28-1.552 3.285-1.23 3.285-1.23.645 1.653.24 2.873.12 3.176.765.84 1.23 1.91 1.23 3.22 0 4.61-2.805 5.625-5.475 5.92.42.36.81 1.096.81 2.22 0 1.606-.015 2.896-.015 3.286 0 .315.21.69.825.57C20.565 22.092 24 17.592 24 12.297c0-6.627-5.373-12-12-12"></path></svg>Request Code</button></a><button type="button" class="Search_buttons-spacing__iB2NS Search_related-code-btn__F5B3X" data-testid="related-code-button"><span class="descriptor" style="display:none">Code for Similar Papers:</span><img alt="Code for Similar Papers" title="View code for similar papers" loading="lazy" width="37" height="35" decoding="async" data-nimg="1" style="color:transparent" srcSet="/_next/image?url=%2F_next%2Fstatic%2Fmedia%2Frelated_icon_transparent.98f57b13.png&amp;w=48&amp;q=75 1x, /_next/image?url=%2F_next%2Fstatic%2Fmedia%2Frelated_icon_transparent.98f57b13.png&amp;w=96&amp;q=75 2x" src="/_next/image?url=%2F_next%2Fstatic%2Fmedia%2Frelated_icon_transparent.98f57b13.png&amp;w=96&amp;q=75"/></button><a class="Search_buttons-spacing__iB2NS Search_add-code-button__GKwQr" target="_blank" href="/add_code?title=Filtering with Time-frequency Analysis: An Adaptive and Lightweight Model for Sequential Recommender Systems Based on Discrete Wavelet Transform&amp;paper_url=http://arxiv.org/abs/2503.23436" rel="nofollow"><img alt="Add code" title="Contribute your code for this paper to the community" loading="lazy" width="36" height="36" decoding="async" data-nimg="1" style="color:transparent" srcSet="/_next/image?url=%2F_next%2Fstatic%2Fmedia%2Faddcode_white.6afb879f.png&amp;w=48&amp;q=75 1x, /_next/image?url=%2F_next%2Fstatic%2Fmedia%2Faddcode_white.6afb879f.png&amp;w=96&amp;q=75 2x" src="/_next/image?url=%2F_next%2Fstatic%2Fmedia%2Faddcode_white.6afb879f.png&amp;w=96&amp;q=75"/></a><div class="wrapper Search_buttons-spacing__iB2NS BookmarkButton_bookmark-wrapper__xJaOg"><button title="Bookmark this paper"><img alt="Bookmark button" id="bookmark-btn" loading="lazy" width="388" height="512" decoding="async" data-nimg="1" class="BookmarkButton_bookmark-btn-image__gkInJ" style="color:transparent" srcSet="/_next/image?url=%2F_next%2Fstatic%2Fmedia%2Fbookmark_outline.3a3e1c2c.png&amp;w=640&amp;q=75 1x, /_next/image?url=%2F_next%2Fstatic%2Fmedia%2Fbookmark_outline.3a3e1c2c.png&amp;w=828&amp;q=75 2x" src="/_next/image?url=%2F_next%2Fstatic%2Fmedia%2Fbookmark_outline.3a3e1c2c.png&amp;w=828&amp;q=75"/></button></div><div class="wrapper Search_buttons-spacing__iB2NS"><button class="AlertButton_alert-btn__pC8cK" title="Get alerts when new code is available for this paper"><img alt="Alert button" id="alert_btn" loading="lazy" width="512" height="512" decoding="async" data-nimg="1" class="alert-btn-image " style="color:transparent" srcSet="/_next/image?url=%2F_next%2Fstatic%2Fmedia%2Falert_light_mode_icon.b8fca154.png&amp;w=640&amp;q=75 1x, /_next/image?url=%2F_next%2Fstatic%2Fmedia%2Falert_light_mode_icon.b8fca154.png&amp;w=1080&amp;q=75 2x" src="/_next/image?url=%2F_next%2Fstatic%2Fmedia%2Falert_light_mode_icon.b8fca154.png&amp;w=1080&amp;q=75"/></button><svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 106 34" style="margin-left:9px"><g class="sparkles"><path style="animation:sparkle 2s 0s infinite ease-in-out" d="M15.5740361 -10.33344622s1.1875777-6.20179466 2.24320232 0c0 0 5.9378885 1.05562462 0 2.11124925 0 0-1.05562463 6.33374774-2.24320233 0-3.5627331-.6597654-3.29882695-1.31953078 0-2.11124925z"></path><path style="animation:sparkle 1.5s 0.9s infinite ease-in-out" d="M33.5173993 75.97263826s1.03464615-5.40315215 1.95433162 0c0 0 5.17323078.91968547 0 1.83937095 0 0-.91968547 5.51811283-1.95433162 0-3.10393847-.57480342-2.8740171-1.14960684 0-1.83937095z"></path><path style="animation:sparkle 1.7s 0.4s infinite ease-in-out" d="M69.03038108 1.71240809s.73779281-3.852918 1.39360864 0c0 0 3.68896404.65581583 0 1.31163166 0 0-.65581583 3.93489497-1.39360864 0-2.21337842-.4098849-2.04942447-.81976979 0-1.31163166z"></path></g></svg></div></div><span class="Search_publication-date__mLvO2">Mar 30, 2025<br/></span><div class="AuthorLinks_authors-container__fAwXT"><span class="descriptor" style="display:none">Authors:</span><span><a data-testid="paper-result-author" href="/author/Sheng%20Lu">Sheng Lu</a>, </span><span><a data-testid="paper-result-author" href="/author/Mingxi%20Ge">Mingxi Ge</a>, </span><span><a data-testid="paper-result-author" href="/author/Jiuyi%20Zhang">Jiuyi Zhang</a>, </span><span><a data-testid="paper-result-author" href="/author/Wanli%20Zhu">Wanli Zhu</a>, </span><span><a data-testid="paper-result-author" href="/author/Guanjin%20Li">Guanjin Li</a>, </span><span><a data-testid="paper-result-author" href="/author/Fangming%20Gu">Fangming Gu</a></span></div><div class="Search_paper-detail-page-images-container__FPeuN"></div><p class="Search_paper-content__1CSu5 text-with-links"><span class="descriptor" style="display:none">Abstract:</span>Sequential Recommender Systems (SRS) aim to model sequential behaviors of users to capture their interests which usually evolve over time. Transformer-based SRS have achieved distinguished successes recently. However, studies reveal self-attention mechanism in Transformer-based models is essentially a low-pass filter and ignores high frequency information potentially including meaningful user interest patterns. This motivates us to seek better filtering technologies for SRS, and finally we find Discrete Wavelet Transform (DWT), a famous time-frequency analysis technique from digital signal processing field, can effectively process both low-frequency and high-frequency information. We design an adaptive time-frequency filter with DWT technique, which decomposes user interests into multiple signals with different frequency and time, and can automatically learn weights of these signals. Furthermore, we develop DWTRec, a model for sequential recommendation all based on the adaptive time-frequency filter. Thanks to fast DWT technique, DWTRec has a lower time complexity and space complexity theoretically, and is Proficient in modeling long sequences. Experiments show that our model outperforms state-of-the-art baseline models in datasets with different domains, sparsity levels and average sequence lengths. Especially, our model shows great performance increase in contrast with previous models when the sequence grows longer, which demonstrates another advantage of our model.<br/></p><div class="text-with-links"><span></span><span></span></div><div class="Search_search-result-provider__uWcak">Via<img alt="arxiv icon" loading="lazy" width="56" height="25" decoding="async" data-nimg="1" class="Search_arxiv-icon__SXHe4" style="color:transparent" srcSet="/_next/image?url=%2F_next%2Fstatic%2Fmedia%2Farxiv.41e50dc5.png&amp;w=64&amp;q=75 1x, /_next/image?url=%2F_next%2Fstatic%2Fmedia%2Farxiv.41e50dc5.png&amp;w=128&amp;q=75 2x" src="/_next/image?url=%2F_next%2Fstatic%2Fmedia%2Farxiv.41e50dc5.png&amp;w=128&amp;q=75"/></div><div class="Search_paper-link__nVhf_"><svg role="img" height="20" width="24" viewBox="0 0 24 24" xmlns="http://www.w3.org/2000/svg" style="margin-right:5px"><title>Github Icon</title><path d="M12 .297c-6.63 0-12 5.373-12 12 0 5.303 3.438 9.8 8.205 11.385.6.113.82-.258.82-.577 0-.285-.01-1.04-.015-2.04-3.338.724-4.042-1.61-4.042-1.61C4.422 18.07 3.633 17.7 3.633 17.7c-1.087-.744.084-.729.084-.729 1.205.084 1.838 1.236 1.838 1.236 1.07 1.835 2.809 1.305 3.495.998.108-.776.417-1.305.76-1.605-2.665-.3-5.466-1.332-5.466-5.93 0-1.31.465-2.38 1.235-3.22-.135-.303-.54-1.523.105-3.176 0 0 1.005-.322 3.3 1.23.96-.267 1.98-.399 3-.405 1.02.006 2.04.138 3 .405 2.28-1.552 3.285-1.23 3.285-1.23.645 1.653.24 2.873.12 3.176.765.84 1.23 1.91 1.23 3.22 0 4.61-2.805 5.625-5.475 5.92.42.36.81 1.096.81 2.22 0 1.606-.015 2.896-.015 3.286 0 .315.21.69.825.57C20.565 22.092 24 17.592 24 12.297c0-6.627-5.373-12-12-12"></path></svg><svg xmlns="http://www.w3.org/2000/svg" fill="none" viewBox="0 0 24 24" stroke-width="1.5" stroke="currentColor" aria-hidden="true" data-slot="icon" width="22" style="margin-right:10px;margin-top:2px"><path stroke-linecap="round" stroke-linejoin="round" d="M12 6.042A8.967 8.967 0 0 0 6 3.75c-1.052 0-2.062.18-3 .512v14.25A8.987 8.987 0 0 1 6 18c2.305 0 4.408.867 6 2.292m0-14.25a8.966 8.966 0 0 1 6-2.292c1.052 0 2.062.18 3 .512v14.25A8.987 8.987 0 0 0 18 18a8.967 8.967 0 0 0-6 2.292m0-14.25v14.25"></path></svg><a data-testid="paper-result-access-link" href="/paper/filtering-with-time-frequency-analysis-an">Access Paper or Ask Questions</a></div></section><div class="Search_seperator-line__4FidS"></div></div><div><section data-testid="paper-details-container" class="Search_paper-details-container__Dou2Q"><h2 class="Search_paper-heading__bq58c"><a data-testid="paper-result-title" href="/paper/from-content-creation-to-citation-inflation-a"><strong>From Content Creation to Citation Inflation: A GenAI Case Study</strong></a></h2><div class="Search_buttons-container__WWw_l"><a href="#" target="_blank" id="request-code-2503.23414" data-testid="view-code-button" class="Search_view-code-link__xOgGF"><button type="button" class="btn Search_view-button__D5D2K Search_buttons-spacing__iB2NS Search_black-button__O7oac Search_view-code-button__8Dk6Z"><svg role="img" height="14" width="24" viewBox="0 0 24 24" xmlns="http://www.w3.org/2000/svg" fill="#fff"><title>Github Icon</title><path d="M12 .297c-6.63 0-12 5.373-12 12 0 5.303 3.438 9.8 8.205 11.385.6.113.82-.258.82-.577 0-.285-.01-1.04-.015-2.04-3.338.724-4.042-1.61-4.042-1.61C4.422 18.07 3.633 17.7 3.633 17.7c-1.087-.744.084-.729.084-.729 1.205.084 1.838 1.236 1.838 1.236 1.07 1.835 2.809 1.305 3.495.998.108-.776.417-1.305.76-1.605-2.665-.3-5.466-1.332-5.466-5.93 0-1.31.465-2.38 1.235-3.22-.135-.303-.54-1.523.105-3.176 0 0 1.005-.322 3.3 1.23.96-.267 1.98-.399 3-.405 1.02.006 2.04.138 3 .405 2.28-1.552 3.285-1.23 3.285-1.23.645 1.653.24 2.873.12 3.176.765.84 1.23 1.91 1.23 3.22 0 4.61-2.805 5.625-5.475 5.92.42.36.81 1.096.81 2.22 0 1.606-.015 2.896-.015 3.286 0 .315.21.69.825.57C20.565 22.092 24 17.592 24 12.297c0-6.627-5.373-12-12-12"></path></svg>Request Code</button></a><button type="button" class="Search_buttons-spacing__iB2NS Search_related-code-btn__F5B3X" data-testid="related-code-button"><span class="descriptor" style="display:none">Code for Similar Papers:</span><img alt="Code for Similar Papers" title="View code for similar papers" loading="lazy" width="37" height="35" decoding="async" data-nimg="1" style="color:transparent" srcSet="/_next/image?url=%2F_next%2Fstatic%2Fmedia%2Frelated_icon_transparent.98f57b13.png&amp;w=48&amp;q=75 1x, /_next/image?url=%2F_next%2Fstatic%2Fmedia%2Frelated_icon_transparent.98f57b13.png&amp;w=96&amp;q=75 2x" src="/_next/image?url=%2F_next%2Fstatic%2Fmedia%2Frelated_icon_transparent.98f57b13.png&amp;w=96&amp;q=75"/></button><a class="Search_buttons-spacing__iB2NS Search_add-code-button__GKwQr" target="_blank" href="/add_code?title=From Content Creation to Citation Inflation: A GenAI Case Study&amp;paper_url=http://arxiv.org/abs/2503.23414" rel="nofollow"><img alt="Add code" title="Contribute your code for this paper to the community" loading="lazy" width="36" height="36" decoding="async" data-nimg="1" style="color:transparent" srcSet="/_next/image?url=%2F_next%2Fstatic%2Fmedia%2Faddcode_white.6afb879f.png&amp;w=48&amp;q=75 1x, /_next/image?url=%2F_next%2Fstatic%2Fmedia%2Faddcode_white.6afb879f.png&amp;w=96&amp;q=75 2x" src="/_next/image?url=%2F_next%2Fstatic%2Fmedia%2Faddcode_white.6afb879f.png&amp;w=96&amp;q=75"/></a><div class="wrapper Search_buttons-spacing__iB2NS BookmarkButton_bookmark-wrapper__xJaOg"><button title="Bookmark this paper"><img alt="Bookmark button" id="bookmark-btn" loading="lazy" width="388" height="512" decoding="async" data-nimg="1" class="BookmarkButton_bookmark-btn-image__gkInJ" style="color:transparent" srcSet="/_next/image?url=%2F_next%2Fstatic%2Fmedia%2Fbookmark_outline.3a3e1c2c.png&amp;w=640&amp;q=75 1x, /_next/image?url=%2F_next%2Fstatic%2Fmedia%2Fbookmark_outline.3a3e1c2c.png&amp;w=828&amp;q=75 2x" src="/_next/image?url=%2F_next%2Fstatic%2Fmedia%2Fbookmark_outline.3a3e1c2c.png&amp;w=828&amp;q=75"/></button></div><div class="wrapper Search_buttons-spacing__iB2NS"><button class="AlertButton_alert-btn__pC8cK" title="Get alerts when new code is available for this paper"><img alt="Alert button" id="alert_btn" loading="lazy" width="512" height="512" decoding="async" data-nimg="1" class="alert-btn-image " style="color:transparent" srcSet="/_next/image?url=%2F_next%2Fstatic%2Fmedia%2Falert_light_mode_icon.b8fca154.png&amp;w=640&amp;q=75 1x, /_next/image?url=%2F_next%2Fstatic%2Fmedia%2Falert_light_mode_icon.b8fca154.png&amp;w=1080&amp;q=75 2x" src="/_next/image?url=%2F_next%2Fstatic%2Fmedia%2Falert_light_mode_icon.b8fca154.png&amp;w=1080&amp;q=75"/></button><svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 106 34" style="margin-left:9px"><g class="sparkles"><path style="animation:sparkle 2s 0s infinite ease-in-out" d="M15.5740361 -10.33344622s1.1875777-6.20179466 2.24320232 0c0 0 5.9378885 1.05562462 0 2.11124925 0 0-1.05562463 6.33374774-2.24320233 0-3.5627331-.6597654-3.29882695-1.31953078 0-2.11124925z"></path><path style="animation:sparkle 1.5s 0.9s infinite ease-in-out" d="M33.5173993 75.97263826s1.03464615-5.40315215 1.95433162 0c0 0 5.17323078.91968547 0 1.83937095 0 0-.91968547 5.51811283-1.95433162 0-3.10393847-.57480342-2.8740171-1.14960684 0-1.83937095z"></path><path style="animation:sparkle 1.7s 0.4s infinite ease-in-out" d="M69.03038108 1.71240809s.73779281-3.852918 1.39360864 0c0 0 3.68896404.65581583 0 1.31163166 0 0-.65581583 3.93489497-1.39360864 0-2.21337842-.4098849-2.04942447-.81976979 0-1.31163166z"></path></g></svg></div></div><span class="Search_publication-date__mLvO2">Mar 30, 2025<br/></span><div class="AuthorLinks_authors-container__fAwXT"><span class="descriptor" style="display:none">Authors:</span><span><a data-testid="paper-result-author" href="/author/Haitham%20S.%20Al-Sinani">Haitham S. Al-Sinani</a>, </span><span><a data-testid="paper-result-author" href="/author/Chris%20J.%20Mitchell">Chris J. Mitchell</a></span></div><div class="Search_paper-detail-page-images-container__FPeuN"></div><p class="Search_paper-content__1CSu5 text-with-links"><span class="descriptor" style="display:none">Abstract:</span>This paper investigates the presence and impact of questionable, AI-generated academic papers on widely used preprint repositories, with a focus on their role in citation manipulation. Motivated by suspicious patterns observed in publications related to our ongoing research on GenAI-enhanced cybersecurity, we identify clusters of questionable papers and profiles. These papers frequently exhibit minimal technical content, repetitive structure, unverifiable authorship, and mutually reinforcing citation patterns among a recurring set of authors. To assess the feasibility and implications of such practices, we conduct a controlled experiment: generating a fake paper using GenAI, embedding citations to suspected questionable publications, and uploading it to one such repository (ResearchGate). Our findings demonstrate that such papers can bypass platform checks, remain publicly accessible, and contribute to inflating citation metrics like the H-index and i10-index. We present a detailed analysis of the mechanisms involved, highlight systemic weaknesses in content moderation, and offer recommendations for improving platform accountability and preserving academic integrity in the age of GenAI.<br/></p><div class="text-with-links"><span></span><span><em>* <!-- -->20 pages<!-- -->聽</em><br/></span></div><div class="Search_search-result-provider__uWcak">Via<img alt="arxiv icon" loading="lazy" width="56" height="25" decoding="async" data-nimg="1" class="Search_arxiv-icon__SXHe4" style="color:transparent" srcSet="/_next/image?url=%2F_next%2Fstatic%2Fmedia%2Farxiv.41e50dc5.png&amp;w=64&amp;q=75 1x, /_next/image?url=%2F_next%2Fstatic%2Fmedia%2Farxiv.41e50dc5.png&amp;w=128&amp;q=75 2x" src="/_next/image?url=%2F_next%2Fstatic%2Fmedia%2Farxiv.41e50dc5.png&amp;w=128&amp;q=75"/></div><div class="Search_paper-link__nVhf_"><svg role="img" height="20" width="24" viewBox="0 0 24 24" xmlns="http://www.w3.org/2000/svg" style="margin-right:5px"><title>Github Icon</title><path d="M12 .297c-6.63 0-12 5.373-12 12 0 5.303 3.438 9.8 8.205 11.385.6.113.82-.258.82-.577 0-.285-.01-1.04-.015-2.04-3.338.724-4.042-1.61-4.042-1.61C4.422 18.07 3.633 17.7 3.633 17.7c-1.087-.744.084-.729.084-.729 1.205.084 1.838 1.236 1.838 1.236 1.07 1.835 2.809 1.305 3.495.998.108-.776.417-1.305.76-1.605-2.665-.3-5.466-1.332-5.466-5.93 0-1.31.465-2.38 1.235-3.22-.135-.303-.54-1.523.105-3.176 0 0 1.005-.322 3.3 1.23.96-.267 1.98-.399 3-.405 1.02.006 2.04.138 3 .405 2.28-1.552 3.285-1.23 3.285-1.23.645 1.653.24 2.873.12 3.176.765.84 1.23 1.91 1.23 3.22 0 4.61-2.805 5.625-5.475 5.92.42.36.81 1.096.81 2.22 0 1.606-.015 2.896-.015 3.286 0 .315.21.69.825.57C20.565 22.092 24 17.592 24 12.297c0-6.627-5.373-12-12-12"></path></svg><svg xmlns="http://www.w3.org/2000/svg" fill="none" viewBox="0 0 24 24" stroke-width="1.5" stroke="currentColor" aria-hidden="true" data-slot="icon" width="22" style="margin-right:10px;margin-top:2px"><path stroke-linecap="round" stroke-linejoin="round" d="M12 6.042A8.967 8.967 0 0 0 6 3.75c-1.052 0-2.062.18-3 .512v14.25A8.987 8.987 0 0 1 6 18c2.305 0 4.408.867 6 2.292m0-14.25a8.966 8.966 0 0 1 6-2.292c1.052 0 2.062.18 3 .512v14.25A8.987 8.987 0 0 0 18 18a8.967 8.967 0 0 0-6 2.292m0-14.25v14.25"></path></svg><a data-testid="paper-result-access-link" href="/paper/from-content-creation-to-citation-inflation-a">Access Paper or Ask Questions</a></div></section><div class="Search_seperator-line__4FidS"></div></div></section><section data-hydration-on-demand="true"></section></div><section data-hydration-on-demand="true"></section></div></div><script id="__NEXT_DATA__" type="application/json">{"props":{"pageProps":{"results":[{"title":"Rec-R1: Bridging Generative Large Language Models and User-Centric Recommendation Systems via Reinforcement Learning","content":"We propose Rec-R1, a general reinforcement learning framework that bridges large language models (LLMs) with recommendation systems through closed-loop optimization. Unlike prompting and supervised fine-tuning (SFT), Rec-R1 directly optimizes LLM generation using feedback from a fixed black-box recommendation model, without relying on synthetic SFT data from proprietary models such as GPT-4o. This avoids the substantial cost and effort required for data distillation. To verify the effectiveness of Rec-R1, we evaluate it on two representative tasks: product search and sequential recommendation. Experimental results demonstrate that Rec-R1 not only consistently outperforms prompting- and SFT-based methods, but also achieves significant gains over strong discriminative baselines, even when used with simple retrievers such as BM25. Moreover, Rec-R1 preserves the general-purpose capabilities of the LLM, unlike SFT, which often impairs instruction-following and reasoning. These findings suggest Rec-R1 as a promising foundation for continual task-specific adaptation without catastrophic forgetting.","authors":["Jiacheng Lin","Tian Wang","Kun Qian"],"pdf_url":"http://arxiv.org/abs/2503.24289","paper_id":"2503.24289","link":"/paper/rec-r1-bridging-generative-large-language","publication_date":"Mar 31, 2025","raw_publication_date":"2025-03-31","submission_date":"Mar 31, 2025","images":[],"arxiv_comment":null,"journal_ref":null,"code_available":false,"slug":"rec-r1-bridging-generative-large-language"},{"title":"Text2Tracks: Prompt-based Music Recommendation via Generative Retrieval","content":"In recent years, Large Language Models (LLMs) have enabled users to provide highly specific music recommendation requests using natural language prompts (e.g. \"Can you recommend some old classics for slow dancing?\"). In this setup, the recommended tracks are predicted by the LLM in an autoregressive way, i.e. the LLM generates the track titles one token at a time. While intuitive, this approach has several limitation. First, it is based on a general purpose tokenization that is optimized for words rather than for track titles. Second, it necessitates an additional entity resolution layer that matches the track title to the actual track identifier. Third, the number of decoding steps scales linearly with the length of the track title, slowing down inference. In this paper, we propose to address the task of prompt-based music recommendation as a generative retrieval task. Within this setting, we introduce novel, effective, and efficient representations of track identifiers that significantly outperform commonly used strategies. We introduce Text2Tracks, a generative retrieval model that learns a mapping from a user's music recommendation prompt to the relevant track IDs directly. Through an offline evaluation on a dataset of playlists with language inputs, we find that (1) the strategy to create IDs for music tracks is the most important factor for the effectiveness of Text2Tracks and semantic IDs significantly outperform commonly used strategies that rely on song titles as identifiers (2) provided with the right choice of track identifiers, Text2Tracks outperforms sparse and dense retrieval solutions trained to retrieve tracks from language prompts.","authors":["Enrico Palumbo","Gustavo Penha","Andreas Damianou","Jos茅 Luis Redondo Garc铆a","Timothy Christopher Heath","Alice Wang","Hugues Bouchard","Mounia Lalmas"],"pdf_url":"http://arxiv.org/abs/2503.24193","paper_id":"2503.24193","link":"/paper/text2tracks-prompt-based-music-recommendation","publication_date":"Mar 31, 2025","raw_publication_date":"2025-03-31","submission_date":"Mar 31, 2025","images":[],"arxiv_comment":null,"journal_ref":null,"code_available":false,"slug":"text2tracks-prompt-based-music-recommendation"},{"title":"Predicting Targeted Therapy Resistance in Non-Small Cell Lung Cancer Using Multimodal Machine Learning","content":"Lung cancer is the primary cause of cancer death globally, with non-small cell lung cancer (NSCLC) emerging as its most prevalent subtype. Among NSCLC patients, approximately 32.3% have mutations in the epidermal growth factor receptor (EGFR) gene. Osimertinib, a third-generation EGFR-tyrosine kinase inhibitor (TKI), has demonstrated remarkable efficacy in the treatment of NSCLC patients with activating and T790M resistance EGFR mutations. Despite its established efficacy, drug resistance poses a significant challenge for patients to fully benefit from osimertinib. The absence of a standard tool to accurately predict TKI resistance, including that of osimertinib, remains a critical obstacle. To bridge this gap, in this study, we developed an interpretable multimodal machine learning model designed to predict patient resistance to osimertinib among late-stage NSCLC patients with activating EGFR mutations, achieving a c-index of 0.82 on a multi-institutional dataset. This machine learning model harnesses readily available data routinely collected during patient visits and medical assessments to facilitate precision lung cancer management and informed treatment decisions. By integrating various data types such as histology images, next generation sequencing (NGS) data, demographics data, and clinical records, our multimodal model can generate well-informed recommendations. Our experiment results also demonstrated the superior performance of the multimodal model over single modality models (c-index 0.82 compared with 0.75 and 0.77), thus underscoring the benefit of combining multiple modalities in patient outcome prediction.","authors":["Peiying Hua","Andrea Olofson","Faraz Farhadi","Liesbeth Hondelink","Gregory Tsongalis","Konstantin Dragnev","Dagmar Hoegemann Savellano","Arief Suriawinata","Laura Tafe","Saeed Hassanpour"],"pdf_url":"http://arxiv.org/abs/2503.24165","paper_id":"2503.24165","link":"/paper/predicting-targeted-therapy-resistance-in-non","publication_date":"Mar 31, 2025","raw_publication_date":"2025-03-31","submission_date":"Mar 31, 2025","images":[],"arxiv_comment":null,"journal_ref":null,"code_available":false,"slug":"predicting-targeted-therapy-resistance-in-non"},{"title":"Crossing Boundaries: Leveraging Semantic Divergences to Explore Cultural Novelty in Cooking Recipes","content":"Novelty modeling and detection is a core topic in Natural Language Processing (NLP), central to numerous tasks such as recommender systems and automatic summarization. It involves identifying pieces of text that deviate in some way from previously known information. However, novelty is also a crucial determinant of the unique perception of relevance and quality of an experience, as it rests upon each individual's understanding of the world. Social factors, particularly cultural background, profoundly influence perceptions of novelty and innovation. Cultural novelty arises from differences in salience and novelty as shaped by the distance between distinct communities. While cultural diversity has garnered increasing attention in artificial intelligence (AI), the lack of robust metrics for quantifying cultural novelty hinders a deeper understanding of these divergences. This gap limits quantifying and understanding cultural differences within computational frameworks. To address this, we propose an interdisciplinary framework that integrates knowledge from sociology and management. Central to our approach is GlobalFusion, a novel dataset comprising 500 dishes and approximately 100,000 cooking recipes capturing cultural adaptation from over 150 countries. By introducing a set of Jensen-Shannon Divergence metrics for novelty, we leverage this dataset to analyze textual divergences when recipes from one community are modified by another with a different cultural background. The results reveal significant correlations between our cultural novelty metrics and established cultural measures based on linguistic, religious, and geographical distances. Our findings highlight the potential of our framework to advance the understanding and measurement of cultural diversity in AI.","authors":["Florian Carichon","Romain Rampa","Golnoosh Farnadi"],"pdf_url":"http://arxiv.org/abs/2503.24027","paper_id":"2503.24027","link":"/paper/crossing-boundaries-leveraging-semantic","publication_date":"Mar 31, 2025","raw_publication_date":"2025-03-31","submission_date":"Mar 31, 2025","images":[],"arxiv_comment":null,"journal_ref":null,"code_available":false,"slug":"crossing-boundaries-leveraging-semantic"},{"title":"AMB-FHE: Adaptive Multi-biometric Fusion with Fully Homomorphic Encryption","content":"Biometric systems strive to balance security and usability. The use of multi-biometric systems combining multiple biometric modalities is usually recommended for high-security applications. However, the presentation of multiple biometric modalities can impair the user-friendliness of the overall system and might not be necessary in all cases. In this work, we present a simple but flexible approach to increase the privacy protection of homomorphically encrypted multi-biometric reference templates while enabling adaptation to security requirements at run-time: An adaptive multi-biometric fusion with fully homomorphic encryption (AMB-FHE). AMB-FHE is benchmarked against a bimodal biometric database consisting of the CASIA iris and MCYT fingerprint datasets using deep neural networks for feature extraction. Our contribution is easy to implement and increases the flexibility of biometric authentication while offering increased privacy protection through joint encryption of templates from multiple modalities.","authors":["Florian Bayer","Christian Rathgeb"],"pdf_url":"http://arxiv.org/abs/2503.23949","paper_id":"2503.23949","link":"/paper/amb-fhe-adaptive-multi-biometric-fusion-with","publication_date":"Mar 31, 2025","raw_publication_date":"2025-03-31","submission_date":"Mar 31, 2025","images":[],"arxiv_comment":null,"journal_ref":null,"code_available":false,"slug":"amb-fhe-adaptive-multi-biometric-fusion-with"},{"title":"Get the Agents Drunk: Memory Perturbations in Autonomous Agent-based Recommender Systems","content":"Large language model-based agents are increasingly used in recommender systems (Agent4RSs) to achieve personalized behavior modeling. Specifically, Agent4RSs introduces memory mechanisms that enable the agents to autonomously learn and self-evolve from real-world interactions. However, to the best of our knowledge, how robust Agent4RSs are remains unexplored. As such, in this paper, we propose the first work to attack Agent4RSs by perturbing agents' memories, not only to uncover their limitations but also to enhance their security and robustness, ensuring the development of safer and more reliable AI agents. Given the security and privacy concerns, it is more practical to launch attacks under a black-box setting, where the accurate knowledge of the victim models cannot be easily obtained. Moreover, the practical attacks are often stealthy to maximize the impact. To this end, we propose a novel practical attack framework named DrunkAgent. DrunkAgent consists of a generation module, a strategy module, and a surrogate module. The generation module aims to produce effective and coherent adversarial textual triggers, which can be used to achieve attack objectives such as promoting the target items. The strategy module is designed to `get the target agents drunk' so that their memories cannot be effectively updated during the interaction process. As such, the triggers can play the best role. Both of the modules are optimized on the surrogate module to improve the transferability and imperceptibility of the attacks. By identifying and analyzing the vulnerabilities, our work provides critical insights that pave the way for building safer and more resilient Agent4RSs. Extensive experiments across various real-world datasets demonstrate the effectiveness of DrunkAgent.","authors":["Shiyi Yang","Zhibo Hu","Chen Wang","Tong Yu","Xiwei Xu","Liming Zhu","Lina Yao"],"pdf_url":"http://arxiv.org/abs/2503.23804","paper_id":"2503.23804","link":"/paper/get-the-agents-drunk-memory-perturbations-in","publication_date":"Mar 31, 2025","raw_publication_date":"2025-03-31","submission_date":"Mar 31, 2025","images":[],"arxiv_comment":null,"journal_ref":null,"code_available":false,"slug":"get-the-agents-drunk-memory-perturbations-in"},{"title":"Finding Interest Needle in Popularity Haystack: Improving Retrieval by Modeling Item Exposure","content":"Recommender systems operate in closed feedback loops, where user interactions reinforce popularity bias, leading to over-recommendation of already popular items while under-exposing niche or novel content. Existing bias mitigation methods, such as Inverse Propensity Scoring (IPS) and Off- Policy Correction (OPC), primarily operate at the ranking stage or during training, lacking explicit real-time control over exposure dynamics. In this work, we introduce an exposure- aware retrieval scoring approach, which explicitly models item exposure probability and adjusts retrieval-stage ranking at inference time. Unlike prior work, this method decouples exposure effects from engagement likelihood, enabling controlled trade-offs between fairness and engagement in large-scale recommendation platforms. We validate our approach through online A/B experiments in a real-world video recommendation system, demonstrating a 25% increase in uniquely retrieved items and a 40% reduction in the dominance of over-popular content, all while maintaining overall user engagement levels. Our results establish a scalable, deployable solution for mitigating popularity bias at the retrieval stage, offering a new paradigm for bias-aware personalization.","authors":["Amit Jaspal","Rahul Agarwal"],"pdf_url":"http://arxiv.org/abs/2503.23630","paper_id":"2503.23630","link":"/paper/finding-interest-needle-in-popularity","publication_date":"Mar 31, 2025","raw_publication_date":"2025-03-31","submission_date":"Mar 31, 2025","images":[],"arxiv_comment":"2 pages","journal_ref":null,"code_available":false,"slug":"finding-interest-needle-in-popularity"},{"title":"A Systematic Decade Review of Trip Route Planning with Travel Time Estimation based on User Preferences and Behavior","content":"This paper systematically explores the advancements in adaptive trip route planning and travel time estimation (TTE) through Artificial Intelligence (AI). With the increasing complexity of urban transportation systems, traditional navigation methods often struggle to accommodate dynamic user preferences, real-time traffic conditions, and scalability requirements. This study explores the contributions of established AI techniques, including Machine Learning (ML), Reinforcement Learning (RL), and Graph Neural Networks (GNNs), alongside emerging methodologies like Meta-Learning, Explainable AI (XAI), Generative AI, and Federated Learning. In addition to highlighting these innovations, the paper identifies critical challenges such as ethical concerns, computational scalability, and effective data integration, which must be addressed to advance the field. The paper concludes with recommendations for leveraging AI to build efficient, transparent, and sustainable navigation systems.","authors":["Nikil Jayasuriya","Deshan Sumanathilaka"],"pdf_url":"http://arxiv.org/abs/2503.23486","paper_id":"2503.23486","link":"/paper/a-systematic-decade-review-of-trip-route","publication_date":"Mar 30, 2025","raw_publication_date":"2025-03-30","submission_date":"Mar 30, 2025","images":[],"arxiv_comment":"6 pages, 2 figures, 1 table","journal_ref":null,"code_available":false,"slug":"a-systematic-decade-review-of-trip-route"},{"title":"Filtering with Time-frequency Analysis: An Adaptive and Lightweight Model for Sequential Recommender Systems Based on Discrete Wavelet Transform","content":"Sequential Recommender Systems (SRS) aim to model sequential behaviors of users to capture their interests which usually evolve over time. Transformer-based SRS have achieved distinguished successes recently. However, studies reveal self-attention mechanism in Transformer-based models is essentially a low-pass filter and ignores high frequency information potentially including meaningful user interest patterns. This motivates us to seek better filtering technologies for SRS, and finally we find Discrete Wavelet Transform (DWT), a famous time-frequency analysis technique from digital signal processing field, can effectively process both low-frequency and high-frequency information. We design an adaptive time-frequency filter with DWT technique, which decomposes user interests into multiple signals with different frequency and time, and can automatically learn weights of these signals. Furthermore, we develop DWTRec, a model for sequential recommendation all based on the adaptive time-frequency filter. Thanks to fast DWT technique, DWTRec has a lower time complexity and space complexity theoretically, and is Proficient in modeling long sequences. Experiments show that our model outperforms state-of-the-art baseline models in datasets with different domains, sparsity levels and average sequence lengths. Especially, our model shows great performance increase in contrast with previous models when the sequence grows longer, which demonstrates another advantage of our model.","authors":["Sheng Lu","Mingxi Ge","Jiuyi Zhang","Wanli Zhu","Guanjin Li","Fangming Gu"],"pdf_url":"http://arxiv.org/abs/2503.23436","paper_id":"2503.23436","link":"/paper/filtering-with-time-frequency-analysis-an","publication_date":"Mar 30, 2025","raw_publication_date":"2025-03-30","submission_date":"Mar 30, 2025","images":[],"arxiv_comment":null,"journal_ref":null,"code_available":false,"slug":"filtering-with-time-frequency-analysis-an"},{"title":"From Content Creation to Citation Inflation: A GenAI Case Study","content":"This paper investigates the presence and impact of questionable, AI-generated academic papers on widely used preprint repositories, with a focus on their role in citation manipulation. Motivated by suspicious patterns observed in publications related to our ongoing research on GenAI-enhanced cybersecurity, we identify clusters of questionable papers and profiles. These papers frequently exhibit minimal technical content, repetitive structure, unverifiable authorship, and mutually reinforcing citation patterns among a recurring set of authors. To assess the feasibility and implications of such practices, we conduct a controlled experiment: generating a fake paper using GenAI, embedding citations to suspected questionable publications, and uploading it to one such repository (ResearchGate). Our findings demonstrate that such papers can bypass platform checks, remain publicly accessible, and contribute to inflating citation metrics like the H-index and i10-index. We present a detailed analysis of the mechanisms involved, highlight systemic weaknesses in content moderation, and offer recommendations for improving platform accountability and preserving academic integrity in the age of GenAI.","authors":["Haitham S. Al-Sinani","Chris J. Mitchell"],"pdf_url":"http://arxiv.org/abs/2503.23414","paper_id":"2503.23414","link":"/paper/from-content-creation-to-citation-inflation-a","publication_date":"Mar 30, 2025","raw_publication_date":"2025-03-30","submission_date":"Mar 30, 2025","images":[],"arxiv_comment":"20 pages","journal_ref":null,"code_available":false,"slug":"from-content-creation-to-citation-inflation-a"}],"total":200,"topicBlurb":"Recommendation is the task of providing personalized suggestions to users based on their preferences and behavior.","similarResults":false,"query":"Recommendation","userHasHiddenBanner":false,"isMobile":false,"currentBrowser":"","canonicalUrl":"https://www.catalyzex.com/s/Recommendation"},"__N_SSP":true},"page":"/search","query":{"query":"Recommendation"},"buildId":"rcP1HS6ompi8ywYpLW-WW","isFallback":false,"isExperimentalCompile":false,"gssp":true,"scriptLoader":[]}</script></body></html>

Pages: 1 2 3 4 5 6 7 8 9 10