CINXE.COM

【机器学习-监督学习】双线性模型-腾讯云开发者社区-腾讯云

<!DOCTYPE html><html lang="zh" munual-autotracker-init="" qct-uid="" qct-pv-id="OM6y5UPsk4aKf9zl_jLcL" qct-ip="8.222.208.146"><head><meta charSet="UTF-8"/><meta http-equiv="X-UA-Compatible" content="IE=edge"/><meta name="format-detection" content="telephone=no"/><title>【机器学习-监督学习】双线性模型-腾讯云开发者社区-腾讯云</title><meta name="subject" content="通用技术-人工智能技术-监督学习,其他-空类-函数,其他-空类-模型,其他-空类-数据,通用技术-人工智能技术-机器学习"/><meta name="subjectTime" content="2025-01-22 23:18:58"/><meta name="articleSource" content="B"/><meta name="magicSource" content="N"/><meta name="authorType" content="Z"/><meta name="productSlug" content="ti"/><meta name="keywords" content="监督学习,函数,模型,数据,机器学习"/><meta name="description" content="  从本文开始,我们介绍参数化模型中的非线性模型。在前几篇文章中,我们介绍了线性回归与逻辑斯谛回归模型。这两个模型都有一个共同的特征:包含线性预测因子"/><meta property="og:title" content="【机器学习-监督学习】双线性模型-腾讯云开发者社区-腾讯云"/><meta property="og:description" content="  从本文开始,我们介绍参数化模型中的非线性模型。在前几篇文章中,我们介绍了线性回归与逻辑斯谛回归模型。这两个模型都有一个共同的特征:包含线性预测因子"/><meta property="og:image" content="https://cloudcache.tencentcs.com/open_proj/proj_qcloud_v2/gateway/shareicons/cloud.png"/><meta name="viewport" content="width=device-width, initial-scale=1.0, maximum-scale=1, viewport-fit=cover"/><link rel="dns-prefetch" href="//qccommunity.qcloudimg.com"/><link rel="dns-prefetch" href="//ask.qcloudimg.com"/><link rel="dns-prefetch" href="//cloudcache.tencentcs.com"/><link rel="canonical" href="https://cloud.tencent.com/developer/article/2490777"/><meta name="next-head-count" content="20"/><link rel="stylesheet" href="https://imgcache.qq.com/open_proj/proj_qcloud_v2/gateway/portal/css/base.css"/><link rel="stylesheet" href="https://cloudcache.tencentcs.cn/qcloud/ui/cloud-community/build/base/base-202502111512.css"/><style type="text/css"> .rno-markdown p * { white-space: pre-wrap; word-break: break-all; } .tea-overlay { z-index: 99999!important; } </style><link rel="stylesheet" href="https://cloudcache.tencentcs.cn/qcloud/ui/cloud-community/build/Article/Article-202502111516.css"/><link rel="stylesheet" href="https://cloudcache.tencent-cloud.cn/qcloud/draft-master/dist/draft-master-v2.1.2.d4s2ddo9sb.css"/><link rel="stylesheet" href="https://qccommunity-1258344699.cos.ap-guangzhou.myqcloud.com/tc_player/releasev5.1.0/tcplayer.min.css"/><script src="https://tam.cdn-go.cn/aegis-sdk/latest/aegis.min.js"></script><script> if (Aegis) { new Aegis({ id: 'dWlmyFvjDnalkbZO8q', env: 'production', onError: true, pagePerformance: true, reportAssetSpeed: true, api: { reportRequest: true, resHeaders: ['x-req-id'], }, reportApiSpeed: true, beforeRequest: function (data) { // load js failed if (data.logType === 'log') { if (data.logs.level === '32' && data.logs.msg.indexOf('google') > -1) return false; } var ignoreKeys = [ 'Script error', 'chrome-extension', 'qq.com', 'queryWeappQrcodeStatus', 'login/ajax/info', 'woa.com', 'trafficCollect.php', 'google', 'dscache', 'act-api', 'set_qc_cookie', 'opc.cloud.tencent.com', 'uc_gre_ad_buss', 'eb.xcj.pub', 'UCShellJava', '/developer/labs/quick/loader', 'edgeImmersiveReaderDOM', 'sendBeacon', 'error-decoder.html', 'qcloud-community-track.min.js' ]; var alarmMsg = [data.logs.url, data.logs.msg].join('|'); for (var i = 0; i < ignoreKeys.length; i++) { if (alarmMsg.indexOf(ignoreKeys[i]) != -1) return false; } if (/bot|wechatdevtools|spider/i.test(navigator.userAgent)) { return false; } if (location.hostname.indexOf('cloud.tencent.') === -1) { return false; } }, }); } </script><link rel="preload" href="https://qccommunity.qcloudimg.com/community/_next/static/css/15864e0bbdb1e0dc.css" as="style"/><link rel="stylesheet" href="https://qccommunity.qcloudimg.com/community/_next/static/css/15864e0bbdb1e0dc.css" data-n-g=""/><link rel="preload" href="https://qccommunity.qcloudimg.com/community/_next/static/css/cb2973c13eafc770.css" as="style"/><link rel="stylesheet" href="https://qccommunity.qcloudimg.com/community/_next/static/css/cb2973c13eafc770.css" data-n-p=""/><link rel="preload" href="https://qccommunity.qcloudimg.com/community/_next/static/css/76e9fe126c1f99d5.css" as="style"/><link rel="stylesheet" href="https://qccommunity.qcloudimg.com/community/_next/static/css/76e9fe126c1f99d5.css" data-n-p=""/><link rel="preload" href="https://qccommunity.qcloudimg.com/community/_next/static/css/6bae135cca0bd2a2.css" as="style"/><link rel="stylesheet" href="https://qccommunity.qcloudimg.com/community/_next/static/css/6bae135cca0bd2a2.css" data-n-p=""/><noscript data-n-css=""></noscript><script defer="" nomodule="" src="https://qccommunity.qcloudimg.com/community/_next/static/chunks/polyfills-c67a75d1b6f99dc8.js"></script><script src="https://qccommunity.qcloudimg.com/community/_next/static/chunks/webpack-ecb33d6f281c3863.js" defer=""></script><script src="https://qccommunity.qcloudimg.com/community/_next/static/chunks/framework-bae252e255276064.js" defer=""></script><script src="https://qccommunity.qcloudimg.com/community/_next/static/chunks/main-b171722e7f1a1add.js" defer=""></script><script src="https://qccommunity.qcloudimg.com/community/_next/static/chunks/pages/_app-4d4d0038fd3b92af.js" defer=""></script><script src="https://qccommunity.qcloudimg.com/community/_next/static/chunks/3598-9ce47f9460fcd2e8.js" defer=""></script><script src="https://qccommunity.qcloudimg.com/community/_next/static/chunks/900-754acf570f26d1b3.js" defer=""></script><script src="https://qccommunity.qcloudimg.com/community/_next/static/chunks/5711-8b004fba753fc2ae.js" defer=""></script><script src="https://qccommunity.qcloudimg.com/community/_next/static/chunks/1522-ac3b1777b1c36fd6.js" defer=""></script><script src="https://qccommunity.qcloudimg.com/community/_next/static/chunks/4031-e3e3905dc88f3f92.js" defer=""></script><script src="https://qccommunity.qcloudimg.com/community/_next/static/chunks/9142-8d6871ee01ef3752.js" defer=""></script><script src="https://qccommunity.qcloudimg.com/community/_next/static/chunks/2929-2732ed4070c148cd.js" defer=""></script><script src="https://qccommunity.qcloudimg.com/community/_next/static/chunks/6213-23c1f09036ef74b2.js" defer=""></script><script src="https://qccommunity.qcloudimg.com/community/_next/static/chunks/3751-d6902cb0fa3abff0.js" defer=""></script><script src="https://qccommunity.qcloudimg.com/community/_next/static/chunks/9159-237e4ba66a152bd5.js" defer=""></script><script src="https://qccommunity.qcloudimg.com/community/_next/static/chunks/4036-6e337191f9b1f6d5.js" defer=""></script><script src="https://qccommunity.qcloudimg.com/community/_next/static/chunks/1126-c4d27d89bdcaa920.js" defer=""></script><script src="https://qccommunity.qcloudimg.com/community/_next/static/chunks/432-8c085b26d42596c0.js" defer=""></script><script src="https://qccommunity.qcloudimg.com/community/_next/static/chunks/2654-a4137ccd8ac059bd.js" defer=""></script><script src="https://qccommunity.qcloudimg.com/community/_next/static/chunks/856-06c42ea61ae5c3f2.js" defer=""></script><script src="https://qccommunity.qcloudimg.com/community/_next/static/chunks/9179-8e6d979e9f59fb65.js" defer=""></script><script src="https://qccommunity.qcloudimg.com/community/_next/static/chunks/4698-c5f3e13cdb54e8e1.js" defer=""></script><script src="https://qccommunity.qcloudimg.com/community/_next/static/chunks/6326-791c8ec3fe3e7c9f.js" defer=""></script><script src="https://qccommunity.qcloudimg.com/community/_next/static/chunks/8899-a236cb0292d33b84.js" defer=""></script><script src="https://qccommunity.qcloudimg.com/community/_next/static/chunks/4989-ab5859ddb53d3104.js" defer=""></script><script src="https://qccommunity.qcloudimg.com/community/_next/static/chunks/9621-3128a32b4a60c481.js" defer=""></script><script src="https://qccommunity.qcloudimg.com/community/_next/static/chunks/9156-e37c75a3b314bfda.js" defer=""></script><script src="https://qccommunity.qcloudimg.com/community/_next/static/chunks/pages/article/%5BarticleId%5D-f1fa4a2b494e4989.js" defer=""></script><script src="https://qccommunity.qcloudimg.com/community/_next/static/9hp3loRhW5K95-TQOzEpv/_buildManifest.js" defer=""></script><script src="https://qccommunity.qcloudimg.com/community/_next/static/9hp3loRhW5K95-TQOzEpv/_ssgManifest.js" defer=""></script></head><body class=""><div id="__next"><script src="https://dscache.tencent-cloud.cn/ecache/qcstat/qcloud/qcloudStatApi.js"></script><script src="https://qccommunity.qcloudimg.com/common/exposure-plugin-4.1.15.min.js"></script><script src="https://qccommunity.qcloudimg.com/community-track/qcloud-community-track.min.js"></script><div class="cdc-responsive-layout" aegis-first-screen-timing="true" qct-area="文章详情页"><div class="cdc-article-page cdc-global"><div class="cdc-sticky-header mod-sticky-header is-hidden" style="left:0"><div id="community-top-sticky-header-product-container"></div><div class="cdc-sticky-header__inner"><div class="cdc-sticky-header__main"><div class="mod-sticky-header__main"><div class="mod-sticky-header__author"><span class="cdc-avatar circle"><a class="cdc-avatar__inner" style="background-image:url(https://developer.qcloudimg.com/http-save/10011/85f1d5aca973f3d501211269bdba7e3c.jpg)" href="/developer/user/11457362" target="_blank"></a></span><div class="author-info"><a class="author-info__name" href="/developer/user/11457362" target="_blank"><span class="name-text">Francek Chen</span></a></div></div><div class="mod-sticky-header__split"></div><div class="mod-sticky-header__title"><div class="mod-sticky-header__title-content"><h2 class="title-text">【机器学习-监督学习】双线性模型</h2></div></div></div></div><div class="cdc-sticky-header__extra"><div class="mod-sticky-header__operates"><button class="cdc-btn mod-sticky-header__operate btn-focus cdc-btn--primary"><i class="add-icon"></i><span>关注作者</span></button></div></div></div></div><div class="cdc-m-guider-banner"><div class="cdc-m-guider-banner__guide-mvp is-detail-page"><div class="cdc-m-guider-banner__guide-mvp-text">前往小程序,Get<em>更优</em>阅读体验!</div><div class="cdc-m-guider-banner__guide-mvp-btn">立即前往</div></div></div><div class="cdc-header cdc-header--block" track="导航"><div class="cdc-header__placeholder"></div><div class="cdc-header__inner"><div id="community-top-header-product-container"></div><div class="cdc-header__top"><div class="cdc-header__top-left"><a href="/?from=20060&amp;from_column=20060" target="_blank" class="cdc-header__top-logo" hotrep="" track="腾讯云官网入口"><i>腾讯云</i></a><div class="cdc-header__top-line"></div><a href="/developer" class="cdc-header__top-logo community"><i>开发者社区</i></a><div class="cdc-header__activity"><div id="cloud-header-product-container"></div></div></div><div class="cdc-header__top-operates"><a href="/document/product?from=20702&amp;from_column=20702" target="_blank" class="cdc-header__link">文档</a><a href="/voc/?from=20703&amp;from_column=20703" target="_blank" class="cdc-header__link">建议反馈</a><a href="https://console.cloud.tencent.com?from=20063&amp;from_column=20063" target="_blank" class="cdc-header__link" track-click="{&quot;areaId&quot;:102001,&quot;subAreaId&quot;:1}">控制台</a><div class="cdc-header__account"><div class="cdc-header__account-inner"><button class="cdc-btn cdc-header__account-btn cdc-btn--primary">登录/注册</button></div></div></div></div><div class="cdc-header__bottom"><div class="cdc-header__bottom-nav"><a href="/developer" class="cdc-header__bottom-home">首页</a><div class="cdc-header__nav-list"><div class="cdc-header__nav-item">学习</div><div class="cdc-header__nav-item">活动</div><div class="cdc-header__nav-item">专区</div><div class="cdc-header__nav-item">工具</div></div><a href="/tvp?from=20154&amp;from_column=20154" class="cdc-header__tvp" target="_blank">TVP</a><div class="cdc-header__activity"><a class="cdc-header__activity-tit" href="/developer/program/tm" target="_blank">腾讯云架构师技术同盟<div class="cdc-badge"><div class="cdc-badge-inner"><div class="cdc-badge-text"></div></div></div></a></div><div id="community-header-product-container"></div></div><div class="cdc-header__bottom-operates"><div class="cdc-header__search"><div class="cdc-search__wrap"><div class="cdc-search"><span class="cdc-search__text">文章/答案/技术大牛</span><button class="cdc-search__btn">搜索<i class="cdc-search__i search"></i></button></div><div class="cdc-search__dropdown"><div class="cdc-search__bar"><input type="text" class="cdc-search__bar-input" placeholder="文章/答案/技术大牛" value=""/><div class="cdc-search__bar-btns"><button class="cdc-search__btn">搜索<i class="cdc-search__i search"></i></button><button class="cdc-search__btn">关闭<i class="cdc-search__i clear"></i></button></div></div></div></div></div><div class="cdc-header__create"><span class="cdc-header__create-btn not-logged"><span class="cdc-svg-icon-con"><span class="cdc-svg-icon" style="width:16px;height:16px"><svg width="16" height="16" viewBox="0 0 16 16" fill="currentcolor" xmlns="http://www.w3.org/2000/svg"><path d="M14.2466 12.0145C14.1698 13.6258 12.8381 14.9131 11.2129 14.9131H11.1579H4.0927H4.03772C2.4125 14.9131 1.08014 13.6258 1.00334 12.0145H1V11.8668V4.07213V4.04627V3.89922H1.00334C1.08014 2.28732 2.4125 1 4.03772 1H9.6473V1.00069H10.0786L8.7688 2.10773H8.43888H7.7916H6.37904H4.03772C2.97234 2.10773 2.10445 2.9777 2.10445 4.04629V4.41869V4.4472V6.39498V11.4269V11.4309V11.8668C2.10445 12.9354 2.97234 13.8053 4.03772 13.8053H6.37904H8.87153H11.2129C12.2782 13.8053 13.1461 12.9355 13.1461 11.8668V11.466V11.454V9.5181V6.39364L14.2506 5.3051V11.8668V12.0145H14.2466ZM10.4324 7.15226L9.63146 7.99761C9.36577 8.2693 8.69326 8.95104 8.48066 9.17631C8.26726 9.40288 8.09039 9.58901 7.95061 9.73544C7.81079 9.88188 7.72667 9.96597 7.70083 9.98656C7.63321 10.0488 7.55703 10.1144 7.47022 10.1846C7.38412 10.2542 7.29404 10.3099 7.20063 10.3516C7.10722 10.4007 6.97072 10.459 6.79049 10.5305C6.61028 10.6001 6.42213 10.6676 6.22468 10.7339C6.02792 10.8002 5.84109 10.8571 5.66484 10.9061C5.48795 10.9538 5.3561 10.9863 5.2693 11.0009C5.08977 11.0214 4.96988 10.993 4.90956 10.9168C4.84931 10.8405 4.83276 10.7107 4.85924 10.5312C4.87315 10.4331 4.9043 10.292 4.95468 10.1078C5.00431 9.92297 5.05802 9.7315 5.11431 9.53341C5.1713 9.33526 5.22629 9.15179 5.27926 8.98484C5.33297 8.8179 5.37599 8.7026 5.40978 8.64032C5.44953 8.54357 5.49463 8.45413 5.54495 8.37399C5.59465 8.29379 5.66616 8.20503 5.75965 8.10766C5.79934 8.06588 5.89281 7.96649 6.03988 7.81018C6.18624 7.65311 6.80114 7.02774 7.02104 6.79783L7.75117 6.03524L8.56212 5.1899L10.6345 3.02466L12.5214 4.93874L10.4324 7.15226ZM13.816 3.58581C13.7166 3.68987 13.6272 3.78064 13.5483 3.85883C13.4694 3.93703 13.4006 4.0066 13.3423 4.06686C13.276 4.13643 13.2144 4.19738 13.1561 4.24903L11.2785 2.33569C11.3785 2.24025 11.4965 2.12565 11.6336 1.99115C11.7707 1.85668 11.8854 1.75061 11.9761 1.67242C12.0934 1.57708 12.2133 1.51013 12.3385 1.47109C12.4525 1.43529 12.5644 1.41805 12.6751 1.41876H12.7056C12.7665 1.42139 12.8268 1.42729 12.8851 1.43724C12.8838 1.4366 12.8811 1.43724 12.8798 1.4366C12.8811 1.4366 12.8838 1.4366 12.8851 1.43724C13.1376 1.48428 13.4019 1.62009 13.6265 1.83743C13.7511 1.95871 13.8524 2.09382 13.9259 2.23296C14.0346 2.43834 14.0863 2.65304 14.0763 2.8491C14.0763 2.87294 14.0783 2.89748 14.0783 2.92201C14.0783 3.03529 14.0571 3.14789 14.0154 3.26055C13.9737 3.37314 13.9067 3.48185 13.816 3.58581Z" fill="#0052D9"></path></svg></span></span>发布<span class="cdc-svg-icon-con cdc-header__create-btn-arrow"><span class="cdc-svg-icon" style="width:16px;height:16px"><svg width="16" height="16" viewBox="0 0 16 16" fill="currentcolor" xmlns="http://www.w3.org/2000/svg"><path d="M8.16377 4L9.57798 5.41421L14.5277 10.364L13.1135 11.7782L8.1638 6.829L3.21402 11.7782L1.7998 10.364L8.16377 4Z"></path></svg></span></span></span></div></div></div></div></div><div class="cdc-m-sticky-header is-hidden is-sticky"><div class="cdc-m-sticky-header__placeholder"></div><div class="cdc-m-sticky-header__main"><div class="cdc-m-sticky-header__con"><div class="cdc-m-sticky-header__trigger"></div><div class="cdc-m-header-article__menu"><div class="cdc-m-header-article__menu-mask"></div><div class="cdc-m-header-article__menu-side"><div class="cdc-m-header__sidebar"><div class="cdc-m-header__sidebar-top"><i class="cdc-m-header__sidebar-top__logo"></i><span class="cdc-m-header__sidebar-top__back"></span></div><div class="cdc-m-header__sidebar-menus"><a href="/developer" class="cdc-m-header__sidebar-menu link">首页</a><div class="tpm1-collapse"><div class="tpm1-collapse__panel"><header class="tpm1-collapse__panel-hd"><div class="tpm1-collapse__panel-title">学习</div></header></div><div class="tpm1-collapse__panel"><header class="tpm1-collapse__panel-hd"><div class="tpm1-collapse__panel-title">活动</div></header></div><div class="tpm1-collapse__panel"><header class="tpm1-collapse__panel-hd"><div class="tpm1-collapse__panel-title">专区</div></header></div><div class="tpm1-collapse__panel"><header class="tpm1-collapse__panel-hd"><div class="tpm1-collapse__panel-title">工具</div></header></div></div><a href="/tvp?from=20154&amp;from_column=20154" class="cdc-m-header__sidebar-menu link">TVP</a><a class="cdc-m-header__sidebar-activity" href="/developer/program/tm" target="_blank">腾讯云架构师技术同盟<div class="cdc-badge"><div class="cdc-badge-inner"><div class="cdc-badge-text"></div></div></div></a></div><div class="cdc-m-header__sidebar-back"><a href="/?from=20060&amp;from_column=20060" class="cdc-m-header__sidebar-back__link"><i></i><span>返回腾讯云官网</span></a></div></div></div></div><div class="cdc-m-sticky-header__author"><span class="cdc-avatar large circle" style="cursor:unset"><span class="cdc-avatar__inner" style="background-image:url(https://developer.qcloudimg.com/http-save/10011/85f1d5aca973f3d501211269bdba7e3c.jpg)"></span></span><div class="cdc-m-sticky-header__author-name">Francek Chen</div></div></div><div class="cdc-m-sticky-header__extra"><div class="cdc-m-sticky-header__extra-icon"><i class="extra-search"></i></div><div class="cdc-m-sticky-header__extra-icon"><i class="extra-share"></i></div><div class="cdc-m-sticky-header__extra-operate"><div class="cdc-m-sticky-header__extra-icon"><i class="extra-man"></i></div></div></div></div></div><div class="cdc-m-header-article"><div class="cdc-m-header-article__placeholder"></div><div class="cdc-m-header-article__content"><div class="cdc-m-header-article__main"><div class="cdc-m-header-article__con"><div class="cdc-m-header-article__trigger"></div><div class="cdc-m-header-article__menu"><div class="cdc-m-header-article__menu-mask"></div><div class="cdc-m-header-article__menu-side"><div class="cdc-m-header__sidebar"><div class="cdc-m-header__sidebar-top"><i class="cdc-m-header__sidebar-top__logo"></i><span class="cdc-m-header__sidebar-top__back"></span></div><div class="cdc-m-header__sidebar-menus"><a href="/developer" class="cdc-m-header__sidebar-menu link">首页</a><div class="tpm1-collapse"><div class="tpm1-collapse__panel"><header class="tpm1-collapse__panel-hd"><div class="tpm1-collapse__panel-title">学习</div></header></div><div class="tpm1-collapse__panel"><header class="tpm1-collapse__panel-hd"><div class="tpm1-collapse__panel-title">活动</div></header></div><div class="tpm1-collapse__panel"><header class="tpm1-collapse__panel-hd"><div class="tpm1-collapse__panel-title">专区</div></header></div><div class="tpm1-collapse__panel"><header class="tpm1-collapse__panel-hd"><div class="tpm1-collapse__panel-title">工具</div></header></div></div><a href="/tvp?from=20154&amp;from_column=20154" class="cdc-m-header__sidebar-menu link">TVP</a><a class="cdc-m-header__sidebar-activity" href="/developer/program/tm" target="_blank">腾讯云架构师技术同盟<div class="cdc-badge"><div class="cdc-badge-inner"><div class="cdc-badge-text"></div></div></div></a></div><div class="cdc-m-header__sidebar-back"><a href="/?from=20060&amp;from_column=20060" class="cdc-m-header__sidebar-back__link"><i></i><span>返回腾讯云官网</span></a></div></div></div></div></div><div class="cdc-m-header-article__title"><div class="cdc-m-header-article__title-logo"></div></div><div class="cdc-m-header-article__extra"><div class="cdc-m-header-article__extra-icon"><i class="extra-search"></i></div><div class="cdc-m-header-article__extra-operate"><div class="cdc-m-header-article__extra-icon"><i class="extra-man"></i></div></div></div></div></div></div><div class="cdc-global__main"><div class="cdc-article__body"><div class="cdc-layout"><div class="cdc-layout__main"><div class="cdc-crumb mod-crumb"><div class="cdc-crumb__inner"><a class="cdc-crumb__item" href="/developer">社区首页</a><span class="cdc-crumb__split"> &gt;</span><a class="cdc-crumb__item" href="/developer/column">专栏</a><span class="cdc-crumb__split"> &gt;</span><span class="cdc-crumb__item current">【机器学习-监督学习】双线性模型</span></div></div><div class="mod-article-content"><div class="mod-header"><div class="mod-header__top"><div class="mod-header__title"><h1 class="title-text">【机器学习-监督学习】双线性模型</h1></div></div><div class="mod-article-source header"><div class="mod-article-source__main"><div class="mod-article-source__avatar"><img src="https://developer.qcloudimg.com/http-save/10011/85f1d5aca973f3d501211269bdba7e3c.jpg" alt="作者头像"/></div><div class="mod-article-source__detail"><div class="mod-article-source__name"><span>Francek Chen</span></div></div><button class="cdc-btn mod-article-source__operate cdc-btn--primary"><span><i></i>关注</span></button></div></div><div class="mod-header__bottom"><div class="mod-header__detail"><div class="mod-header__date"><span class="date-text">发布<!-- -->于 <!-- -->2025-01-22 23:18:58</span></div><div class="mod-header__date is-mobile"><span class="date-text">发布<!-- -->于 <!-- -->2025-01-22 23:18:58</span></div><div class="mod-header__infos"><div class="cdc-icon__list"><span class="cdc-svg-icon-con"><span class="cdc-svg-icon" style="width:16px;height:16px"><svg width="16" height="16" viewBox="0 0 16 16" fill="currentcolor" xmlns="http://www.w3.org/2000/svg"><g id="icon-view" transform="translate(0.000000, 3.000000)" fill="currentcolor" fill-rule="nonzero"><path d="M15.885,4.68036 C14.9951,3.57569 11.7987,-0.004272 7.99883,-0.004272 C4.19895,-0.004272 1.02302,3.57569 0.112682,4.68036 C0.040058,4.77107 0.000488281,4.88381 0.000488281,5 C0.000488281,5.1162 0.040058,5.22894 0.112682,5.31964 C1.00767,6.42432 4.20407,10.0043 7.99883,10.0043 C11.7936,10.0043 14.9951,6.42432 15.885,5.31964 C15.9576,5.22894 15.9972,5.1162 15.9972,5 C15.9972,4.88381 15.9576,4.77107 15.885,4.68036 Z M7.99883,8.97632 C4.93029,8.97632 2.25555,6.25043 1.17644,4.99745 C2.25555,3.74446 4.95586,1.01857 7.99883,1.01857 C11.0418,1.01857 13.7421,3.74446 14.8314,4.99745 C13.7421,6.25043 11.0418,8.97632 7.99883,8.97632 Z" id="形状"></path><path d="M7.97304,2.55286 C7.49865,2.55286 7.03491,2.69353 6.64046,2.95709 C6.24602,3.22065 5.93859,3.59525 5.75704,4.03354 C5.5755,4.47182 5.528,4.95409 5.62055,5.41937 C5.7131,5.88465 5.94154,6.31203 6.27699,6.64748 C6.61244,6.98293 7.03982,7.21137 7.5051,7.30392 C7.97038,7.39647 8.45265,7.34897 8.89093,7.16743 C9.32922,6.98588 9.70382,6.67845 9.96738,6.28401 C10.2309,5.88956 10.3716,5.42582 10.3716,4.95143 C10.3716,4.31529 10.1189,3.7052 9.66909,3.25538 C9.21927,2.80556 8.60918,2.55286 7.97304,2.55286 Z M7.97304,6.32716 C7.70095,6.32716 7.43496,6.24647 7.20872,6.09531 C6.98249,5.94414 6.80616,5.72928 6.70203,5.4779 C6.59791,5.22652 6.57066,4.94991 6.62374,4.68304 C6.67683,4.41617 6.80785,4.17104 7.00025,3.97864 C7.19265,3.78625 7.43778,3.65522 7.70465,3.60214 C7.97151,3.54905 8.24813,3.5763 8.49951,3.68042 C8.75089,3.78455 8.96575,3.96088 9.11692,4.18712 C9.26808,4.41335 9.34877,4.67934 9.34877,4.95143 C9.35012,5.13295 9.31553,5.31295 9.247,5.48104 C9.17846,5.64913 9.07734,5.802 8.94946,5.93084 C8.82158,6.05967 8.66946,6.16192 8.50188,6.2317 C8.3343,6.30147 8.15457,6.33739 7.97304,6.33739 L7.97304,6.32716 Z" id="形状"></path></g></svg></span><span class="cdc-svg-icon-text">76</span></span><span class="cdc-svg-icon-con is-comment"><span class="cdc-svg-icon" style="width:16px;height:16px"><svg width="16" height="16" viewBox="0 0 16 16" fill="currentcolor" xmlns="http://www.w3.org/2000/svg"><path fill-rule="evenodd" clip-rule="evenodd" d="M8 13.414L5.58594 11H2V3H14V11H10.4141L8 13.414ZM5.17175 12L8 14.8282L10.8282 12H15V2H1V12H5.17175ZM4 6C3.44775 6 3 6.44769 3 7C3 7.55231 3.44775 8 4 8C4.55225 8 5 7.55231 5 7C5 6.44769 4.55225 6 4 6ZM7 7C7 6.44769 7.44775 6 8 6C8.55225 6 9 6.44769 9 7C9 7.55231 8.55225 8 8 8C7.44775 8 7 7.55231 7 7ZM12 6C11.4478 6 11 6.44769 11 7C11 7.55231 11.4478 8 12 8C12.5522 8 13 7.55231 13 7C13 6.44769 12.5522 6 12 6Z"></path></svg></span><span class="cdc-svg-icon-text">0</span></span></div></div></div><div class="mod-header__operates"><div class="mod-header__operate"><span class="cdc-svg-icon-con is-operate"><span class="cdc-svg-icon" style="width:16px;height:16px"><svg width="16" height="16" viewBox="0 0 16 16" fill="currentcolor" xmlns="http://www.w3.org/2000/svg"><path fill-rule="evenodd" clip-rule="evenodd" d="M9.21101 2.54545C8.80733 1.81818 7.79814 1.81818 7.39446 2.54545L1.94481 12.3636C1.54113 13.0909 2.04573 14 2.85308 14H13.7524C14.5597 14 15.0643 13.0909 14.6607 12.3636L9.21101 2.54545ZM2.85308 12.9091L8.30273 3.09091L13.7524 12.9091H2.85308ZM8.00037 6H9.00037V10H8.00037V6ZM8.00037 11H9.00037V12H8.00037V11Z" fill="currentcolor"></path></svg></span><span class="cdc-svg-icon-text">举报</span></span></div></div></div><div class="mod-header__special"><div class="cdc-special-guide"><span><i class="cdc-special-guide-icon"></i>文章被收录于专栏:</span><a class="cdc-special-guide-name">智能大数据分析</a></div></div></div><div class="mod-content"><div class="mod-content__markdown"><div><div class="rno-markdown new-version rno-"><p>机器学习是一门人工智能的分支学科,通过算法和模型让计算机从数据中学习,进行模型训练和优化,做出预测、分类和决策支持。Python成为机器学习的首选语言,依赖于强大的开源库如Scikit-learn、TensorFlow和PyTorch。本专栏介绍机器学习的相关算法以及基于Python的算法实现。</p><p> 【GitCode】专栏资源保存在我的GitCode仓库:<a style="color:#0052D9" class="" href="/developer/tools/blog-entry?target=https%3A%2F%2Fgitcode.com%2FMorse_Chen%2FPython_machine_learning&amp;objectId=2490777&amp;objectType=1&amp;isNewArticle=undefined" qct-click="" qct-exposure="" qct-area="链接-https://gitcode.com/Morse_Chen/Python_machine_learning">https://gitcode.com/Morse_Chen/Python_machine_learning</a>。</p><p>  从本文开始,我们介绍参数化模型中的非线性模型。在前几篇文章中,我们介绍了<a style="color:#0052D9" class="" href="https://cloud.tencent.com/developer/article/2490791?from_column=20421&amp;from=20421" qct-click="" qct-exposure="" qct-area="链接-线性回归">线性回归</a>与<a style="color:#0052D9" class="" href="https://cloud.tencent.com/developer/article/2490776?from_column=20421&amp;from=20421" qct-click="" qct-exposure="" qct-area="链接-逻辑斯谛回归">逻辑斯谛回归</a>模型。这两个模型都有一个共同的特征:包含线性预测因子</p><figure class=""><span>\boldsymbol\theta^\mathrm{T}\boldsymbol x</span></figure><p>。将该因子看作</p><figure class=""><span>\boldsymbol x</span></figure><p>的函数,如果输入</p><figure class=""><span>\boldsymbol x</span></figure><p>变为原来的</p><figure class=""><span>\lambda</span></figure><p>倍,那么输出为 </p><figure class=""><span>\boldsymbol\theta^\mathrm{T}(\lambda \boldsymbol x) = \lambda \boldsymbol\theta^\mathrm{T} \boldsymbol x</span></figure><p>,也变成原来的</p><figure class=""><span>\lambda</span></figure><p>倍。在<a style="color:#0052D9" class="" href="https://cloud.tencent.com/developer/article/2490776?from_column=20421&amp;from=20421" qct-click="" qct-exposure="" qct-area="链接-逻辑斯谛回归">逻辑斯谛回归</a>的扩展阅读中,我们将这一类模型都归为广义线性模型。然而,此类模型所做的线性假设在许多任务上并不适用,我们需要其他参数假设来导出更合适的模型。本文首先讲解在推荐系统领域很常用的双线性模型(bilinear model)。</p><p>  双线性模型虽然名称中包含“线性模型”,但并不属于线性模型或广义线性模型,其正确的理解应当是“双线性”模型。在数学中,双线性的含义为,二元函数固定任意一个自变量时,函数关于另一个自变量线性。具体来说,二元函数 </p><figure class=""><span>f \colon \mathbb{R}^n \times \mathbb{R}^m \to \mathbb{R}^l</span></figure><p> 是双线性函数,当且仅当对任意 </p><figure class=""><span>\boldsymbol u, \boldsymbol v \in \mathbb{R}^n, \boldsymbol s, \boldsymbol t \in \mathbb{R}^m, \lambda \in \mathbb{R}</span></figure><p> 都有:</p><ol class="ol-level-0"><li>​</li></ol><figure class=""><span>f(\boldsymbol u, \boldsymbol s + \boldsymbol t) = f(\boldsymbol u, \boldsymbol s) + f(\boldsymbol u, \boldsymbol t)</span></figure><ol class="ol-level-0"><li>​</li></ol><figure class=""><span>f(\boldsymbol u, \lambda \boldsymbol s) = \lambda f(\boldsymbol u, \boldsymbol s)</span></figure><ol class="ol-level-0"><li>​</li></ol><figure class=""><span>f(\boldsymbol u + \boldsymbol v, \boldsymbol s) = f(\boldsymbol u, \boldsymbol s) + f(\boldsymbol v, \boldsymbol s)</span></figure><ol class="ol-level-0"><li>​</li></ol><figure class=""><span>f(\lambda \boldsymbol u, \boldsymbol s) = \lambda f(\boldsymbol u, \boldsymbol s)</span></figure><p>  最简单的双线性函数的例子是向量内积 </p><figure class=""><span>\langle \cdot, \cdot \rangle</span></figure><p>,我们按定义验证前两条性质:</p><ul class="ul-level-0"><li>​</li></ul><figure class=""><span>\small\langle \boldsymbol u, \boldsymbol s + \boldsymbol t \rangle = \sum_i u_i(s_i+t_i) = \sum_i(u_is_i + u_it_i) = \sum_i u_is_i + \sum_i u_it_i = \langle \boldsymbol u,\boldsymbol s \rangle + \langle \boldsymbol u, \boldsymbol t\rangle</span></figure><ul class="ul-level-0"><li>​</li></ul><figure class=""><span>\small\langle \boldsymbol u, \lambda \boldsymbol s \rangle = \sum_i u_i(\lambda s_i) = \lambda \sum_i u_is_i = \lambda \langle \boldsymbol u, \boldsymbol s \rangle</span></figure><p>后两条性质由对称性,显然也是成立的。而向量的加法就不是双线性函数。虽然加法满足第1、3条性质,但对第2条,如果 </p><figure class=""><span>\boldsymbol u \neq \boldsymbol 0</span></figure><p> 且 </p><figure class=""><span>\lambda\neq 1</span></figure><p>,则有</p><figure class=""><span>\boldsymbol u + \lambda \boldsymbol s \neq \lambda (\boldsymbol u + \boldsymbol s)</span></figure><p>  与线性模型类似,双线性模型并非指模型整体具有双线性性质,而是指其包含双线性因子。该特性赋予模型拟合一些非线性数据模式的能力,从而得到更加精准预测性能。接下来,我们以推荐系统场景为例,介绍两个基础的双线性模型:矩阵分解模型和因子分解机。</p><h4 id="8131" name="%E4%B8%80%E3%80%81%E7%9F%A9%E9%98%B5%E5%88%86%E8%A7%A3">一、矩阵分解</h4><p> 矩阵分解(matrix factorization,MF)是推荐系统中评分预测(rating prediction)的常用模型,其任务为根据用户和商品已有的评分来预测用户对其他商品的评分。为了更清晰地解释MF模型的任务场景,我们以用户对电影的评分为例进行详细说明。如图1所示,设想有</p><figure class=""><span>N</span></figure><p>个用户和</p><figure class=""><span>M</span></figure><p>部电影,每个用户对一些电影按自己的喜好给出了评分。现在,我们的目标是需要为用户从他没有看过的电影中,向他推荐几部他最有可能喜欢看的电影。理想情况下,如果这个用户对所有电影都给出了评分,那么这个任务就变为从已有评分的电影中进行推荐——直接按照用户打分的高低排序。但实际情况下,在浩如烟海的电影中,用户一般只对很小一部分电影做了评价。因此,我们需要从用户已经做出的评价中推测用户为其他电影的打分,再将电影按推测的打分排序,从中选出最高的几部推荐给该用户。</p><figure class=""><div class="rno-markdown-img-url" style="text-align:center"><div class="rno-markdown-img-url-inner" style="width:100%"><div style="width:100%"><img src="https://developer.qcloudimg.com/http-save/yehe-11457362/240b595d4c80e69673e4490b2a24fb60.png" style="width:100%"/></div></div></div></figure><p> 图1 用户对电影的评分矩阵 </p><p>  我们继续从生活经验出发来思考这一问题。假设某用户为一部电影打了高分,那么可以合理猜测,该用户喜欢这部电影的某些特征。例如,电影的类型是悬疑、爱情、战争或是其他种类;演员、导演和出品方分别是哪些;叙述的故事发生在什么年代;时长是多少,等等。假如我们有一个电影特征库,可以将每部电影用一个特征向量表示。向量的每一维代表一种特征,值代表电影具有这一特征的程度。同时,我们还可以构建一个用户画像库,包含每个用户更偏好哪些类型的特征,以及偏好的程度。假设特征的个数是</p><figure class=""><span>d</span></figure><p>,那么所有电影的特征构成的矩阵是 </p><figure class=""><span>\boldsymbol P \in \mathbb{R}^{M \times d}</span></figure><p>,用户喜好构成的矩阵是 </p><figure class=""><span>\boldsymbol Q \in \mathbb{R}^{N \times d}</span></figure><p>。图2给出了两个矩阵的示例。</p><figure class=""><div class="rno-markdown-img-url" style="text-align:center"><div class="rno-markdown-img-url-inner" style="width:100%"><div style="width:100%"><img src="https://developer.qcloudimg.com/http-save/yehe-11457362/dde28bd6f3bd359d71bd86b6e78f6338.png" style="width:100%"/></div></div></div></figure><p> 图2 电影和用户的隐变量矩阵 </p><p>  需要说明的是,我们实际上分解出的矩阵只是某种交互结果背后的隐变量,并不一定对应真实的特征。这样,我们就把一个用户与电影交互的矩阵拆分成了用户、电影两个矩阵,并且这两个矩阵中包含了更多的信息。最后,用这两个矩阵的乘积 </p><figure class=""><span>\boldsymbol R = \boldsymbol P^\mathrm{T} \boldsymbol Q</span></figure><p> 可以还原出用户对电影的评分。即使用户对某部电影并没有打分,我们也能通过矩阵乘积,根据用户喜欢的特征和该电影具有的特征,预测出用户对电影的喜好程度。</p><blockquote><p> <strong>小故事</strong>   矩阵分解和下面要介绍的因子分解机都属于推荐系统(recommender system)领域的算法。我们在日常使用软件、浏览网站的时候,软件或网站会记录下来我们感兴趣的内容,并在接下来更多地为我们推送同类型的内容。例如,如果我们在购物网站上浏览过牙刷,它就可能再给我们推荐牙刷、毛巾、脸盆等等相关性比较大的商品,这就是推荐系统的作用。推荐系统希望根据用户的特征、商品的特征、用户和商品的交互历史,为用户做出更符合个人喜好的个性化推荐,提高用户的浏览体验,同时为公司带来更高的经济效益。   机器学习界开始大量关注推荐系统任务源自美国奈飞电影公司(Netflix)于2006年举办的世界范围的推荐系统算法大赛。该比赛旨在探寻一种算法能更加精确地预测48万名用户对1.7万部电影的打分,如果某个参赛队伍给出的评分预测精度超过了基线算法10%,就可以获得100万美元的奖金。该竞赛在1年来吸引了来自全球186个国家的超过4万支队伍的参加,经过3年的“马拉松”竞赛,最终由一支名为BellKor’s Pragmatic Chaos的联合团队摘得桂冠。而团队中时任雅虎研究员的耶胡达·科伦(Yehuda Koren)则在后来成为了推荐系统领域最为著名的科学家之一,他使用的基于矩阵分解的双线性模型则成为了那个时代推荐系统的主流模型。 </p></blockquote><p>  实际上,我们通常能获取到的并不是</p><figure class=""><span>\boldsymbol P</span></figure><p>和</p><figure class=""><span>\boldsymbol Q</span></figure><p>,而是打分的结果</p><figure class=""><span>\boldsymbol R</span></figure><p>。并且由于一个用户只会对极其有限的一部分电影打分,矩阵</p><figure class=""><span>\boldsymbol R</span></figure><p>是非常稀疏的,绝大多数元素都是空白。因此,我们需要从</p><figure class=""><span>\boldsymbol R</span></figure><p>有限的元素中推测出用户的喜好</p><figure class=""><span>\boldsymbol P</span></figure><p>和电影的特征</p><figure class=""><span>\boldsymbol Q</span></figure><p>。MF模型利用矩阵分解的技巧完成了这一任务。设第</p><figure class=""><span>i</span></figure><p>个用户的偏好向量是</p><figure class=""><span>\boldsymbol p_i</span></figure><p>,第</p><figure class=""><span>j</span></figure><p>部电影的特征向量是</p><figure class=""><span>\boldsymbol q_j</span></figure><p>,其维度都是特征数</p><figure class=""><span>d</span></figure><p>。MF假设用户</p><figure class=""><span>i</span></figure><p>对电影</p><figure class=""><span>j</span></figure><p>的评分</p><figure class=""><span>r_{ij}</span></figure><p>是用户偏好与电影特征的内积,即 </p><figure class=""><span>r_{ij} = \boldsymbol p_i^\mathrm{T}\boldsymbol q_j</span></figure><p>。在本文开始已经讲过,向量内积是双线性函数,这也是MF模型属于双线性模型的原因。</p><p>  既然MF的目标是通过特征还原评分矩阵</p><figure class=""><span>\boldsymbol R</span></figure><p>,我们就以还原结果和</p><figure class=""><span>\boldsymbol R</span></figure><p>中已知部分的差距作为损失函数。记 </p><figure class=""><span>I_{ij} = \mathbb{I}(r_{ij}\text{存在})</span></figure><p>,即当用户为电影打过分时</p><figure class=""><span>I_{ij}</span></figure><p>为</p><figure class=""><span>1</span></figure><p>,否则为</p><figure class=""><span>0</span></figure><p>。那么损失函数可以写为</p><figure class=""><span>J(\boldsymbol P, \boldsymbol Q) = \sum_{i=1}^N\sum_{j=1}^M I_{ij}\mathcal{L}(\boldsymbol p_i^\mathrm{T}\boldsymbol q_j, r_{ij})</span></figure><p> 式中,</p><figure class=""><span>\mathcal{L}(\boldsymbol p_i^\mathrm{T}\boldsymbol q_j, r_{ij})</span></figure><p> 是模型预测和真实值之间的损失。一般情况下,我们就选用最简单的MSE作为损失,那么优化目标为</p><figure class=""><span>\min_{\boldsymbol P, \boldsymbol Q} J(\boldsymbol P, \boldsymbol Q) = \frac12\sum_{i=1}^N\sum_{j=1}^M I_{ij} (\boldsymbol p_i^\mathrm{T}\boldsymbol q_j - r_{ij})^2</span></figure><p> 再加入对</p><figure class=""><span>\boldsymbol P</span></figure><p>和</p><figure class=""><span>\boldsymbol Q</span></figure><p>的</p><figure class=""><span>L_2</span></figure><p>正则化约束,就得到总的优化目标:</p><figure class=""><span>\min_{\boldsymbol P, \boldsymbol Q} J(\boldsymbol P, \boldsymbol Q) = \frac12\sum_{i=1}^N\sum_{j=1}^M I_{ij} \left((\boldsymbol p_i^\mathrm{T}\boldsymbol q_j - r_{ij})^2 + \lambda(\|\boldsymbol p_i\|^2 +\\|\boldsymbol q_j\|^2)\right)</span></figure><p> 需要注意,这里的</p><figure class=""><span>L_2</span></figure><p>约束并非对整个矩阵</p><figure class=""><span>\boldsymbol P</span></figure><p>或者</p><figure class=""><span>\boldsymbol Q</span></figure><p>而言。我们知道,正则化的目的是通过限制参数的规模来约束模型的复杂度,使模型的复杂度与数据中包含的信息相匹配。以用户为例,假设不同用户直接的评分是独立的。如果用户甲给10部电影打了分,用户乙给2部电影打了分,那么数据中关于甲的信息就比乙多。反映到正则化上,对甲的参数的约束强度也应当比乙大。因此,总损失函数中</p><figure class=""><span>\boldsymbol p_i</span></figure><p>的正则化系数是</p><figure class=""><span>\frac{\lambda}{2}\sum\limits_{j=1}^M I_{ij}</span></figure><p>,在</p><figure class=""><span>\frac{\lambda}{2}</span></figure><p>的基础上又乘以用户</p><figure class=""><span>i</span></figure><p>评分的数量。对电影向量</p><figure class=""><span>\boldsymbol q_j</span></figure><p>也是同理。上式对</p><figure class=""><span>\boldsymbol p_{ik}</span></figure><p>和</p><figure class=""><span>\boldsymbol q_{jk}</span></figure><p>的梯度分别为</p><figure class=""><span>\begin{aligned} \nabla_{\boldsymbol p_{ik}} J(\boldsymbol P, \boldsymbol Q) &amp;= I_{ij} \left((\boldsymbol p_i^\mathrm{T}\boldsymbol q_j - r_{ij})\boldsymbol q_{jk} + \lambda\boldsymbol p_{ik} \right) \\[1ex] \nabla_{\boldsymbol q_{jk}} J(\boldsymbol P, \boldsymbol Q) &amp;= I_{ij} \left((\boldsymbol p_i^\mathrm{T}\boldsymbol q_j - r_{ij})\boldsymbol p_{ik} + \lambda\boldsymbol q_{jk} \right) \end{aligned}</span></figure><p>可以发现,上面</p><figure class=""><span>\boldsymbol p_{ik}</span></figure><p>梯度中含有</p><figure class=""><span>\boldsymbol q_{jk}</span></figure><p>,而</p><figure class=""><span>\boldsymbol q_{jk}</span></figure><p>的梯度中含有</p><figure class=""><span>\boldsymbol p_{ik}</span></figure><p>,两者互相包含,这是由双线性函数的性质决定的,也是双线性模型的一个重要特点。</p><h4 id="8242" name="%E4%BA%8C%E3%80%81%E5%8A%A8%E6%89%8B%E5%AE%9E%E7%8E%B0%E7%9F%A9%E9%98%B5%E5%88%86%E8%A7%A3">二、动手实现矩阵分解</h4><p>  下面,我们来动手实现矩阵分解模型。我们选用的数据集是推荐系统中的常用数据集MovieLens,其包含从电影评价网站<a style="color:#0052D9" class="" href="/developer/tools/blog-entry?target=https%3A%2F%2Fmovielens.org%2F&amp;objectId=2490777&amp;objectType=1&amp;isNewArticle=undefined" qct-click="" qct-exposure="" qct-area="链接-MovieLens">MovieLens</a>中收集的真实用户对电影的打分信息。简单起见,我们采用其包含来自943个用户对1682部电影的10万条样本的版本MovieLens-100k。我们对原始的数据进行了一些处理,现在数据集的每一行有3个数,依次表示用户编号</p><figure class=""><span>i</span></figure><p>、电影编号</p><figure class=""><span>j</span></figure><p>、用户对电影的打分</p><figure class=""><span>r_{ij}</span></figure><p>,其中 </p><figure class=""><span>1\le r_{ij}\le5</span></figure><p> 且三者都是整数。表1展示了数据集<code>movielens_100k.csv</code>中的3个样本,大家也可以从网站上下载更大的数据集,测试模型的预测效果。</p><p> 表1 MovieLens-100k数据集示例 </p><div class="table-wrapper"><table><thead><tr><th style="text-align:left"><div><div class="table-header"><p>用户编号</p></div></div></th><th style="text-align:left"><div><div class="table-header"><p>电影编号</p></div></div></th><th style="text-align:left"><div><div class="table-header"><p>评分</p></div></div></th></tr></thead><tbody><tr><td style="text-align:left"><div><div class="table-cell"><p>196</p></div></div></td><td style="text-align:left"><div><div class="table-cell"><p>242</p></div></div></td><td style="text-align:left"><div><div class="table-cell"><p>3</p></div></div></td></tr><tr><td style="text-align:left"><div><div class="table-cell"><p>186</p></div></div></td><td style="text-align:left"><div><div class="table-cell"><p>302</p></div></div></td><td style="text-align:left"><div><div class="table-cell"><p>3</p></div></div></td></tr><tr><td style="text-align:left"><div><div class="table-cell"><p>22</p></div></div></td><td style="text-align:left"><div><div class="table-cell"><p>377</p></div></div></td><td style="text-align:left"><div><div class="table-cell"><p>1</p></div></div></td></tr></tbody></table></div><div class="rno-markdown-code"><div class="rno-markdown-code-toolbar"><div class="rno-markdown-code-toolbar-info"><div class="rno-markdown-code-toolbar-item is-type"><span class="is-m-hidden">代码语言:</span>javascript</div><div class="rno-markdown-code-toolbar-item is-num"><i class="icon-code"></i><span class="is-m-hidden">代码</span>运行次数:<!-- -->0</div></div><div class="rno-markdown-code-toolbar-opt"><div class="rno-markdown-code-toolbar-copy"><i class="icon-copy"></i><span class="is-m-hidden">复制</span></div><button class="rno-markdown-code-toolbar-run"><i class="icon-run"></i><span class="is-m-hidden">Cloud Studio</span> 代码运行</button></div></div><div class="developer-code-block"><pre class="prism-token token line-numbers language-javascript"><code class="language-javascript" style="margin-left:0">!pip install tqdm import numpy as np import matplotlib.pyplot as plt from tqdm import tqdm # 进度条工具 data = np.loadtxt(&#x27;movielens_100k.csv&#x27;, delimiter=&#x27;,&#x27;, dtype=int) print(&#x27;数据集大小:&#x27;, len(data)) # 用户和电影都是从1开始编号的,我们将其转化为从0开始 data[:, :2] = data[:, :2] - 1 # 计算用户和电影数量 users = set() items = set() for i, j, k in data: users.add(i) items.add(j) user_num = len(users) item_num = len(items) print(f&#x27;用户数:{user_num},电影数:{item_num}&#x27;) # 设置随机种子,划分训练集与测试集 np.random.seed(0) ratio = 0.8 split = int(len(data) * ratio) np.random.shuffle(data) train = data[:split] test = data[split:] # 统计训练集中每个用户和电影出现的数量,作为正则化的权重 user_cnt = np.bincount(train[:, 0], minlength=user_num) item_cnt = np.bincount(train[:, 1], minlength=item_num) print(user_cnt[:10]) print(item_cnt[:10]) # 用户和电影的编号要作为下标,必须保存为整数 user_train, user_test = train[:, 0], test[:, 0] item_train, item_test = train[:, 1], test[:, 1] y_train, y_test = train[:, 2], test[:, 2]</code></pre></div></div><figure class=""><div class="rno-markdown-img-url" style="text-align:center"><div class="rno-markdown-img-url-inner" style="width:100%"><div style="width:100%"><img src="https://developer.qcloudimg.com/http-save/yehe-11457362/b558cc7eb01e06e8c846c6816ae32449.png" style="width:100%"/></div></div></div></figure><p>  然后,我们将MF模型定义成类,在其中实现梯度计算方法。根据上面的推导,模型的参数是用户喜好 </p><figure class=""><span>\boldsymbol P \in \mathbb{R}^{N\times d}</span></figure><p> 和电影特征 </p><figure class=""><span>\boldsymbol Q \in \mathbb{R}^{M \times d}</span></figure><p>,其中特征数</p><figure class=""><span>d</span></figure><p>是我们自己指定的超参数。在参数初始化部分,考虑到最终电影的得分都是正数,我们将参数都初始化为1。</p><div class="rno-markdown-code"><div class="rno-markdown-code-toolbar"><div class="rno-markdown-code-toolbar-info"><div class="rno-markdown-code-toolbar-item is-type"><span class="is-m-hidden">代码语言:</span>javascript</div><div class="rno-markdown-code-toolbar-item is-num"><i class="icon-code"></i><span class="is-m-hidden">代码</span>运行次数:<!-- -->0</div></div><div class="rno-markdown-code-toolbar-opt"><div class="rno-markdown-code-toolbar-copy"><i class="icon-copy"></i><span class="is-m-hidden">复制</span></div><button class="rno-markdown-code-toolbar-run"><i class="icon-run"></i><span class="is-m-hidden">Cloud Studio</span> 代码运行</button></div></div><div class="developer-code-block"><pre class="prism-token token line-numbers language-javascript"><code class="language-javascript" style="margin-left:0">class MF: def __init__(self, N, M, d): # N是用户数量,M是电影数量,d是特征维度 # 定义模型参数 self.user_params = np.ones((N, d)) self.item_params = np.ones((M, d)) def pred(self, user_id, item_id): # 预测用户user_id对电影item_id的打分 # 获得用户偏好和电影特征 user_param = self.user_params[user_id] item_param = self.item_params[item_id] # 返回预测的评分 rating_pred = np.sum(user_param * item_param, axis=1) return rating_pred def update(self, user_grad, item_grad, lr): # 根据参数的梯度更新参数 self.user_params -= lr * user_grad self.item_params -= lr * item_grad</code></pre></div></div><p>  下面定义训练函数,以SGD算法对MF模型的参数进行优化。对于回归任务来说,我们仍然以MSE作为损失函数,RMSE作为的评价指标。在训练的同时,我们将其记录下来,供最终绘制训练曲线使用。</p><div class="rno-markdown-code"><div class="rno-markdown-code-toolbar"><div class="rno-markdown-code-toolbar-info"><div class="rno-markdown-code-toolbar-item is-type"><span class="is-m-hidden">代码语言:</span>javascript</div><div class="rno-markdown-code-toolbar-item is-num"><i class="icon-code"></i><span class="is-m-hidden">代码</span>运行次数:<!-- -->0</div></div><div class="rno-markdown-code-toolbar-opt"><div class="rno-markdown-code-toolbar-copy"><i class="icon-copy"></i><span class="is-m-hidden">复制</span></div><button class="rno-markdown-code-toolbar-run"><i class="icon-run"></i><span class="is-m-hidden">Cloud Studio</span> 代码运行</button></div></div><div class="developer-code-block"><pre class="prism-token token line-numbers language-javascript"><code class="language-javascript" style="margin-left:0">def train(model, learning_rate, lbd, max_training_step, batch_size): train_losses = [] test_losses = [] batch_num = int(np.ceil(len(user_train) / batch_size)) with tqdm(range(max_training_step * batch_num)) as pbar: for epoch in range(max_training_step): # 随机梯度下降 train_rmse = 0 for i in range(batch_num): # 获取当前批量 st = i * batch_size ed = min(len(user_train), st + batch_size) user_batch = user_train[st: ed] item_batch = item_train[st: ed] y_batch = y_train[st: ed] # 计算模型预测 y_pred = model.pred(user_batch, item_batch) # 计算梯度 P = model.user_params Q = model.item_params errs = y_batch - y_pred P_grad = np.zeros_like(P) Q_grad = np.zeros_like(Q) for user, item, err in zip(user_batch, item_batch, errs): P_grad[user] = P_grad[user] - err * Q[item] + lbd * P[user] Q_grad[item] = Q_grad[item] - err * P[user] + lbd * Q[item] model.update(P_grad / len(user_batch), Q_grad / len(user_batch), learning_rate) train_rmse += np.mean(errs ** 2) # 更新进度条 pbar.set_postfix({ &#x27;Epoch&#x27;: epoch, &#x27;Train RMSE&#x27;: f&#x27;{np.sqrt(train_rmse / (i + 1)):.4f}&#x27;, &#x27;Test RMSE&#x27;: f&#x27;{test_losses[-1]:.4f}&#x27; if test_losses else None }) pbar.update(1) # 计算 RMSE 损失 train_rmse = np.sqrt(train_rmse / len(user_train)) train_losses.append(train_rmse) y_test_pred = model.pred(user_test, item_test) test_rmse = np.sqrt(np.mean((y_test - y_test_pred) ** 2)) test_losses.append(test_rmse) return train_losses, test_losses</code></pre></div></div><p>  最后,我们定义超参数,并实现MF模型的训练部分,并将损失随训练的变化曲线绘制出来。</p><div class="rno-markdown-code"><div class="rno-markdown-code-toolbar"><div class="rno-markdown-code-toolbar-info"><div class="rno-markdown-code-toolbar-item is-type"><span class="is-m-hidden">代码语言:</span>javascript</div><div class="rno-markdown-code-toolbar-item is-num"><i class="icon-code"></i><span class="is-m-hidden">代码</span>运行次数:<!-- -->0</div></div><div class="rno-markdown-code-toolbar-opt"><div class="rno-markdown-code-toolbar-copy"><i class="icon-copy"></i><span class="is-m-hidden">复制</span></div><button class="rno-markdown-code-toolbar-run"><i class="icon-run"></i><span class="is-m-hidden">Cloud Studio</span> 代码运行</button></div></div><div class="developer-code-block"><pre class="prism-token token line-numbers language-javascript"><code class="language-javascript" style="margin-left:0"># 超参数 feature_num = 16 # 特征数 learning_rate = 0.1 # 学习率 lbd = 1e-4 # 正则化强度 max_training_step = 30 batch_size = 64 # 批量大小 # 建立模型 model = MF(user_num, item_num, feature_num) # 训练部分 train_losses, test_losses = train(model, learning_rate, lbd, max_training_step, batch_size) plt.figure() x = np.arange(max_training_step) + 1 plt.plot(x, train_losses, color=&#x27;blue&#x27;, label=&#x27;train loss&#x27;) plt.plot(x, test_losses, color=&#x27;red&#x27;, ls=&#x27;--&#x27;, label=&#x27;test loss&#x27;) plt.xlabel(&#x27;Epoch&#x27;) plt.ylabel(&#x27;RMSE&#x27;) plt.legend() plt.show()</code></pre></div></div><figure class=""><div class="rno-markdown-img-url" style="text-align:center"><div class="rno-markdown-img-url-inner" style="width:100%"><div style="width:100%"><img src="https://developer.qcloudimg.com/http-save/yehe-11457362/ae6f61935c8d95571420fa5a5ee7e819.png" style="width:100%"/></div></div></div></figure><p>  为了直观地展示模型效果,我们输出一些模型在测试集中的预测结果与真实结果进行对比。上面我们训练得到的模型在测试集上的RMSE大概是1左右,所以这里模型预测的评分与真实评分大致也差1。</p><div class="rno-markdown-code"><div class="rno-markdown-code-toolbar"><div class="rno-markdown-code-toolbar-info"><div class="rno-markdown-code-toolbar-item is-type"><span class="is-m-hidden">代码语言:</span>javascript</div><div class="rno-markdown-code-toolbar-item is-num"><i class="icon-code"></i><span class="is-m-hidden">代码</span>运行次数:<!-- -->0</div></div><div class="rno-markdown-code-toolbar-opt"><div class="rno-markdown-code-toolbar-copy"><i class="icon-copy"></i><span class="is-m-hidden">复制</span></div><button class="rno-markdown-code-toolbar-run"><i class="icon-run"></i><span class="is-m-hidden">Cloud Studio</span> 代码运行</button></div></div><div class="developer-code-block"><pre class="prism-token token line-numbers language-javascript"><code class="language-javascript" style="margin-left:0">y_test_pred = model.pred(user_test, item_test) print(y_test_pred[:10]) # 把张量转换为numpy数组 print(y_test[:10])</code></pre></div></div><figure class=""><div class="rno-markdown-img-url" style="text-align:center"><div class="rno-markdown-img-url-inner" style="width:100%"><div style="width:100%"><img src="https://developer.qcloudimg.com/http-save/yehe-11457362/bd703f2ddab06be1b0b6a6fd37bcd661.png" style="width:100%"/></div></div></div></figure><h4 id="8303" name="%E4%B8%89%E3%80%81%E5%9B%A0%E5%AD%90%E5%88%86%E8%A7%A3%E6%9C%BA">三、因子分解机</h4><p>  本节我们介绍推荐系统中用户行为预估的另一个常用模型:因子分解机(factorization machines,FM)。FM的应用场景与MF有一些区别,MF的目标是从交互的结果中计算出用户和物品的特征;而FM则正好相反,希望通过物品的特征和某个用户点击这些物品的历史记录,预测该用户点击其他物品的概率,即点击率(click through rate,CTR)。由于被点击和未被点击是一个二分类问题,CTR预估可以用逻辑斯谛回归模型来解决。在逻辑斯谛回归中,线性预测子</p><figure class=""><span>\boldsymbol\theta^\mathrm{T} \boldsymbol x</span></figure><p>为数据中的每一个特征</p><figure class=""><span>x_i</span></figure><p>赋予权重</p><figure class=""><span>\theta_i</span></figure><p>,由此来判断数据的分类。然而,这样的线性参数化假设中,输入的不同特征</p><figure class=""><span>x_i</span></figure><p>与</p><figure class=""><span>x_j</span></figure><p>之间并没有运算,相当于假设不同特征之间是独立的。而在现实中,输入数据的不同特征之间有可能存在关联。例如,假设我们将一张照片中包含的物品作为其特征,那么“红灯笼”与“对联”这两个特征就很可能不是独立的,因为它们都是与春节相关联的意象。因此,作为对线性的逻辑斯谛回归模型的改进,我们进一步引入双线性部分,将输入的不同特征之间的联系也考虑进来。改进后的预测函数为</p><figure class=""><span>\hat y(\boldsymbol x) = \theta_0 + \sum_{i=1}^d \theta_i x_i + \sum_{i=1}^{d-1}\sum_{j=i+1}^d w_{ij}x_ix_j</span></figure><p> 其中,</p><figure class=""><span>\theta_0</span></figure><p>是常数项,</p><figure class=""><span>w_{ij}</span></figure><p>是权重。上式的第二项将所有不同特征</p><figure class=""><span>x_i</span></figure><p>与</p><figure class=""><span>x_j</span></figure><p>相乘,从而可以通过权重</p><figure class=""><span>w_{ij}</span></figure><p>调整特征组合</p><figure class=""><span>(i,j)</span></figure><p>对预测结果的影响。将上式改写为向量形式,为:</p><figure class=""><span>\hat y(\boldsymbol x) = \theta_0 + \boldsymbol\theta^\mathrm{T} \boldsymbol x + \frac12 \boldsymbol x^\mathrm{T} \boldsymbol W \boldsymbol x</span></figure><p> 式中,矩阵</p><figure class=""><span>\boldsymbol W</span></figure><p>是对称的,即 </p><figure class=""><span>w_{ij} = w_{ji}</span></figure><p>。此外,由于我们已经考虑了单独特征的影响,所以不需要将特征与其自身进行交叉,引入</p><figure class=""><span>x_i^2</span></figure><p>项,从而</p><figure class=""><span>\boldsymbol W</span></figure><p>的对角线上元素都为</p><figure class=""><span>0</span></figure><p>。大家可以自行验证,形如 </p><figure class=""><span>f(\boldsymbol x, \boldsymbol y) = \boldsymbol x^\mathrm{T} \boldsymbol A \boldsymbol y</span></figure><p> 的函数是双线性函数。双线性模型由于考虑了不同特征之间的关系,理论上比线性模型要更准确。然而,在实际应用中,该方法面临着稀疏特征的挑战。</p><p>  在用向量表示某一事物的离散特征时,一种常用的方法是独热编码(one-hot encoding)。这一方法中,向量的每一维都对应特征的一种取值,样本所具有的特征所在的维度值为1,其他维度为0。如图3所示,某物品的产地是北京、上海、广州、深圳其中之一,为了表示该物品的产地,我们将其编码为4维向量,4个维度依次对应产地北京、上海、广州、深圳。当物品产地为北京时,其特征向量就是</p><figure class=""><span>(1,0,0,0)</span></figure><p>;物品产地为上海时,其特征向量就是</p><figure class=""><span>(0,1,0,0)</span></figure><p>。如果物品有多个特征,就把每个特征编码成的向量依次拼接起来,形成多域独热编码(multi-field one-hot encoding)。假如某种食品产地是上海、生产日期在2月份、食品种类是乳制品,那么它的编码就如图3所示。</p><figure class=""><div class="rno-markdown-img-url" style="text-align:center"><div class="rno-markdown-img-url-inner" style="width:100%"><div style="width:100%"><img src="https://developer.qcloudimg.com/http-save/yehe-11457362/1b9936a32569625ca4ae18f2408dac47.png" style="width:100%"/></div></div></div></figure><p> 图3 多域独热编码示意 </p><p>  像这样的独热特征向量往往维度非常高,但只有少数几个位置是1,其他位置都是0,稀疏程度很高。当我们训练上述的模型时,需要对参数</p><figure class=""><span>w_{ij}</span></figure><p>求导,结果为 </p><figure class=""><span>\displaystyle \frac{\partial \hat y}{\partial w_{ij}} = x_ix_j</span></figure><p>。由于特征向量的稀疏性,大多数情况下都有 </p><figure class=""><span>x_ix_j=0</span></figure><p>,无法对参数</p><figure class=""><span>w_{ij}</span></figure><p>进行更新。为了解决这一问题,Steffen Rendle提出了因子分解机模型。该方法将权重矩阵</p><figure class=""><span>\boldsymbol W</span></figure><p>分解成 </p><figure class=""><span>\boldsymbol W = \boldsymbol V \boldsymbol V^\mathrm{T}</span></figure><p>,其中 </p><figure class=""><span>\boldsymbol V \in \mathbb{R}^{d \times k}</span></figure><p>。根据矩阵分解的相关理论,当</p><figure class=""><span>\boldsymbol W</span></figure><p>满足某些性质且</p><figure class=""><span>k</span></figure><p>足够大时,我们总可以找到分解矩阵</p><figure class=""><span>\boldsymbol V</span></figure><p>。即使条件不满足,我们也可以用近似分解 </p><figure class=""><span>\boldsymbol W \approx \boldsymbol V\boldsymbol V^\mathrm{T}</span></figure><p>来代替。设</p><figure class=""><span>\boldsymbol V</span></figure><p>的行向量是 </p><figure class=""><span>\boldsymbol v_1, \ldots, \boldsymbol v_d</span></figure><p>,也即是对每个特征</p><figure class=""><span>x_i</span></figure><p>配一个</p><figure class=""><span>k</span></figure><p>维实数向量</p><figure class=""><span>\boldsymbol v_i</span></figure><p>,用矩阵乘法直接计算可以得到 </p><figure class=""><span>w_{ij} = \langle \boldsymbol v_i, \boldsymbol v_j \rangle</span></figure><p>,这样模型的预测函数可以写为</p><figure class=""><span>\hat y(\boldsymbol x) = \theta_0 + \boldsymbol\theta^\mathrm{T} \boldsymbol x + \sum_{i=1}^{d-1}\sum_{j=i+1}^d \langle \boldsymbol v_i, \boldsymbol v_j \rangle x_ix_j</span></figure><p> 此时,再对参数</p><figure class=""><span>\boldsymbol v_s</span></figure><p>求梯度的结果为</p><figure class=""><span>\begin{aligned} \nabla_{\boldsymbol v_s} \hat y &amp;= \nabla_{\boldsymbol v_s} \left(\sum_{i=1}^{d-1}\sum_{j=i+1}^d \langle \boldsymbol v_i, \boldsymbol v_j \rangle x_ix_j \right) \\ &amp;= \nabla_{\boldsymbol v_s} \left( \sum_{j=s+1}^d \langle \boldsymbol v_s, \boldsymbol v_j\rangle x_sx_j + \sum_{i=1}^{s-1} \langle \boldsymbol v_i, \boldsymbol v_s \rangle x_ix_s \right) \\ &amp;= x_s \sum_{j=s+1}^d x_j\boldsymbol v_j + x_s \sum_{i=1}^{s-1} x_i \boldsymbol v_i \\ &amp;= x_s \sum_{i=1}^d x_i \boldsymbol v_i - x_s^2 \boldsymbol v_s \end{aligned}</span></figure><p>  上面的计算过程中,为了简洁,我们采用了不太严谨的写法,当 </p><figure class=""><span>s=1</span></figure><p> 或 </p><figure class=""><span>s=d</span></figure><p> 时会出现求和下界大于上界的情况。此时我们规定求和的结果为零。如果要完全展开,只需要做类似于 </p><figure class=""><span>\sum\limits_{j=s+1}^d \langle \boldsymbol v_s, \boldsymbol v_j \rangle x_sx_j</span></figure><p> 变为 </p><figure class=""><span>\sum\limits_{j=s}^d \langle \boldsymbol v_s, \boldsymbol v_j \rangle x_sx_j - \langle \boldsymbol v_s, \boldsymbol v_s \rangle x_s^2</span></figure><p> 的裂项操作即可。从该结果中可以看出,只要 </p><figure class=""><span>x_s \neq 0</span></figure><p>,参数</p><figure class=""><span>\boldsymbol v_s</span></figure><p>的梯度就不为零,可以用梯度相关的算法对其更新。因此,即使特征向量</p><figure class=""><span>\boldsymbol x</span></figure><p>非常稀疏,FM模型也可以正常进行训练。</p><p>  至此,我们的模型还存在一个问题。双线性模型考虑不同特征之间乘积的做法,虽然提升了模型的能力,但也引入了额外的计算开销。对一个样本来说,线性模型需要计算</p><figure class=""><span>\boldsymbol\theta^\mathrm{T} \boldsymbol x</span></figure><p>,时间复杂度为</p><figure class=""><span>O(d)</span></figure><p>;而我们的模型需要计算每一对特征</p><figure class=""><span>(x_i,x_j)</span></figure><p>的乘积,以及参数</p><figure class=""><span>\boldsymbol v_i</span></figure><p>与</p><figure class=""><span>\boldsymbol v_j</span></figure><p>的内积,时间复杂度为</p><figure class=""><span>O(kd^2)</span></figure><p>。上面已经讲过,多热编码的特征向量维度常常特别高,因此这一时间开销是相当巨大的。但是,我们可以对改进后的预测函数 </p><figure class=""><span>\hat y(\boldsymbol x) = \theta_0 + \sum\limits_{i=1}^d \theta_i x_i + \sum\limits_{i=1}^{d-1}\sum\limits_{j=i+1}^d w_{ij}x_ix_j</span></figure><p> 中的最后一项做一些变形,改变计算顺序来降低时间复杂度。变形方式如下:</p><figure class=""><span>\begin{aligned} \sum_{i=1}^{d-1}\sum_{j=i+1}^d \langle \boldsymbol v_i, \boldsymbol v_j \rangle x_ix_j &amp;= \frac{1}{2} \left(\sum_{i=1}^d\sum_{j=1}^d \langle \boldsymbol v_i, \boldsymbol v_j \rangle x_ix_j - \sum_{i=1}^d \langle \boldsymbol v_i, \boldsymbol v_i \rangle x_i^2 \right) \\ &amp;= \frac{1}{2} \left(\sum_{i=1}^d\sum_{j=1}^d \langle x_i \boldsymbol v_{i}, x_j \boldsymbol v_{j} \rangle - \sum_{i=1}^d \langle x_i \boldsymbol v_i, x_i \boldsymbol v_i \rangle \right) \\ &amp;= \frac12 \left\langle \sum_{i=1}^d x_i \boldsymbol v_i, \sum_{j=1}^d x_j \boldsymbol v_j\right\rangle - \frac12 \sum_{i=1}^d \langle x_i \boldsymbol v_i, x_i \boldsymbol v_i \rangle \\ &amp;= \frac12 \sum_{l=1}^k \left(\sum_{i=1}^d v_{il}x_i \right)^2 - \frac12 \sum_{l=1}^k \sum_{i=1}^d v_{il}^2x_i^2 \end{aligned}</span></figure><p>  在变形的第二步和第三步,我们利用了向量内积的双线性性质,将标量</p><figure class=""><span>x_i, x_j</span></figure><p>以及求和都移到内积中去。最后的结果中只含有两重求和,外层为</p><figure class=""><span>k</span></figure><p>次,内层为</p><figure class=""><span>d</span></figure><p>次,因此整体的时间复杂度为</p><figure class=""><span>O(kd)</span></figure><p>。这样,FM的时间复杂度关于特征规模</p><figure class=""><span>d</span></figure><p>的增长从平方变为线性,得到了大幅优化。至此,FM的预测公式为</p><figure class=""><span>\hat y(\boldsymbol x) = \theta_0 + \sum_{i=1}^d \theta_i x_i + \frac12 \sum_{l=1}^k \left(\left(\sum_{i=1}^d v_{il}x_i \right)^2 - \sum_{i=1}^d v_{il}^2 x_i^2 \right)</span></figure><p> 如果要做分类任务,只需要再加上softmax函数即可。</p><p>  在上面的模型中,我们只考虑了两个特征之间的组合,因此该FM也被称为二阶FM。如果进一步考虑多个特征的组合,如</p><figure class=""><span>x_ix_jx_k</span></figure><p>,就可以得到高阶的FM模型。由于高阶FM较为复杂,并且也不再是双线性模型,本文在此略去,如果感兴趣可以自行查阅相关资料。</p><h4 id="8437" name="%E5%9B%9B%E3%80%81%E5%8A%A8%E6%89%8B%E5%AE%9E%E7%8E%B0%E5%9B%A0%E5%AD%90%E5%88%86%E8%A7%A3%E6%9C%BA">四、动手实现因子分解机</h4><p>  下面,我们来动手实现二阶FM模型。本节采用的数据集是为FM制作的示例数据集<code>fm_dataset.csv</code>,包含了某个用户浏览过的商品的特征,以及用户是否点击过这个商品。数据集的每一行包含一个商品,前24列是其特征,最后一列是0或1,分别表示用户没有或有点击该商品。我们的目标是根据输入特征预测用户在测试集上的行为,是一个二分类问题。我们先导入必要的模块和数据集并处理数据,将其划分为训练集和测试集。</p><div class="rno-markdown-code"><div class="rno-markdown-code-toolbar"><div class="rno-markdown-code-toolbar-info"><div class="rno-markdown-code-toolbar-item is-type"><span class="is-m-hidden">代码语言:</span>javascript</div><div class="rno-markdown-code-toolbar-item is-num"><i class="icon-code"></i><span class="is-m-hidden">代码</span>运行次数:<!-- -->0</div></div><div class="rno-markdown-code-toolbar-opt"><div class="rno-markdown-code-toolbar-copy"><i class="icon-copy"></i><span class="is-m-hidden">复制</span></div><button class="rno-markdown-code-toolbar-run"><i class="icon-run"></i><span class="is-m-hidden">Cloud Studio</span> 代码运行</button></div></div><div class="developer-code-block"><pre class="prism-token token line-numbers language-javascript"><code class="language-javascript" style="margin-left:0">import numpy as np import matplotlib.pyplot as plt from sklearn import metrics # sklearn中的评价指标函数库 from tqdm import tqdm # 导入数据集 data = np.loadtxt(&#x27;fm_dataset.csv&#x27;, delimiter=&#x27;,&#x27;) # 划分数据集 np.random.seed(0) ratio = 0.8 split = int(ratio * len(data)) x_train = data[:split, :-1] y_train = data[:split, -1] x_test = data[split:, :-1] y_test = data[split:, -1] # 特征数 feature_num = x_train.shape[1] print(&#x27;训练集大小:&#x27;, len(x_train)) print(&#x27;测试集大小:&#x27;, len(x_test)) print(&#x27;特征数:&#x27;, feature_num)</code></pre></div></div><figure class=""><div class="rno-markdown-img-url" style="text-align:center"><div class="rno-markdown-img-url-inner" style="width:38.79%"><div style="width:100%"><img src="https://developer.qcloudimg.com/http-save/yehe-11457362/ec7274cf206431985810125b8e23e4be.png" style="width:100%"/></div></div></div></figure><p>  然后,我们将FM模型定义成类。与MF相同,我们在类中实现预测和梯度更新方法。</p><div class="rno-markdown-code"><div class="rno-markdown-code-toolbar"><div class="rno-markdown-code-toolbar-info"><div class="rno-markdown-code-toolbar-item is-type"><span class="is-m-hidden">代码语言:</span>javascript</div><div class="rno-markdown-code-toolbar-item is-num"><i class="icon-code"></i><span class="is-m-hidden">代码</span>运行次数:<!-- -->0</div></div><div class="rno-markdown-code-toolbar-opt"><div class="rno-markdown-code-toolbar-copy"><i class="icon-copy"></i><span class="is-m-hidden">复制</span></div><button class="rno-markdown-code-toolbar-run"><i class="icon-run"></i><span class="is-m-hidden">Cloud Studio</span> 代码运行</button></div></div><div class="developer-code-block"><pre class="prism-token token line-numbers language-javascript"><code class="language-javascript" style="margin-left:0">class FM: def __init__(self, feature_num, vector_dim): # vector_dim代表公式中的k,为向量v的维度 self.theta0 = 0.0 # 常数项 self.theta = np.zeros(feature_num) # 线性参数 self.v = np.random.normal(size=(feature_num, vector_dim)) # 双线性参数 self.eps = 1e-6 # 精度参数 def _logistic(self, x): # 工具函数,用于将预测转化为概率 return 1 / (1 + np.exp(-x)) def pred(self, x): # 线性部分 linear_term = self.theta0 + x @ self.theta # 双线性部分 square_of_sum = np.square(x @ self.v) sum_of_square = np.square(x) @ np.square(self.v) # 最终预测 y_pred = self._logistic(linear_term + 0.5 * np.sum(square_of_sum - sum_of_square, axis=1)) # 为了防止后续梯度过大,对预测值进行裁剪,将其限制在某一范围内 y_pred = np.clip(y_pred, self.eps, 1 - self.eps) return y_pred def update(self, grad0, grad_theta, grad_v, lr): self.theta0 -= lr * grad0 self.theta -= lr * grad_theta self.v -= lr * grad_v</code></pre></div></div><p>  对于分类任务,我们仍用MLE作为训练时的损失函数。在测试集上,我们采用AUC作为评价指标。由于我们在<a style="color:#0052D9" class="" href="https://cloud.tencent.com/developer/article/2490776?from_column=20421&amp;from=20421" qct-click="" qct-exposure="" qct-area="链接-逻辑斯谛回归">逻辑斯谛回归</a>中已经动手实现过AUC,简单起见,这里我们就直接使用sklearn中的函数直接计算AUC。我们用SGD进行参数更新,训练完成后,我们把训练过程中的准确率和AUC绘制出来。</p><div class="rno-markdown-code"><div class="rno-markdown-code-toolbar"><div class="rno-markdown-code-toolbar-info"><div class="rno-markdown-code-toolbar-item is-type"><span class="is-m-hidden">代码语言:</span>javascript</div><div class="rno-markdown-code-toolbar-item is-num"><i class="icon-code"></i><span class="is-m-hidden">代码</span>运行次数:<!-- -->0</div></div><div class="rno-markdown-code-toolbar-opt"><div class="rno-markdown-code-toolbar-copy"><i class="icon-copy"></i><span class="is-m-hidden">复制</span></div><button class="rno-markdown-code-toolbar-run"><i class="icon-run"></i><span class="is-m-hidden">Cloud Studio</span> 代码运行</button></div></div><div class="developer-code-block"><pre class="prism-token token line-numbers language-javascript"><code class="language-javascript" style="margin-left:0"># 超参数设置,包括学习率、训练轮数等 vector_dim = 16 learning_rate = 0.01 lbd = 0.05 max_training_step = 200 batch_size = 32 # 初始化模型 np.random.seed(0) model = FM(feature_num, vector_dim) train_acc = [] test_acc = [] train_auc = [] test_auc = [] with tqdm(range(max_training_step)) as pbar: for epoch in pbar: st = 0 while st &lt; len(x_train): ed = min(st + batch_size, len(x_train)) X = x_train[st: ed] Y = y_train[st: ed] st += batch_size # 计算模型预测 y_pred = model.pred(X) # 计算交叉熵损失 cross_entropy = -Y * np.log(y_pred) - (1 - Y) * np.log(1 - y_pred) loss = np.sum(cross_entropy) # 计算损失函数对y的梯度,再根据链式法则得到总梯度 grad_y = (y_pred - Y).reshape(-1, 1) # 计算y对参数的梯度 # 常数项 grad0 = np.sum(grad_y * (1 / len(X) + lbd)) # 线性项 grad_theta = np.sum(grad_y * (X / len(X) + lbd * model.theta), axis=0) # 双线性项 grad_v = np.zeros((feature_num, vector_dim)) for i, x in enumerate(X): # 先计算sum(x_i * v_i) xv = x @ model.v grad_vi = np.zeros((feature_num, vector_dim)) for s in range(feature_num): grad_vi[s] += x[s] * xv - (x[s] ** 2) * model.v[s] grad_v += grad_y[i] * grad_vi grad_v = grad_v / len(X) + lbd * model.v model.update(grad0, grad_theta, grad_v, learning_rate) pbar.set_postfix({ &#x27;训练轮数&#x27;: epoch, &#x27;训练损失&#x27;: f&#x27;{loss:.4f}&#x27;, &#x27;训练集准确率&#x27;: train_acc[-1] if train_acc else None, &#x27;测试集准确率&#x27;: test_acc[-1] if test_acc else None }) # 计算模型预测的准确率和AUC # 预测准确率,阈值设置为0.5 y_train_pred = (model.pred(x_train) &gt;= 0.5) acc = np.mean(y_train_pred == y_train) train_acc.append(acc) auc = metrics.roc_auc_score(y_train, y_train_pred) # sklearn中的AUC函数 train_auc.append(auc) y_test_pred = (model.pred(x_test) &gt;= 0.5) acc = np.mean(y_test_pred == y_test) test_acc.append(acc) auc = metrics.roc_auc_score(y_test, y_test_pred) test_auc.append(auc) print(f&#x27;测试集准确率:{test_acc[-1]},\t测试集AUC:{test_auc[-1]}&#x27;)</code></pre></div></div><figure class=""><div class="rno-markdown-img-url" style="text-align:center"><div class="rno-markdown-img-url-inner" style="width:100%"><div style="width:100%"><img src="https://developer.qcloudimg.com/http-save/yehe-11457362/1b252155ad71fe9dc4617579aa5b1dda.png" style="width:100%"/></div></div></div></figure><p>  最后,我们把训练过程中在训练集和测试集上的精确率和AUC绘制出来,观察训练效果。</p><div class="rno-markdown-code"><div class="rno-markdown-code-toolbar"><div class="rno-markdown-code-toolbar-info"><div class="rno-markdown-code-toolbar-item is-type"><span class="is-m-hidden">代码语言:</span>javascript</div><div class="rno-markdown-code-toolbar-item is-num"><i class="icon-code"></i><span class="is-m-hidden">代码</span>运行次数:<!-- -->0</div></div><div class="rno-markdown-code-toolbar-opt"><div class="rno-markdown-code-toolbar-copy"><i class="icon-copy"></i><span class="is-m-hidden">复制</span></div><button class="rno-markdown-code-toolbar-run"><i class="icon-run"></i><span class="is-m-hidden">Cloud Studio</span> 代码运行</button></div></div><div class="developer-code-block"><pre class="prism-token token line-numbers language-javascript"><code class="language-javascript" style="margin-left:0"># 绘制训练曲线 plt.figure(figsize=(13, 5)) x_plot = np.arange(len(train_acc)) + 1 plt.subplot(121) plt.plot(x_plot, train_acc, color=&#x27;blue&#x27;, label=&#x27;train acc&#x27;) plt.plot(x_plot, test_acc, color=&#x27;red&#x27;, ls=&#x27;--&#x27;, label=&#x27;test acc&#x27;) plt.xlabel(&#x27;Epoch&#x27;) plt.ylabel(&#x27;Accuracy&#x27;) plt.legend() plt.subplot(122) plt.plot(x_plot, train_auc, color=&#x27;blue&#x27;, label=&#x27;train AUC&#x27;) plt.plot(x_plot, test_auc, color=&#x27;red&#x27;, ls=&#x27;--&#x27;, label=&#x27;test AUC&#x27;) plt.xlabel(&#x27;Epoch&#x27;) plt.ylabel(&#x27;AUC&#x27;) plt.legend() plt.show()</code></pre></div></div><figure class=""><div class="rno-markdown-img-url" style="text-align:center"><div class="rno-markdown-img-url-inner" style="width:100%"><div style="width:100%"><img src="https://developer.qcloudimg.com/http-save/yehe-11457362/863daa24535df7f33075f94c952ade01.png" style="width:100%"/></div></div></div></figure><h4 id="8449" name="%E4%BA%94%E3%80%81%E6%8B%93%E5%B1%95%EF%BC%9A%E6%A6%82%E7%8E%87%E7%9F%A9%E9%98%B5%E5%88%86%E8%A7%A3">五、拓展:概率矩阵分解</h4><p> 概率矩阵分解(probabilistic matrix factorization,PMF)是另一种常用的双线性模型。与矩阵分解模型不同,它对用户给电影的评分</p><figure class=""><span>r_{ij}</span></figure><p>的分布进行了先验假设,认为其满足正态分布:</p><figure class=""><span>r_{ij} \sim \mathcal{N}(\boldsymbol p_i^\mathrm{T} \boldsymbol q_j, \sigma^2)</span></figure><p> 其中</p><figure class=""><span>\sigma^2</span></figure><p>是正态分布的方差,与用户和电影无关。注意,</p><figure class=""><span>\boldsymbol p_i</span></figure><p>与</p><figure class=""><span>\boldsymbol q_j</span></figure><p>都是未知的。记 </p><figure class=""><span>I_{ij} = \mathbb{I}(r_{ij} \text{存在})</span></figure><p>,即当用户</p><figure class=""><span>i</span></figure><p>对电影</p><figure class=""><span>j</span></figure><p>打过分时 </p><figure class=""><span>I_{ij}=1</span></figure><p>,否则 </p><figure class=""><span>I_{ij}=0</span></figure><p>。再假设不同的评分采样之间互相独立,那么,我们观测到的</p><figure class=""><span>\boldsymbol R</span></figure><p>出现的概率是</p><figure class=""><span>P(\boldsymbol R | \boldsymbol P, \boldsymbol Q, \sigma) = \prod_{i=1}^N\prod_{j=1}^M p_\mathcal{N}(r_{ij}| \boldsymbol p_i^\mathrm{T} \boldsymbol q_j, \sigma^2)^{I_{ij}}</span></figure><p> 这里,我们用 </p><figure class=""><span>p_\mathcal{N}(x|\mu,\sigma^2)</span></figure><p> 表示正态分布 </p><figure class=""><span>\mathcal{N}(\mu, \sigma^2)</span></figure><p> 的概率密度函数,其完整表达式为 </p><figure class=""><span>p_\mathcal{N}(x|\mu,\sigma^2) = \frac{1}{\sqrt{2\pi\sigma^2}}\text{e}^{-\frac{(x-\mu)^2}{2\sigma^2}}</span></figure><p> 对于那些空缺的</p><figure class=""><span>r_{ij}</span></figure><p>,由于 </p><figure class=""><span>I_{ij}=0</span></figure><p>,</p><figure class=""><span>p_\mathcal{N}(r_{ij}|\boldsymbol p_i^\mathrm{T} \boldsymbol q_j, \sigma^2)^{I_{ij}}=1</span></figure><p>,对连乘没有贡献,最终的概率只由已知部分计算得出。接下来,我们进一步假设用户的喜好</p><figure class=""><span>\boldsymbol p_i</span></figure><p>和电影的特征</p><figure class=""><span>\boldsymbol q_j</span></figure><p>都满足均值为</p><figure class=""><span>\boldsymbol 0</span></figure><p>的正态分布,协方差矩阵分别为</p><figure class=""><span>\sigma_P^2\boldsymbol I</span></figure><p>和</p><figure class=""><span>\sigma_Q^2 \boldsymbol I</span></figure><p>,即</p><figure class=""><span>P(\boldsymbol P | \sigma_P) = \prod_{i=1}^N p_\mathcal{N}(\boldsymbol p_i| \boldsymbol 0, \sigma_P^2 \boldsymbol I), \quad P(\boldsymbol Q | \sigma_Q) = \prod_{j=1}^M p_\mathcal{N}(\boldsymbol q_j | \boldsymbol 0, \sigma_Q^2 \boldsymbol I)</span></figure><p> 根据全概率公式 </p><figure class=""><span>P(X,Y) = P(X|Y)P(Y)</span></figure><p>,并注意到</p><figure class=""><span>\boldsymbol R</span></figure><p>与</p><figure class=""><span>\sigma_P, \sigma_Q</span></figure><p>无关,我们可以计算出</p><figure class=""><span>\boldsymbol P</span></figure><p>与</p><figure class=""><span>\boldsymbol Q</span></figure><p>的后验概率为</p><figure class=""><span>\small\begin{aligned} P(\boldsymbol P, \boldsymbol Q | \boldsymbol R, \sigma, \sigma_P, \sigma_Q) &amp;= \frac{P(\boldsymbol P, \boldsymbol Q, \boldsymbol R, \sigma, \sigma_P, \sigma_Q)}{P(\boldsymbol R, \sigma, \sigma_P, \sigma_Q)} \\[2ex] &amp;= \frac{P(\boldsymbol R | \boldsymbol P, \boldsymbol Q, \sigma)P(\boldsymbol P, \boldsymbol Q | \sigma_P, \sigma_Q) P(\sigma, \sigma_P, \sigma_Q)}{P(\boldsymbol R, \sigma, \sigma_P, \sigma_Q)} \\[2ex] &amp;= C \cdot P(\boldsymbol R | \boldsymbol P, \boldsymbol Q, \sigma)P(\boldsymbol P|\sigma_P)P(\boldsymbol Q|\sigma_Q) \\ &amp;= C\prod_{i=1}^N\prod_{j=1}^M p_\mathcal{N}(r_{ij}| \boldsymbol p_i^\mathrm{T} \boldsymbol q_j, \sigma^2)^{I_{ij}} \cdot \prod_{i=1}^N p_\mathcal{N}(\boldsymbol p_i| \boldsymbol 0, \sigma_P^2 \boldsymbol I) \cdot \prod_{j=1}^M p_\mathcal{N}(\boldsymbol q_j | \boldsymbol 0, \sigma_Q^2 \boldsymbol I) \end{aligned}</span></figure><p> 其中</p><figure class=""><span>C</span></figure><p>是常数。为了简化这一表达式,我们利用与MLE中相同的技巧,将上式取对数,从而把连乘变为求和:</p><figure class=""><span>\begin{aligned} \log P(\boldsymbol P, \boldsymbol Q | \boldsymbol R, \sigma, \sigma_P, \sigma_Q) &amp;= \sum_{i=1}^N\sum_{j=1}^M I_{ij} \log p_\mathcal{N}(r_{ij} | \boldsymbol p_i^\mathrm{T} \boldsymbol q_j, \sigma^2) + \sum_{i=1}^N \log p_\mathcal{N}(\boldsymbol p_i| \boldsymbol 0, \sigma_P^2 \boldsymbol I) \\ &amp;\quad+ \sum_{j=1}^M \log p_\mathcal{N}(\boldsymbol q_j | \boldsymbol 0, \sigma_Q^2 \boldsymbol I) + \log C \end{aligned}</span></figure><p> 再代入</p><figure class=""><span>p_\mathcal{N}</span></figure><p>取对数后的表达式</p><figure class=""><span>\log p_\mathcal{N}(x|\mu, \sigma^2) = -\frac12 \log (2\pi\sigma^2) - \frac{(x-\mu)^2}{2\sigma^2}</span></figure><p> 计算得到</p><figure class=""><span>\small\begin{aligned} \log P(\boldsymbol P, \boldsymbol Q | \boldsymbol R, \sigma, \sigma_P, \sigma_Q) &amp;= -\frac12 \log(2\pi\sigma^2) \sum_{i=1}^N\sum_{j=1}^M I_{ij} - \frac{1}{2\sigma^2}\sum_{i=1}^N\sum_{j=1}^M I_{ij}(r_{ij} - \boldsymbol p_i^\mathrm{T} \boldsymbol q_j)^2 \\ &amp;\quad-\frac{Nd}{2} \log(2\pi\sigma_P^2) - \frac{1}{2\sigma_P^2}\sum_{i=1}^N \boldsymbol p_i^\mathrm{T} \boldsymbol p_i \\ &amp;\quad-\frac{Md}{2} \log(2\pi\sigma_Q^2) - \frac{1}{2\sigma_Q^2}\sum_{j=1}^M \boldsymbol q_j^\mathrm{T} \boldsymbol q_j + \log C \\ &amp;= -\frac{1}{\sigma^2} \left[\frac12 \sum_{i=1}^N\sum_{j=1}^M I_{ij}(r_{ij} - \boldsymbol p_i^\mathrm{T} \boldsymbol q_j)^2 + \frac{\lambda_P}{2} \lVert \boldsymbol P \lVert_F^2 + \frac{\lambda_Q}{2} \lVert \boldsymbol Q \lVert_F^2 \right] + C_1 \end{aligned}</span></figure><p> 其中,</p><figure class=""><span>\lambda_P = \sigma^2/\sigma_P^2</span></figure><p>,</p><figure class=""><span>\lambda_Q = \sigma^2 / \sigma_Q^2</span></figure><p>,</p><figure class=""><span>C_1</span></figure><p>是与参数</p><figure class=""><span>\boldsymbol P</span></figure><p>和</p><figure class=""><span>\boldsymbol Q</span></figure><p>无关的常数。根据最大似然的思想,我们应当最大化上面计算出的对数概率。因此,定义损失函数为</p><figure class=""><span>J(\boldsymbol P, \boldsymbol Q) = \frac12 \sum_{i=1}^N\sum_{j=1}^M I_{ij}(r_{ij} - \boldsymbol p_i^\mathrm{T} \boldsymbol q_j)^2 + \frac{\lambda_P}{2} \lVert \boldsymbol P \lVert_F^2 + \frac{\lambda_Q}{2} \lVert \boldsymbol Q \lVert_F^2</span></figure><p> 于是,最大化对数概率就等价于最小化损失函数</p><figure class=""><span>J(\boldsymbol P, \boldsymbol Q)</span></figure><p>。并且,这一损失函数恰好为目标值</p><figure class=""><span>r_{ij}</span></figure><p>与参数内积</p><figure class=""><span>\boldsymbol p_i^\mathrm{T} \boldsymbol q_i</span></figure><p>之间的平方损失,再加上</p><figure class=""><span>L_2</span></figure><p>正则化的形式。由于向量内积是双线性函数,PMF模型也属于双线性模型的一种。</p><p>  将损失函数对</p><figure class=""><span>\boldsymbol p_i</span></figure><p>求导,得到</p><figure class=""><span>\nabla_{\boldsymbol p_i} J(\boldsymbol P, \boldsymbol Q) = \sum_{j=1}^M I_{ij}(r_{ij} - \boldsymbol p_i^\mathrm{T} \boldsymbol q_j) \boldsymbol q_j - \lambda_P \boldsymbol p_i</span></figure><p> 令梯度为零,解得</p><figure class=""><span>\boldsymbol p_i = \left(\sum_{j=1}^MI_{ij}\boldsymbol q_j\boldsymbol q_j^\mathrm{T} + \lambda_P \boldsymbol I\right)^{-1} \left(\sum_{j=1}^M I_{ij}r_{ij}\boldsymbol q_j\right)</span></figure><p>  在<a style="color:#0052D9" class="" href="https://cloud.tencent.com/developer/article/2490796?from_column=20421&amp;from=20421" qct-click="" qct-exposure="" qct-area="链接-正则化约束">正则化约束</a>一节中我们讲过,根据矩阵相关的理论,只要</p><figure class=""><span>\lambda_P</span></figure><p>足够大,上式的第一项逆矩阵就总是存在。同理,对</p><figure class=""><span>\boldsymbol q_j</span></figure><p>也有类似的结果。因此,我们可以通过如上形式的</p><figure class=""><span>J(\boldsymbol P, \boldsymbol Q)</span></figure><p>来求解参数</p><figure class=""><span>\boldsymbol P</span></figure><p>与</p><figure class=""><span>\boldsymbol Q</span></figure><p>。在参数的高斯分布假设下,我们自然导出了带有</p><figure class=""><span>L_2</span></figure><p>正则化的MF模型,这并不是偶然。我们会在概率图模型中进一步阐释其中的原理。</p><blockquote><p> <strong>附</strong>:以上文中的数据集及相关资源下载地址: 链接:<a style="color:#0052D9" class="" href="/developer/tools/blog-entry?target=https%3A%2F%2Fpan.quark.cn%2Fs%2F0f31109b2b13&amp;objectId=2490777&amp;objectType=1&amp;isNewArticle=undefined" qct-click="" qct-exposure="" qct-area="链接-https://pan.quark.cn/s/0f31109b2b13">https://pan.quark.cn/s/0f31109b2b13</a> 提取码:gTBK</p></blockquote></div></div></div><div class="mod-content__source"><div class="mod-content__source-inner"><div class="mod-content__source-title">本文参与 <a href="/developer/support-plan" target="_blank">腾讯云自媒体同步曝光计划</a>,分享自作者个人站点/博客。</div><div class="mod-content__source-desc"> 原始发表:2024-08-22,<!-- -->如有侵权请联系 <a href="mailto:cloudcommunity@tencent.com">cloudcommunity@tencent.com</a> 删除</div></div><div class="mod-content__source-btn"><button class="cdc-btn cdc-btn--hole">前往查看</button></div></div><div class="mod-statement-m"><div class="cdc-tag__list mod-content__tags" track-click=""><div class="cdc-tag" track-click="" track-exposure=""><a class="cdc-tag__inner" href="/developer/tag/10719" target="_blank">监督学习</a></div><div class="cdc-tag" track-click="" track-exposure=""><a class="cdc-tag__inner" href="/developer/tag/17290" target="_blank">函数</a></div><div class="cdc-tag" track-click="" track-exposure=""><a class="cdc-tag__inner" href="/developer/tag/17381" target="_blank">模型</a></div><div class="cdc-tag" track-click="" track-exposure=""><a class="cdc-tag__inner" href="/developer/tag/17440" target="_blank">数据</a></div><div class="cdc-tag" track-click="" track-exposure=""><a class="cdc-tag__inner" href="/developer/tag/10149" target="_blank">机器学习</a></div></div><div class="mod-content__statement"><p>本文分享自 <span>作者个人站点/博客</span> <span style="color:#0052d9;cursor:pointer">前往查看</span></p><p>如有侵权,请联系 <a href="mailto:cloudcommunity@tencent.com">cloudcommunity@tencent.com</a> 删除。</p><p class="mod-content__statement-tip">本文参与 <a href="/developer/support-plan" target="_blank">腾讯云自媒体同步曝光计划</a>  ,欢迎热爱写作的你一起参与!</p></div></div><div class="cdc-tag__list mod-content__tags" track-click=""><div class="cdc-tag" track-click="" track-exposure=""><a class="cdc-tag__inner" href="/developer/tag/10719" target="_blank">监督学习</a></div><div class="cdc-tag" track-click="" track-exposure=""><a class="cdc-tag__inner" href="/developer/tag/17290" target="_blank">函数</a></div><div class="cdc-tag" track-click="" track-exposure=""><a class="cdc-tag__inner" href="/developer/tag/17381" target="_blank">模型</a></div><div class="cdc-tag" track-click="" track-exposure=""><a class="cdc-tag__inner" href="/developer/tag/17440" target="_blank">数据</a></div><div class="cdc-tag" track-click="" track-exposure=""><a class="cdc-tag__inner" href="/developer/tag/10149" target="_blank">机器学习</a></div></div></div></div><div class="mod-article-content is-pill-hidden"><div class="mod-comment"><div class="mod-relevant__title">评论</div><div class="cdc-comment-response"><div class="cdc-comment-response-single-edit not-logged"><div class="cdc-comment-response-single-edit__inner"><span class="cdc-avatar cdc-comment-response-single-edit__avatar cdc-comment__avatar circle"><span class="cdc-avatar__inner" style="background-image:url(https://qcloudimg.tencent-cloud.cn/raw/2eca91c9c29816ff056d22815949d83c.png)" target="_blank"></span></span><div class="cdc-comment-response-single-edit__main"><span>登录</span>后参与评论</div></div></div><div class="cdc-comment-response__toolbar"><div class="cdc-comment-response__number">0<!-- --> 条评论</div><div class="cdc-comment-response__segment"><div class="cdc-comment-response__segment-item is-active">热度</div><div class="cdc-comment-response__segment-item">最新</div></div></div><div class="cdc-comment-response-inner"><div class="cdc-comment-response__body"><div><div class="cdc-loading"><div class="cdc-loading__inner"><div class="cdc-loading__item one"></div><div class="cdc-loading__item two"></div><div class="cdc-loading__item three"></div></div></div></div></div></div><div class="cdc-operate-footer"><div class="cdc-operate-footer__inner"><div class="cdc-operate-footer__toggle is-logout"><div class="cdc-operate-footer__toggle-text"><span>登录 </span>后参与评论</div></div></div></div></div></div></div><div class="mod-article-content recommend"><div class="mod-relevant" qct-area="推荐阅读" qct-exposure=""><div class="mod-relevant__title recommend-read">推荐阅读</div><div class="t-divider t-divider--horizontal" style="margin-bottom:0;margin-top:10px"></div></div></div></div><div class="cdc-layout__side"><div class="cdc-personal-info2 mod-author"><div class="cdc-personal-info2__inner"><div class="cdc-personal-info2__detail"><div class="cdc-personal-info2__main"><div class="cdc-personal-info2__name"><a href="/developer/user/11457362" target="_blank" class="cdc-personal-info2__name-text"></a></div><div class="cdc-personal-info2__level"><div class="cdc-personal-info2__level-number">LV.</div><div class="cdc-emblems cdc-personal-info2__level-emblems"></div></div><div class="cdc-personal-info2__position"></div></div><div class="cdc-personal-info2__avatar"></div></div><div class="cdc-personal-info2__list"><a class="cdc-personal-info2__item" href="/developer/user/undefined/articles" target="_blank"><div class="cdc-personal-info2__item-text">文章</div><div class="cdc-personal-info2__item-number">0</div></a><a class="cdc-personal-info2__item" href="/developer/user/undefined" target="_blank"><div class="cdc-personal-info2__item-text">获赞</div><div class="cdc-personal-info2__item-number">0</div></a></div></div></div><div class="mod-sticky-act"><div class="cdc-directory is-just-commercial"><div class="cdc-directory__wrap"><div class="cdc-directory__inner"><div class="cdc-directory__hd">目录</div><div class="cdc-directory__bd"><div class="cdc-directory__bd-box"><ul class="cdc-directory__list level-3"><li class="cdc-directory__item"><span class="cdc-directory__target" id="menu-8131">一、矩阵分解</span></li></ul><ul class="cdc-directory__list level-3"><li class="cdc-directory__item"><span class="cdc-directory__target" id="menu-8242">二、动手实现矩阵分解</span></li></ul><ul class="cdc-directory__list level-3"><li class="cdc-directory__item"><span class="cdc-directory__target" id="menu-8303">三、因子分解机</span></li></ul><ul class="cdc-directory__list level-3"><li class="cdc-directory__item"><span class="cdc-directory__target" id="menu-8437">四、动手实现因子分解机</span></li></ul><ul class="cdc-directory__list level-3"><li class="cdc-directory__item"><span class="cdc-directory__target" id="menu-8449">五、拓展:概率矩阵分解</span></li></ul></div></div></div></div></div><div class="cdc-mod-product2"><div class="cdc-card" qct-exposure="" qct-area="相关产品与服务"><div class="cdc-card__inner"><div class="cdc-card__hd"><div class="cdc-card__title">相关产品与服务</div></div><div class="cdc-card__bd"><div class="cdc-product-info2__list"><div class="cdc-product-info2"><div class="cdc-product-info2__card-main"><div class="cdc-product-info2__card-name">腾讯云 TI 平台</div><div class="cdc-product-info2__card-desc">腾讯云 TI 平台(TencentCloud TI Platform)是基于腾讯先进 AI 能力和多年技术经验,面向开发者、政企提供的全栈式人工智能开发服务平台,致力于打通包含从数据获取、数据处理、算法构建、模型训练、模型评估、模型部署、到 AI 应用开发的产业 + AI 落地全流程链路,帮助用户快速创建和部署 AI 应用,管理全周期 AI 解决方案,从而助力政企单位加速数字化转型并促进 AI 行业生态共建。腾讯云 TI 平台系列产品支持公有云访问、私有化部署以及专属云部署。</div><div class="cdc-product-info2__card-list"><a target="_blank" href="https://cloud.tencent.com/product/ti?from=21341&amp;from_column=21341"><i class="product-icon introduce-icon"></i>产品介绍</a></div></div><div class="cdc-product-info2__activity"><a target="_blank" href="https://cloud.tencent.com/act/pro/Featured?from=21344&amp;from_column=21344"><i class="hot-icon"></i>精选特惠 拼团嗨购</a></div></div></div></div></div></div></div></div></div></div></div></div><div class="cdc-widget-global"><div class="cdc-widget-global__btn announcement"></div><div class="cdc-widget-global__btn code"><div class="cdc-widget-global__btn-tag">领券</div></div><div class="cdc-widget-global__btn top" style="visibility:hidden"></div></div><div class="cdc-footer"><div class="cdc-footer__inner"><div class="cdc-footer__main"><div class="cdc-footer__website"><ul class="cdc-footer__website-group"><li class="cdc-footer__website-column"><div class="cdc-footer__website-box"><h3 class="cdc-footer__website-title">社区</h3><ul class="cdc-footer__website-list"><li class="cdc-footer__website-item"><a href="/developer/column">技术文章</a></li><li class="cdc-footer__website-item"><a href="/developer/ask">技术问答</a></li><li class="cdc-footer__website-item"><a href="/developer/salon">技术沙龙</a></li><li class="cdc-footer__website-item"><a href="/developer/video">技术视频</a></li><li class="cdc-footer__website-item"><a href="/developer/learning">学习中心</a></li><li class="cdc-footer__website-item"><a href="/developer/techpedia">技术百科</a></li><li class="cdc-footer__website-item"><a href="/developer/zone/list">技术专区</a></li></ul></div></li><li class="cdc-footer__website-column"><div class="cdc-footer__website-box"><h3 class="cdc-footer__website-title">活动</h3><ul class="cdc-footer__website-list"><li class="cdc-footer__website-item"><a href="/developer/support-plan">自媒体同步曝光计划</a></li><li class="cdc-footer__website-item"><a href="/developer/support-plan-invitation">邀请作者入驻</a></li><li class="cdc-footer__website-item"><a href="/developer/article/1535830">自荐上首页</a></li><li class="cdc-footer__website-item"><a href="/developer/competition">技术竞赛</a></li></ul></div></li><li class="cdc-footer__website-column"><div class="cdc-footer__website-box"><h3 class="cdc-footer__website-title">资源</h3><ul class="cdc-footer__website-list"><li class="cdc-footer__website-item"><a href="/developer/specials">技术周刊</a></li><li class="cdc-footer__website-item"><a href="/developer/tags">社区标签</a></li><li class="cdc-footer__website-item"><a href="/developer/devdocs">开发者手册</a></li><li class="cdc-footer__website-item"><a href="/lab?from=20064&amp;from_column=20064">开发者实验室</a></li></ul></div></li><li class="cdc-footer__website-column"><div class="cdc-footer__website-box"><h3 class="cdc-footer__website-title">关于</h3><ul class="cdc-footer__website-list"><li class="cdc-footer__website-item"><a rel="nofollow" href="/developer/article/1006434">社区规范</a></li><li class="cdc-footer__website-item"><a rel="nofollow" href="/developer/article/1006435">免责声明</a></li><li class="cdc-footer__website-item"><a rel="nofollow" href="mailto:cloudcommunity@tencent.com">联系我们</a></li><li class="cdc-footer__website-item"><a rel="nofollow" href="/developer/friendlink">友情链接</a></li></ul></div></li></ul></div><div class="cdc-footer__qr"><h3 class="cdc-footer__qr-title">腾讯云开发者</h3><div class="cdc-footer__qr-object"><img src="https://qcloudimg.tencent-cloud.cn/raw/a8907230cd5be483497c7e90b061b861.png?imageView2/2/w/76" class="cdc-footer__qr-image" alt="扫码关注腾讯云开发者"/></div><div class="cdc-footer__qr-infos"><p class="cdc-footer__qr-info"><span class="cdc-footer__qr-text">扫码关注腾讯云开发者</span></p><p class="cdc-footer__qr-info"><span class="cdc-footer__qr-text">领取腾讯云代金券</span></p></div></div></div><div class="cdc-footer__recommend"><div class="cdc-footer__recommend-rows"><div class="cdc-footer__recommend-cell"><h3 class="cdc-footer__recommend-title">热门产品</h3><div class="cdc-footer__recommend-wrap"><ul class="cdc-footer__recommend-list"><li class="cdc-footer__recommend-item"><a class="com-2-footer-recommend-link" href="https://dnspod.cloud.tencent.com?from=20064&amp;from_column=20064">域名注册</a></li><li class="cdc-footer__recommend-item"><a class="com-2-footer-recommend-link" href="/product/cvm?from=20064&amp;from_column=20064">云服务器</a></li><li class="cdc-footer__recommend-item"><a class="com-2-footer-recommend-link" href="/product/tbaas?from=20064&amp;from_column=20064">区块链服务</a></li><li class="cdc-footer__recommend-item"><a class="com-2-footer-recommend-link" href="/product/mq?from=20064&amp;from_column=20064">消息队列</a></li><li class="cdc-footer__recommend-item"><a class="com-2-footer-recommend-link" href="/product/dsa?from=20064&amp;from_column=20064">网络加速</a></li><li class="cdc-footer__recommend-item"><a class="com-2-footer-recommend-link" href="/product/tencentdb-catalog?from=20064&amp;from_column=20064">云数据库</a></li><li class="cdc-footer__recommend-item"><a class="com-2-footer-recommend-link" href="/product/cns?from=20064&amp;from_column=20064">域名解析</a></li><li class="cdc-footer__recommend-item"><a class="com-2-footer-recommend-link" href="/product/cos?from=20064&amp;from_column=20064">云存储</a></li><li class="cdc-footer__recommend-item"><a class="com-2-footer-recommend-link" href="/product/css?from=20064&amp;from_column=20064">视频直播</a></li></ul></div></div><div class="cdc-footer__recommend-cell"><h3 class="cdc-footer__recommend-title">热门推荐</h3><div class="cdc-footer__recommend-wrap"><ul class="cdc-footer__recommend-list"><li class="cdc-footer__recommend-item"><a class="com-2-footer-recommend-link" href="/product/facerecognition?from=20064&amp;from_column=20064">人脸识别</a></li><li class="cdc-footer__recommend-item"><a class="com-2-footer-recommend-link" href="/product/tm?from=20064&amp;from_column=20064">腾讯会议</a></li><li class="cdc-footer__recommend-item"><a class="com-2-footer-recommend-link" href="/act/pro/enterprise2019?from=20064&amp;from_column=20064">企业云</a></li><li class="cdc-footer__recommend-item"><a class="com-2-footer-recommend-link" href="/product/cdn-scd?from=20064&amp;from_column=20064">CDN加速</a></li><li class="cdc-footer__recommend-item"><a class="com-2-footer-recommend-link" href="/product/trtc?from=20064&amp;from_column=20064">视频通话</a></li><li class="cdc-footer__recommend-item"><a class="com-2-footer-recommend-link" href="/product/tiia?from=20064&amp;from_column=20064">图像分析</a></li><li class="cdc-footer__recommend-item"><a class="com-2-footer-recommend-link" href="/product/cdb?from=20064&amp;from_column=20064">MySQL 数据库</a></li><li class="cdc-footer__recommend-item"><a class="com-2-footer-recommend-link" href="/product/symantecssl?from=20064&amp;from_column=20064">SSL 证书</a></li><li class="cdc-footer__recommend-item"><a class="com-2-footer-recommend-link" href="/product/asr?from=20064&amp;from_column=20064">语音识别</a></li></ul></div></div><div class="cdc-footer__recommend-cell"><h3 class="cdc-footer__recommend-title">更多推荐</h3><div class="cdc-footer__recommend-wrap"><ul class="cdc-footer__recommend-list"><li class="cdc-footer__recommend-item"><a class="com-2-footer-recommend-link" href="/solution/data_protection?from=20064&amp;from_column=20064">数据安全</a></li><li class="cdc-footer__recommend-item"><a class="com-2-footer-recommend-link" href="/product/clb?from=20064&amp;from_column=20064">负载均衡</a></li><li class="cdc-footer__recommend-item"><a class="com-2-footer-recommend-link" href="/product/sms?from=20064&amp;from_column=20064">短信</a></li><li class="cdc-footer__recommend-item"><a class="com-2-footer-recommend-link" href="/product/ocr?from=20064&amp;from_column=20064">文字识别</a></li><li class="cdc-footer__recommend-item"><a class="com-2-footer-recommend-link" href="/product/vod?from=20064&amp;from_column=20064">云点播</a></li><li class="cdc-footer__recommend-item"><a class="com-2-footer-recommend-link" href="https://tm.cloud.tencent.com?from=20064&amp;from_column=20064">商标注册</a></li><li class="cdc-footer__recommend-item"><a class="com-2-footer-recommend-link" href="/solution/la?from=20064&amp;from_column=20064">小程序开发</a></li><li class="cdc-footer__recommend-item"><a class="com-2-footer-recommend-link" href="/product/cat?from=20064&amp;from_column=20064">网站监控</a></li><li class="cdc-footer__recommend-item"><a class="com-2-footer-recommend-link" href="/product/cdm?from=20064&amp;from_column=20064">数据迁移</a></li></ul></div></div></div></div><div class="cdc-footer__copyright"><div class="cdc-footer__copyright-text"><p>Copyright © 2013 - <!-- -->2025<!-- --> Tencent Cloud. All Rights Reserved. 腾讯云 版权所有 </p><p>深圳市腾讯计算机系统有限公司 ICP备案/许可证号:<a href="https://beian.miit.gov.cn/#/Integrated/index" target="_blank">粤B2-20090059 </a><a href="https://www.beian.gov.cn/portal/index.do" target="_blank">深公网安备号 44030502008569</a></p><p>腾讯云计算(北京)有限责任公司 京ICP证150476号 |  <a href="https://beian.miit.gov.cn/#/Integrated/index" target="_blank">京ICP备11018762号</a> <!-- -->|<!-- --> <a href="https://www.beian.gov.cn/portal/index.do" target="_blank">京公网安备号11010802020287</a></p></div></div></div></div><div style="display:none"><a href="/developer/ask/archives.html">问题归档</a><a href="/developer/column/archives.html">专栏文章</a><a href="/developer/news/archives.html">快讯文章归档</a><a href="/developer/information/all.html">关键词归档</a><a href="/developer/devdocs/archives.html">开发者手册归档</a><a href="/developer/devdocs/sections_p1.html">开发者手册 Section 归档</a></div><div class="cdc-m-footer"><div class="cdc-m-footer__inner"><div class="cdc-m-footer__copyright"><p>Copyright © 2013 - <!-- -->2025<!-- --> Tencent Cloud.</p><p>All Rights Reserved. 腾讯云 版权所有</p></div></div></div><div class="cdc-operate-footer"><div class="cdc-operate-footer__inner"><div class="cdc-operate-footer__toggle is-logout"><div class="cdc-operate-footer__toggle-text"><span>登录 </span>后参与评论</div></div><div class="cdc-operate-footer__operations"><div class="cdc-operate-footer__operate"><i class="cdc-operate-footer__operate-icon comment"></i></div><div class="cdc-operate-footer__operate emoji"><div class="emoji-item"><span class="emoji-item-icon update"></span></div></div><div class="cdc-operate-footer__operate"><i class="cdc-operate-footer__operate-icon book"></i></div><div class="cdc-operate-footer__operate"><i class="cdc-operate-footer__operate-icon menu"></i></div><div class="cdc-operate-footer__operate"><i class="cdc-operate-footer__operate-icon more"></i></div></div></div></div><div class="cdc-suspend-pill"><div class="cdc-suspend-pill__inner"><button class="cdc-icon-btn cdc-suspend-pill__item emoji cdc-icon-btn--text"><div class="emoji-item"><span class="emoji-item-icon update"></span></div><span class="cdc-suspend-pill__item-number">0</span></button><button class="cdc-icon-btn cdc-suspend-pill__item like cdc-icon-btn--text"><span class="cdc-svg-icon-con"><span class="cdc-svg-icon" style="width:24px;height:24px"><svg xmlns="http://www.w3.org/2000/svg" width="24" height="24" viewBox="0 0 24 24" fill="currentcolor"><path fill-rule="evenodd" clip-rule="evenodd" d="M17.5 11.25C17.5 11.9404 16.9404 12.5 16.25 12.5C15.5596 12.5 15 11.9404 15 11.25C15 10.5596 15.5596 10 16.25 10C16.9404 10 17.5 10.5596 17.5 11.25Z M12.25 12.5C12.9404 12.5 13.5 11.9404 13.5 11.25C13.5 10.5596 12.9404 10 12.25 10C11.5596 10 11 10.5596 11 11.25C11 11.9404 11.5596 12.5 12.25 12.5Z M8.25 12.5C8.94036 12.5 9.5 11.9404 9.5 11.25C9.5 10.5596 8.94036 10 8.25 10C7.55964 10 7 10.5596 7 11.25C7 11.9404 7.55964 12.5 8.25 12.5Z M5 3C3.34315 3 2 4.34315 2 6V16C2 17.6569 3.34315 19 5 19H8.34311L10.5858 21.2426C11.3668 22.0237 12.6331 22.0237 13.4142 21.2426L15.6568 19H19C20.6569 19 22 17.6569 22 16V6C22 4.34315 20.6569 3 19 3H5ZM4 6C4 5.44772 4.44772 5 5 5H19C19.5523 5 20 5.44772 20 6V16C20 16.5523 19.5523 17 19 17H14.8284L12 19.8284L9.17154 17H5C4.44772 17 4 16.5523 4 16V6Z"></path></svg></span></span><span class="cdc-suspend-pill__item-number">0</span></button><button class="cdc-icon-btn cdc-suspend-pill__item collect cdc-icon-btn--text" qct-area="收藏文章" qct-click=""><span class="cdc-svg-icon-con"><span class="cdc-svg-icon" style="width:24px;height:24px"><svg width="24" height="24" viewBox="0 0 24 24" fill="currentcolor" xmlns="http://www.w3.org/2000/svg"><path fill-rule="evenodd" clip-rule="evenodd" d="M10.2057 3.11487C10.9393 1.62838 13.059 1.62838 13.7927 3.11487L15.9724 7.53141L20.8463 8.23963C22.4867 8.478 23.1418 10.4939 21.9547 11.651L18.4279 15.0888L19.2605 19.9431C19.5407 21.5769 17.8258 22.8228 16.3586 22.0514L11.9992 19.7596L7.63981 22.0514C6.17255 22.8228 4.45769 21.5769 4.73791 19.9431L5.57048 15.0888L2.04366 11.651C0.856629 10.4939 1.51165 8.478 3.15209 8.23963L8.02603 7.53141L10.2057 3.11487ZM11.9992 4L9.8195 8.41654C9.52818 9.00683 8.96504 9.41597 8.31363 9.51062L3.43969 10.2188L6.9665 13.6566C7.43787 14.1161 7.65297 14.7781 7.5417 15.4269L6.70913 20.2812L11.0685 17.9893C11.6512 17.683 12.3472 17.683 12.9299 17.9893L17.2893 20.2812L16.4567 15.4269C16.3454 14.7781 16.5605 14.1161 17.0319 13.6566L20.5587 10.2188L15.6848 9.51062C15.0333 9.41597 14.4702 9.00683 14.1789 8.41654L11.9992 4Z"></path></svg></span></span><span class="cdc-suspend-pill__item-number">0</span></button><button class="cdc-icon-btn cdc-suspend-pill__item cdc-icon-btn--text"><span class="cdc-svg-icon-con"><span class="cdc-svg-icon" style="width:24px;height:24px"><svg width="24" height="24" viewBox="0 0 24 24" fill="currentcolor" xmlns="http://www.w3.org/2000/svg"><path d="M13.0001 4V6H17.5859L10.1787 13.4072L11.6043 14.81L19.0001 7.41424V12H21.0001V4H13.0001Z"></path><path d="M3 12.9996C3 8.71646 5.99202 5.13211 10 4.22266V6.28952C7.10851 7.15007 5 9.82862 5 12.9996C5 16.8656 8.13401 19.9996 12 19.9996C15.1709 19.9996 17.8494 17.8912 18.71 14.9999H20.7769C19.8674 19.0077 16.2831 21.9996 12 21.9996C7.02944 21.9996 3 17.9702 3 12.9996Z"></path></svg></span></span></button><button class="cdc-icon-btn cdc-suspend-pill__item cdc-icon-btn--text"><span class="cdc-svg-icon-con"><span class="cdc-svg-icon" style="width:24px;height:24px"><svg width="24" height="24" viewBox="0 0 24 24" fill="currentcolor" xmlns="http://www.w3.org/2000/svg"><path fill-rule="evenodd" clip-rule="evenodd" d="M2 6C2 4.34315 3.34315 3 5 3H17C18.6569 3 20 4.34315 20 6V11H18V6C18 5.44772 17.5523 5 17 5H5C4.44772 5 4 5.44772 4 6V18C4 18.5523 4.44772 19 5 19H12V21H5C3.34315 21 2 19.6569 2 18V6ZM6 8H12V10H6V8ZM6 12H15V14H6V12ZM22 16H19V13H17V16H14V18H17V21H19V18H22V16Z"></path></svg></span></span></button><div class="cdc-suspend-pill__line"></div><button class="cdc-icon-btn cdc-suspend-pill__item cdc-icon-btn--text"><span class="cdc-svg-icon-con"><span class="cdc-svg-icon" style="width:24px;height:24px"><svg width="24" height="24" viewBox="0 0 24 24" fill="currentcolor" xmlns="http://www.w3.org/2000/svg"><path d="M16.5047 6H13V4H20V10.876H18V7.33313L14.4571 10.876L13.0429 9.46182L16.5047 6Z"></path><path d="M11 6.00006H7.4953L10.9571 9.46189L9.54291 10.8761L6 7.33319V10.8761H4V4.00006H11V6.00006Z"></path><path d="M7.4953 18.8761H11V20.8761H4V14.0001H6V17.543L9.54291 14.0001L10.9571 15.4143L7.4953 18.8761Z"></path><path d="M16.5047 18.8761H13V20.8761H20V14.0001H18V17.543L14.4571 14.0001L13.0429 15.4143L16.5047 18.8761Z"></path></svg></span></span></button><button class="cdc-icon-btn cdc-suspend-pill__item recommend cdc-icon-btn--text" track-click="{&quot;areaId&quot;:106019,&quot;recPolicyId&quot;:1002,&quot;elementId&quot;:2}" track-exposure="{&quot;areaId&quot;:106019,&quot;recPolicyId&quot;:1002,&quot;elementId&quot;:2}"><span class="cdc-svg-icon-con"><span class="cdc-svg-icon" style="width:24px;height:24px"><svg width="24" height="24" viewBox="0 0 24 24" fill="currentcolor" xmlns="http://www.w3.org/2000/svg"><path d="M5 8H10V10H5V8Z"></path><path d="M10 12H5V14H10V12Z"></path><path d="M14 8H19V10H14V8Z"></path><path d="M19 12H14V14H19V12Z"></path><path fill-rule="evenodd" clip-rule="evenodd" d="M11 20.608L9.57047 20.1996C8.83303 19.9889 8.05701 19.9506 7.30243 20.0878L4.35777 20.6232C3.13009 20.8464 2 19.9033 2 18.6555V5.2669C2 4.2325 2.78877 3.36877 3.81893 3.27512L6.52892 3.02875C7.95704 2.89892 9.39058 3.21084 10.6356 3.9223L12 4.70194L13.3644 3.9223C14.6094 3.21084 16.043 2.89892 17.4711 3.02875L20.1811 3.27512C21.2112 3.36877 22 4.2325 22 5.2669V18.6555C22 19.9033 20.8699 20.8464 19.6422 20.6232L16.6976 20.0878C15.943 19.9506 15.167 19.9889 14.4295 20.1996L13 20.608L12.5 20.8535L12 20.8937L11.5 20.8535L11 20.608ZM6.70999 5.02054C7.73007 4.9278 8.75403 5.1506 9.64336 5.65879L11 6.43401V18.528L10.1199 18.2765C9.0875 17.9815 8.00107 17.928 6.94466 18.1201L4 18.6555V5.2669L6.70999 5.02054ZM13 18.528L13.8801 18.2765C14.9125 17.9815 15.9989 17.928 17.0553 18.1201L20 18.6555V5.2669L17.29 5.02054C16.2699 4.9278 15.246 5.1506 14.3566 5.65879L13 6.43401V18.528Z"></path></svg></span></span><span class="cdc-suspend-pill__item-text">推荐</span></button></div></div></div></div></div><script> if (!String.prototype.replaceAll) { String.prototype.replaceAll = function (str, newStr) { // If a regex pattern if (Object.prototype.toString.call(str).toLowerCase() === '[object regexp]') { return this.replace(str, newStr); } // If a string return this.replace(new RegExp(str, 'g'), newStr); }; } </script><script src="https://developer.qcloudimg.com/static/jquery.min.js"></script><script src="https://cloud.tencent.com/qccomponent/login/api.js"></script><script src="https://cloudcache.tencent-cloud.com/qcloud/main/scripts/release/common/vendors/react/react.16.8.6.min.js"></script><script src="https://qccommunity-1258344699.cos.ap-guangzhou.myqcloud.com/tc_player/releasev5.1.0/libs/TXLivePlayer-1.3.5.min.js" defer=""></script><script src="https://qccommunity-1258344699.cos.ap-guangzhou.myqcloud.com/tc_player/releasev5.1.0/libs/hls.min.1.1.7.js"></script><script src="https://qccommunity-1258344699.cos.ap-guangzhou.myqcloud.com/tc_player/releasev5.1.0/tcplayer.v5.1.0.min.js"></script><script id="__NEXT_DATA__" type="application/json">{"props":{"isMobile":false,"isSupportWebp":false,"currentDomain":"cloud.tencent.com","baseUrl":"https://cloud.tencent.com","reqId":"OM6y5UPsk4aKf9zl_jLcL","query":{"articleId":"2490777"},"platform":"other","env":"production","__N_SSP":true,"pageProps":{"fallback":{"#url:\"/api/article/detail\",params:#articleId:2490777,,":{"articleData":{"articleId":2490777,"codeLineNum":265,"readingTime":31659,"wordsNum":145001},"articleInfo":{"articleId":2490777,"channel":2,"commentNum":0,"content":{"blocks":[{"key":"8089","type":"unstyled","text":"机器学习是一门人工智能的分支学科,通过算法和模型让计算机从数据中学习,进行模型训练和优化,做出预测、分类和决策支持。Python成为机器学习的首选语言,依赖于强大的开源库如Scikit-learn、TensorFlow和PyTorch。本专栏介绍机器学习的相关算法以及基于Python的算法实现。","depth":0,"inlineStyleRanges":[],"entityRanges":[]},{"key":"8090","type":"unstyled","text":" 【GitCode】专栏资源保存在我的GitCode仓库:https://gitcode.com/Morse_Chen/Python_machine_learning。","depth":0,"inlineStyleRanges":[],"entityRanges":[{"key":0,"offset":29,"length":54}]},{"key":"8091","type":"unstyled","text":"  从本文开始,我们介绍参数化模型中的非线性模型。在前几篇文章中,我们介绍了线性回归与逻辑斯谛回归模型。这两个模型都有一个共同的特征:包含线性预测因子","depth":0,"inlineStyleRanges":[],"entityRanges":[{"key":1,"offset":38,"length":4},{"key":2,"offset":43,"length":6}]},{"type":"atomic","text":"\\boldsymbol\\theta^\\mathrm{T}\\boldsymbol x","data":{"mathjax":true,"teX":"\\boldsymbol\\theta^\\mathrm{T}\\boldsymbol x"},"inlineStyleRanges":[],"entityRanges":[]},{"key":"8093","type":"unstyled","text":"。将该因子看作","depth":0,"inlineStyleRanges":[],"entityRanges":[]},{"type":"atomic","text":"\\boldsymbol x","data":{"mathjax":true,"teX":"\\boldsymbol x"},"inlineStyleRanges":[],"entityRanges":[]},{"key":"8095","type":"unstyled","text":"的函数,如果输入","depth":0,"inlineStyleRanges":[],"entityRanges":[]},{"type":"atomic","text":"\\boldsymbol x","data":{"mathjax":true,"teX":"\\boldsymbol x"},"inlineStyleRanges":[],"entityRanges":[]},{"key":"8097","type":"unstyled","text":"变为原来的","depth":0,"inlineStyleRanges":[],"entityRanges":[]},{"type":"atomic","text":"\\lambda","data":{"mathjax":true,"teX":"\\lambda"},"inlineStyleRanges":[],"entityRanges":[]},{"key":"8099","type":"unstyled","text":"倍,那么输出为 ","depth":0,"inlineStyleRanges":[],"entityRanges":[]},{"type":"atomic","text":"\\boldsymbol\\theta^\\mathrm{T}(\\lambda \\boldsymbol x) = \\lambda \\boldsymbol\\theta^\\mathrm{T} \\boldsymbol x","data":{"mathjax":true,"teX":"\\boldsymbol\\theta^\\mathrm{T}(\\lambda \\boldsymbol x) = \\lambda \\boldsymbol\\theta^\\mathrm{T} \\boldsymbol x"},"inlineStyleRanges":[],"entityRanges":[]},{"key":"8101","type":"unstyled","text":",也变成原来的","depth":0,"inlineStyleRanges":[],"entityRanges":[]},{"type":"atomic","text":"\\lambda","data":{"mathjax":true,"teX":"\\lambda"},"inlineStyleRanges":[],"entityRanges":[]},{"key":"8103","type":"unstyled","text":"倍。在逻辑斯谛回归的扩展阅读中,我们将这一类模型都归为广义线性模型。然而,此类模型所做的线性假设在许多任务上并不适用,我们需要其他参数假设来导出更合适的模型。本文首先讲解在推荐系统领域很常用的双线性模型(bilinear model)。","depth":0,"inlineStyleRanges":[],"entityRanges":[{"key":3,"offset":3,"length":6}]},{"key":"8104","type":"unstyled","text":"  双线性模型虽然名称中包含“线性模型”,但并不属于线性模型或广义线性模型,其正确的理解应当是“双线性”模型。在数学中,双线性的含义为,二元函数固定任意一个自变量时,函数关于另一个自变量线性。具体来说,二元函数 ","depth":0,"inlineStyleRanges":[],"entityRanges":[]},{"type":"atomic","text":"f \\colon \\mathbb{R}^n \\times \\mathbb{R}^m \\to \\mathbb{R}^l","data":{"mathjax":true,"teX":"f \\colon \\mathbb{R}^n \\times \\mathbb{R}^m \\to \\mathbb{R}^l"},"inlineStyleRanges":[],"entityRanges":[]},{"key":"8106","type":"unstyled","text":" 是双线性函数,当且仅当对任意 ","depth":0,"inlineStyleRanges":[],"entityRanges":[]},{"type":"atomic","text":"\\boldsymbol u, \\boldsymbol v \\in \\mathbb{R}^n, \\boldsymbol s, \\boldsymbol t \\in \\mathbb{R}^m, \\lambda \\in \\mathbb{R}","data":{"mathjax":true,"teX":"\\boldsymbol u, \\boldsymbol v \\in \\mathbb{R}^n, \\boldsymbol s, \\boldsymbol t \\in \\mathbb{R}^m, \\lambda \\in \\mathbb{R}"},"inlineStyleRanges":[],"entityRanges":[]},{"key":"8108","type":"unstyled","text":" 都有:","depth":0,"inlineStyleRanges":[],"entityRanges":[]},{"key":"8109","type":"ordered-list-item","text":"​","depth":0,"inlineStyleRanges":[],"entityRanges":[]},{"type":"atomic","text":"f(\\boldsymbol u, \\boldsymbol s + \\boldsymbol t) = f(\\boldsymbol u, \\boldsymbol s) + f(\\boldsymbol u, \\boldsymbol t)","data":{"mathjax":true,"teX":"f(\\boldsymbol u, \\boldsymbol s + \\boldsymbol t) = f(\\boldsymbol u, \\boldsymbol s) + f(\\boldsymbol u, \\boldsymbol t)"},"inlineStyleRanges":[],"entityRanges":[]},{"key":"8111","type":"ordered-list-item","text":"​","depth":0,"inlineStyleRanges":[],"entityRanges":[]},{"type":"atomic","text":"f(\\boldsymbol u, \\lambda \\boldsymbol s) = \\lambda f(\\boldsymbol u, \\boldsymbol s)","data":{"mathjax":true,"teX":"f(\\boldsymbol u, \\lambda \\boldsymbol s) = \\lambda f(\\boldsymbol u, \\boldsymbol s)"},"inlineStyleRanges":[],"entityRanges":[]},{"key":"8113","type":"ordered-list-item","text":"​","depth":0,"inlineStyleRanges":[],"entityRanges":[]},{"type":"atomic","text":"f(\\boldsymbol u + \\boldsymbol v, \\boldsymbol s) = f(\\boldsymbol u, \\boldsymbol s) + f(\\boldsymbol v, \\boldsymbol s)","data":{"mathjax":true,"teX":"f(\\boldsymbol u + \\boldsymbol v, \\boldsymbol s) = f(\\boldsymbol u, \\boldsymbol s) + f(\\boldsymbol v, \\boldsymbol s)"},"inlineStyleRanges":[],"entityRanges":[]},{"key":"8115","type":"ordered-list-item","text":"​","depth":0,"inlineStyleRanges":[],"entityRanges":[]},{"type":"atomic","text":"f(\\lambda \\boldsymbol u, \\boldsymbol s) = \\lambda f(\\boldsymbol u, \\boldsymbol s)","data":{"mathjax":true,"teX":"f(\\lambda \\boldsymbol u, \\boldsymbol s) = \\lambda f(\\boldsymbol u, \\boldsymbol s)"},"inlineStyleRanges":[],"entityRanges":[]},{"key":"8117","type":"unstyled","text":"  最简单的双线性函数的例子是向量内积 ","depth":0,"inlineStyleRanges":[],"entityRanges":[]},{"type":"atomic","text":"\\langle \\cdot, \\cdot \\rangle","data":{"mathjax":true,"teX":"\\langle \\cdot, \\cdot \\rangle"},"inlineStyleRanges":[],"entityRanges":[]},{"key":"8119","type":"unstyled","text":",我们按定义验证前两条性质:","depth":0,"inlineStyleRanges":[],"entityRanges":[]},{"key":"8120","type":"unordered-list-item","text":"​","depth":0,"inlineStyleRanges":[],"entityRanges":[]},{"type":"atomic","text":"\\small\\langle \\boldsymbol u, \\boldsymbol s + \\boldsymbol t \\rangle = \\sum_i u_i(s_i+t_i) = \\sum_i(u_is_i + u_it_i) = \\sum_i u_is_i + \\sum_i u_it_i = \\langle \\boldsymbol u,\\boldsymbol s \\rangle + \\langle \\boldsymbol u, \\boldsymbol t\\rangle","data":{"mathjax":true,"teX":"\\small\\langle \\boldsymbol u, \\boldsymbol s + \\boldsymbol t \\rangle = \\sum_i u_i(s_i+t_i) = \\sum_i(u_is_i + u_it_i) = \\sum_i u_is_i + \\sum_i u_it_i = \\langle \\boldsymbol u,\\boldsymbol s \\rangle + \\langle \\boldsymbol u, \\boldsymbol t\\rangle"},"inlineStyleRanges":[],"entityRanges":[]},{"key":"8122","type":"unordered-list-item","text":"​","depth":0,"inlineStyleRanges":[],"entityRanges":[]},{"type":"atomic","text":"\\small\\langle \\boldsymbol u, \\lambda \\boldsymbol s \\rangle = \\sum_i u_i(\\lambda s_i) = \\lambda \\sum_i u_is_i = \\lambda \\langle \\boldsymbol u, \\boldsymbol s \\rangle","data":{"mathjax":true,"teX":"\\small\\langle \\boldsymbol u, \\lambda \\boldsymbol s \\rangle = \\sum_i u_i(\\lambda s_i) = \\lambda \\sum_i u_is_i = \\lambda \\langle \\boldsymbol u, \\boldsymbol s \\rangle"},"inlineStyleRanges":[],"entityRanges":[]},{"key":"8124","type":"unstyled","text":"后两条性质由对称性,显然也是成立的。而向量的加法就不是双线性函数。虽然加法满足第1、3条性质,但对第2条,如果 ","depth":0,"inlineStyleRanges":[],"entityRanges":[]},{"type":"atomic","text":"\\boldsymbol u \\neq \\boldsymbol 0","data":{"mathjax":true,"teX":"\\boldsymbol u \\neq \\boldsymbol 0"},"inlineStyleRanges":[],"entityRanges":[]},{"key":"8126","type":"unstyled","text":" 且 ","depth":0,"inlineStyleRanges":[],"entityRanges":[]},{"type":"atomic","text":"\\lambda\\neq 1","data":{"mathjax":true,"teX":"\\lambda\\neq 1"},"inlineStyleRanges":[],"entityRanges":[]},{"key":"8128","type":"unstyled","text":",则有","depth":0,"inlineStyleRanges":[],"entityRanges":[]},{"type":"atomic","text":"\\boldsymbol u + \\lambda \\boldsymbol s \\neq \\lambda (\\boldsymbol u + \\boldsymbol s)","data":{"mathjax":true,"teX":"\\boldsymbol u + \\lambda \\boldsymbol s \\neq \\lambda (\\boldsymbol u + \\boldsymbol s)"},"inlineStyleRanges":[],"entityRanges":[]},{"key":"8130","type":"unstyled","text":"  与线性模型类似,双线性模型并非指模型整体具有双线性性质,而是指其包含双线性因子。该特性赋予模型拟合一些非线性数据模式的能力,从而得到更加精准预测性能。接下来,我们以推荐系统场景为例,介绍两个基础的双线性模型:矩阵分解模型和因子分解机。","depth":0,"inlineStyleRanges":[],"entityRanges":[]},{"key":"8131","type":"header-three","text":"一、矩阵分解","depth":0,"inlineStyleRanges":[],"entityRanges":[],"data":{"text":"%E4%B8%80%E3%80%81%E7%9F%A9%E9%98%B5%E5%88%86%E8%A7%A3"}},{"key":"8132","type":"unstyled","text":" 矩阵分解(matrix factorization,MF)是推荐系统中评分预测(rating prediction)的常用模型,其任务为根据用户和商品已有的评分来预测用户对其他商品的评分。为了更清晰地解释MF模型的任务场景,我们以用户对电影的评分为例进行详细说明。如图1所示,设想有","depth":0,"inlineStyleRanges":[],"entityRanges":[]},{"type":"atomic","text":"N","data":{"mathjax":true,"teX":"N"},"inlineStyleRanges":[],"entityRanges":[]},{"key":"8134","type":"unstyled","text":"个用户和","depth":0,"inlineStyleRanges":[],"entityRanges":[]},{"type":"atomic","text":"M","data":{"mathjax":true,"teX":"M"},"inlineStyleRanges":[],"entityRanges":[]},{"key":"8136","type":"unstyled","text":"部电影,每个用户对一些电影按自己的喜好给出了评分。现在,我们的目标是需要为用户从他没有看过的电影中,向他推荐几部他最有可能喜欢看的电影。理想情况下,如果这个用户对所有电影都给出了评分,那么这个任务就变为从已有评分的电影中进行推荐——直接按照用户打分的高低排序。但实际情况下,在浩如烟海的电影中,用户一般只对很小一部分电影做了评价。因此,我们需要从用户已经做出的评价中推测用户为其他电影的打分,再将电影按推测的打分排序,从中选出最高的几部推荐给该用户。","depth":0,"inlineStyleRanges":[],"entityRanges":[]},{"key":"8137","type":"atomic","text":"图片","depth":0,"inlineStyleRanges":[],"entityRanges":[{"key":4,"offset":0,"length":4}]},{"key":"8138","type":"unstyled","text":" 图1 用户对电影的评分矩阵 ","depth":0,"inlineStyleRanges":[],"entityRanges":[]},{"key":"8139","type":"unstyled","text":"  我们继续从生活经验出发来思考这一问题。假设某用户为一部电影打了高分,那么可以合理猜测,该用户喜欢这部电影的某些特征。例如,电影的类型是悬疑、爱情、战争或是其他种类;演员、导演和出品方分别是哪些;叙述的故事发生在什么年代;时长是多少,等等。假如我们有一个电影特征库,可以将每部电影用一个特征向量表示。向量的每一维代表一种特征,值代表电影具有这一特征的程度。同时,我们还可以构建一个用户画像库,包含每个用户更偏好哪些类型的特征,以及偏好的程度。假设特征的个数是","depth":0,"inlineStyleRanges":[],"entityRanges":[]},{"type":"atomic","text":"d","data":{"mathjax":true,"teX":"d"},"inlineStyleRanges":[],"entityRanges":[]},{"key":"8141","type":"unstyled","text":",那么所有电影的特征构成的矩阵是 ","depth":0,"inlineStyleRanges":[],"entityRanges":[]},{"type":"atomic","text":"\\boldsymbol P \\in \\mathbb{R}^{M \\times d}","data":{"mathjax":true,"teX":"\\boldsymbol P \\in \\mathbb{R}^{M \\times d}"},"inlineStyleRanges":[],"entityRanges":[]},{"key":"8143","type":"unstyled","text":",用户喜好构成的矩阵是 ","depth":0,"inlineStyleRanges":[],"entityRanges":[]},{"type":"atomic","text":"\\boldsymbol Q \\in \\mathbb{R}^{N \\times d}","data":{"mathjax":true,"teX":"\\boldsymbol Q \\in \\mathbb{R}^{N \\times d}"},"inlineStyleRanges":[],"entityRanges":[]},{"key":"8145","type":"unstyled","text":"。图2给出了两个矩阵的示例。","depth":0,"inlineStyleRanges":[],"entityRanges":[]},{"key":"8146","type":"atomic","text":"图片","depth":0,"inlineStyleRanges":[],"entityRanges":[{"key":5,"offset":0,"length":4}]},{"key":"8147","type":"unstyled","text":" 图2 电影和用户的隐变量矩阵 ","depth":0,"inlineStyleRanges":[],"entityRanges":[]},{"key":"8148","type":"unstyled","text":"  需要说明的是,我们实际上分解出的矩阵只是某种交互结果背后的隐变量,并不一定对应真实的特征。这样,我们就把一个用户与电影交互的矩阵拆分成了用户、电影两个矩阵,并且这两个矩阵中包含了更多的信息。最后,用这两个矩阵的乘积 ","depth":0,"inlineStyleRanges":[],"entityRanges":[]},{"type":"atomic","text":"\\boldsymbol R = \\boldsymbol P^\\mathrm{T} \\boldsymbol Q","data":{"mathjax":true,"teX":"\\boldsymbol R = \\boldsymbol P^\\mathrm{T} \\boldsymbol Q"},"inlineStyleRanges":[],"entityRanges":[]},{"key":"8150","type":"unstyled","text":" 可以还原出用户对电影的评分。即使用户对某部电影并没有打分,我们也能通过矩阵乘积,根据用户喜欢的特征和该电影具有的特征,预测出用户对电影的喜好程度。","depth":0,"inlineStyleRanges":[],"entityRanges":[]},{"key":"8151","type":"blockquote","text":" 小故事\n   矩阵分解和下面要介绍的因子分解机都属于推荐系统(recommender system)领域的算法。我们在日常使用软件、浏览网站的时候,软件或网站会记录下来我们感兴趣的内容,并在接下来更多地为我们推送同类型的内容。例如,如果我们在购物网站上浏览过牙刷,它就可能再给我们推荐牙刷、毛巾、脸盆等等相关性比较大的商品,这就是推荐系统的作用。推荐系统希望根据用户的特征、商品的特征、用户和商品的交互历史,为用户做出更符合个人喜好的个性化推荐,提高用户的浏览体验,同时为公司带来更高的经济效益。\n   机器学习界开始大量关注推荐系统任务源自美国奈飞电影公司(Netflix)于2006年举办的世界范围的推荐系统算法大赛。该比赛旨在探寻一种算法能更加精确地预测48万名用户对1.7万部电影的打分,如果某个参赛队伍给出的评分预测精度超过了基线算法10%,就可以获得100万美元的奖金。该竞赛在1年来吸引了来自全球186个国家的超过4万支队伍的参加,经过3年的“马拉松”竞赛,最终由一支名为BellKor’s Pragmatic Chaos的联合团队摘得桂冠。而团队中时任雅虎研究员的耶胡达·科伦(Yehuda Koren)则在后来成为了推荐系统领域最为著名的科学家之一,他使用的基于矩阵分解的双线性模型则成为了那个时代推荐系统的主流模型。\n ","depth":0,"inlineStyleRanges":[{"offset":1,"length":3,"style":"BOLD"}],"entityRanges":[]},{"key":"8152","type":"unstyled","text":"  实际上,我们通常能获取到的并不是","depth":0,"inlineStyleRanges":[],"entityRanges":[]},{"type":"atomic","text":"\\boldsymbol P","data":{"mathjax":true,"teX":"\\boldsymbol P"},"inlineStyleRanges":[],"entityRanges":[]},{"key":"8154","type":"unstyled","text":"和","depth":0,"inlineStyleRanges":[],"entityRanges":[]},{"type":"atomic","text":"\\boldsymbol Q","data":{"mathjax":true,"teX":"\\boldsymbol Q"},"inlineStyleRanges":[],"entityRanges":[]},{"key":"8156","type":"unstyled","text":",而是打分的结果","depth":0,"inlineStyleRanges":[],"entityRanges":[]},{"type":"atomic","text":"\\boldsymbol R","data":{"mathjax":true,"teX":"\\boldsymbol R"},"inlineStyleRanges":[],"entityRanges":[]},{"key":"8158","type":"unstyled","text":"。并且由于一个用户只会对极其有限的一部分电影打分,矩阵","depth":0,"inlineStyleRanges":[],"entityRanges":[]},{"type":"atomic","text":"\\boldsymbol R","data":{"mathjax":true,"teX":"\\boldsymbol R"},"inlineStyleRanges":[],"entityRanges":[]},{"key":"8160","type":"unstyled","text":"是非常稀疏的,绝大多数元素都是空白。因此,我们需要从","depth":0,"inlineStyleRanges":[],"entityRanges":[]},{"type":"atomic","text":"\\boldsymbol R","data":{"mathjax":true,"teX":"\\boldsymbol R"},"inlineStyleRanges":[],"entityRanges":[]},{"key":"8162","type":"unstyled","text":"有限的元素中推测出用户的喜好","depth":0,"inlineStyleRanges":[],"entityRanges":[]},{"type":"atomic","text":"\\boldsymbol P","data":{"mathjax":true,"teX":"\\boldsymbol P"},"inlineStyleRanges":[],"entityRanges":[]},{"key":"8164","type":"unstyled","text":"和电影的特征","depth":0,"inlineStyleRanges":[],"entityRanges":[]},{"type":"atomic","text":"\\boldsymbol Q","data":{"mathjax":true,"teX":"\\boldsymbol Q"},"inlineStyleRanges":[],"entityRanges":[]},{"key":"8166","type":"unstyled","text":"。MF模型利用矩阵分解的技巧完成了这一任务。设第","depth":0,"inlineStyleRanges":[],"entityRanges":[]},{"type":"atomic","text":"i","data":{"mathjax":true,"teX":"i"},"inlineStyleRanges":[],"entityRanges":[]},{"key":"8168","type":"unstyled","text":"个用户的偏好向量是","depth":0,"inlineStyleRanges":[],"entityRanges":[]},{"type":"atomic","text":"\\boldsymbol p_i","data":{"mathjax":true,"teX":"\\boldsymbol p_i"},"inlineStyleRanges":[],"entityRanges":[]},{"key":"8170","type":"unstyled","text":",第","depth":0,"inlineStyleRanges":[],"entityRanges":[]},{"type":"atomic","text":"j","data":{"mathjax":true,"teX":"j"},"inlineStyleRanges":[],"entityRanges":[]},{"key":"8172","type":"unstyled","text":"部电影的特征向量是","depth":0,"inlineStyleRanges":[],"entityRanges":[]},{"type":"atomic","text":"\\boldsymbol q_j","data":{"mathjax":true,"teX":"\\boldsymbol q_j"},"inlineStyleRanges":[],"entityRanges":[]},{"key":"8174","type":"unstyled","text":",其维度都是特征数","depth":0,"inlineStyleRanges":[],"entityRanges":[]},{"type":"atomic","text":"d","data":{"mathjax":true,"teX":"d"},"inlineStyleRanges":[],"entityRanges":[]},{"key":"8176","type":"unstyled","text":"。MF假设用户","depth":0,"inlineStyleRanges":[],"entityRanges":[]},{"type":"atomic","text":"i","data":{"mathjax":true,"teX":"i"},"inlineStyleRanges":[],"entityRanges":[]},{"key":"8178","type":"unstyled","text":"对电影","depth":0,"inlineStyleRanges":[],"entityRanges":[]},{"type":"atomic","text":"j","data":{"mathjax":true,"teX":"j"},"inlineStyleRanges":[],"entityRanges":[]},{"key":"8180","type":"unstyled","text":"的评分","depth":0,"inlineStyleRanges":[],"entityRanges":[]},{"type":"atomic","text":"r_{ij}","data":{"mathjax":true,"teX":"r_{ij}"},"inlineStyleRanges":[],"entityRanges":[]},{"key":"8182","type":"unstyled","text":"是用户偏好与电影特征的内积,即 ","depth":0,"inlineStyleRanges":[],"entityRanges":[]},{"type":"atomic","text":"r_{ij} = \\boldsymbol p_i^\\mathrm{T}\\boldsymbol q_j","data":{"mathjax":true,"teX":"r_{ij} = \\boldsymbol p_i^\\mathrm{T}\\boldsymbol q_j"},"inlineStyleRanges":[],"entityRanges":[]},{"key":"8184","type":"unstyled","text":"。在本文开始已经讲过,向量内积是双线性函数,这也是MF模型属于双线性模型的原因。","depth":0,"inlineStyleRanges":[],"entityRanges":[]},{"key":"8185","type":"unstyled","text":"  既然MF的目标是通过特征还原评分矩阵","depth":0,"inlineStyleRanges":[],"entityRanges":[]},{"type":"atomic","text":"\\boldsymbol R","data":{"mathjax":true,"teX":"\\boldsymbol R"},"inlineStyleRanges":[],"entityRanges":[]},{"key":"8187","type":"unstyled","text":",我们就以还原结果和","depth":0,"inlineStyleRanges":[],"entityRanges":[]},{"type":"atomic","text":"\\boldsymbol R","data":{"mathjax":true,"teX":"\\boldsymbol R"},"inlineStyleRanges":[],"entityRanges":[]},{"key":"8189","type":"unstyled","text":"中已知部分的差距作为损失函数。记 ","depth":0,"inlineStyleRanges":[],"entityRanges":[]},{"type":"atomic","text":"I_{ij} = \\mathbb{I}(r_{ij}\\text{存在})","data":{"mathjax":true,"teX":"I_{ij} = \\mathbb{I}(r_{ij}\\text{存在})"},"inlineStyleRanges":[],"entityRanges":[]},{"key":"8191","type":"unstyled","text":",即当用户为电影打过分时","depth":0,"inlineStyleRanges":[],"entityRanges":[]},{"type":"atomic","text":"I_{ij}","data":{"mathjax":true,"teX":"I_{ij}"},"inlineStyleRanges":[],"entityRanges":[]},{"key":"8193","type":"unstyled","text":"为","depth":0,"inlineStyleRanges":[],"entityRanges":[]},{"type":"atomic","text":"1","data":{"mathjax":true,"teX":"1"},"inlineStyleRanges":[],"entityRanges":[]},{"key":"8195","type":"unstyled","text":",否则为","depth":0,"inlineStyleRanges":[],"entityRanges":[]},{"type":"atomic","text":"0","data":{"mathjax":true,"teX":"0"},"inlineStyleRanges":[],"entityRanges":[]},{"key":"8197","type":"unstyled","text":"。那么损失函数可以写为","depth":0,"inlineStyleRanges":[],"entityRanges":[]},{"type":"atomic","text":"J(\\boldsymbol P, \\boldsymbol Q) = \\sum_{i=1}^N\\sum_{j=1}^M I_{ij}\\mathcal{L}(\\boldsymbol p_i^\\mathrm{T}\\boldsymbol q_j, r_{ij})","data":{"mathjax":true,"teX":"J(\\boldsymbol P, \\boldsymbol Q) = \\sum_{i=1}^N\\sum_{j=1}^M I_{ij}\\mathcal{L}(\\boldsymbol p_i^\\mathrm{T}\\boldsymbol q_j, r_{ij})"},"inlineStyleRanges":[],"entityRanges":[]},{"key":"8199","type":"unstyled","text":" 式中,","depth":0,"inlineStyleRanges":[],"entityRanges":[]},{"type":"atomic","text":"\\mathcal{L}(\\boldsymbol p_i^\\mathrm{T}\\boldsymbol q_j, r_{ij})","data":{"mathjax":true,"teX":"\\mathcal{L}(\\boldsymbol p_i^\\mathrm{T}\\boldsymbol q_j, r_{ij})"},"inlineStyleRanges":[],"entityRanges":[]},{"key":"8201","type":"unstyled","text":" 是模型预测和真实值之间的损失。一般情况下,我们就选用最简单的MSE作为损失,那么优化目标为","depth":0,"inlineStyleRanges":[],"entityRanges":[]},{"type":"atomic","text":"\\min_{\\boldsymbol P, \\boldsymbol Q} J(\\boldsymbol P, \\boldsymbol Q) = \\frac12\\sum_{i=1}^N\\sum_{j=1}^M I_{ij} (\\boldsymbol p_i^\\mathrm{T}\\boldsymbol q_j - r_{ij})^2","data":{"mathjax":true,"teX":"\\min_{\\boldsymbol P, \\boldsymbol Q} J(\\boldsymbol P, \\boldsymbol Q) = \\frac12\\sum_{i=1}^N\\sum_{j=1}^M I_{ij} (\\boldsymbol p_i^\\mathrm{T}\\boldsymbol q_j - r_{ij})^2"},"inlineStyleRanges":[],"entityRanges":[]},{"key":"8203","type":"unstyled","text":" 再加入对","depth":0,"inlineStyleRanges":[],"entityRanges":[]},{"type":"atomic","text":"\\boldsymbol P","data":{"mathjax":true,"teX":"\\boldsymbol P"},"inlineStyleRanges":[],"entityRanges":[]},{"key":"8205","type":"unstyled","text":"和","depth":0,"inlineStyleRanges":[],"entityRanges":[]},{"type":"atomic","text":"\\boldsymbol Q","data":{"mathjax":true,"teX":"\\boldsymbol Q"},"inlineStyleRanges":[],"entityRanges":[]},{"key":"8207","type":"unstyled","text":"的","depth":0,"inlineStyleRanges":[],"entityRanges":[]},{"type":"atomic","text":"L_2","data":{"mathjax":true,"teX":"L_2"},"inlineStyleRanges":[],"entityRanges":[]},{"key":"8209","type":"unstyled","text":"正则化约束,就得到总的优化目标:","depth":0,"inlineStyleRanges":[],"entityRanges":[]},{"type":"atomic","text":"\\min_{\\boldsymbol P, \\boldsymbol Q} J(\\boldsymbol P, \\boldsymbol Q) = \\frac12\\sum_{i=1}^N\\sum_{j=1}^M I_{ij} \\left((\\boldsymbol p_i^\\mathrm{T}\\boldsymbol q_j - r_{ij})^2 + \\lambda(\\|\\boldsymbol p_i\\|^2 +\\\\|\\boldsymbol q_j\\|^2)\\right)","data":{"mathjax":true,"teX":"\\min_{\\boldsymbol P, \\boldsymbol Q} J(\\boldsymbol P, \\boldsymbol Q) = \\frac12\\sum_{i=1}^N\\sum_{j=1}^M I_{ij} \\left((\\boldsymbol p_i^\\mathrm{T}\\boldsymbol q_j - r_{ij})^2 + \\lambda(\\|\\boldsymbol p_i\\|^2 +\\\\|\\boldsymbol q_j\\|^2)\\right)"},"inlineStyleRanges":[],"entityRanges":[]},{"key":"8211","type":"unstyled","text":" 需要注意,这里的","depth":0,"inlineStyleRanges":[],"entityRanges":[]},{"type":"atomic","text":"L_2","data":{"mathjax":true,"teX":"L_2"},"inlineStyleRanges":[],"entityRanges":[]},{"key":"8213","type":"unstyled","text":"约束并非对整个矩阵","depth":0,"inlineStyleRanges":[],"entityRanges":[]},{"type":"atomic","text":"\\boldsymbol P","data":{"mathjax":true,"teX":"\\boldsymbol P"},"inlineStyleRanges":[],"entityRanges":[]},{"key":"8215","type":"unstyled","text":"或者","depth":0,"inlineStyleRanges":[],"entityRanges":[]},{"type":"atomic","text":"\\boldsymbol Q","data":{"mathjax":true,"teX":"\\boldsymbol Q"},"inlineStyleRanges":[],"entityRanges":[]},{"key":"8217","type":"unstyled","text":"而言。我们知道,正则化的目的是通过限制参数的规模来约束模型的复杂度,使模型的复杂度与数据中包含的信息相匹配。以用户为例,假设不同用户直接的评分是独立的。如果用户甲给10部电影打了分,用户乙给2部电影打了分,那么数据中关于甲的信息就比乙多。反映到正则化上,对甲的参数的约束强度也应当比乙大。因此,总损失函数中","depth":0,"inlineStyleRanges":[],"entityRanges":[]},{"type":"atomic","text":"\\boldsymbol p_i","data":{"mathjax":true,"teX":"\\boldsymbol p_i"},"inlineStyleRanges":[],"entityRanges":[]},{"key":"8219","type":"unstyled","text":"的正则化系数是","depth":0,"inlineStyleRanges":[],"entityRanges":[]},{"type":"atomic","text":"\\frac{\\lambda}{2}\\sum\\limits_{j=1}^M I_{ij}","data":{"mathjax":true,"teX":"\\frac{\\lambda}{2}\\sum\\limits_{j=1}^M I_{ij}"},"inlineStyleRanges":[],"entityRanges":[]},{"key":"8221","type":"unstyled","text":",在","depth":0,"inlineStyleRanges":[],"entityRanges":[]},{"type":"atomic","text":"\\frac{\\lambda}{2}","data":{"mathjax":true,"teX":"\\frac{\\lambda}{2}"},"inlineStyleRanges":[],"entityRanges":[]},{"key":"8223","type":"unstyled","text":"的基础上又乘以用户","depth":0,"inlineStyleRanges":[],"entityRanges":[]},{"type":"atomic","text":"i","data":{"mathjax":true,"teX":"i"},"inlineStyleRanges":[],"entityRanges":[]},{"key":"8225","type":"unstyled","text":"评分的数量。对电影向量","depth":0,"inlineStyleRanges":[],"entityRanges":[]},{"type":"atomic","text":"\\boldsymbol q_j","data":{"mathjax":true,"teX":"\\boldsymbol q_j"},"inlineStyleRanges":[],"entityRanges":[]},{"key":"8227","type":"unstyled","text":"也是同理。上式对","depth":0,"inlineStyleRanges":[],"entityRanges":[]},{"type":"atomic","text":"\\boldsymbol p_{ik}","data":{"mathjax":true,"teX":"\\boldsymbol p_{ik}"},"inlineStyleRanges":[],"entityRanges":[]},{"key":"8229","type":"unstyled","text":"和","depth":0,"inlineStyleRanges":[],"entityRanges":[]},{"type":"atomic","text":"\\boldsymbol q_{jk}","data":{"mathjax":true,"teX":"\\boldsymbol q_{jk}"},"inlineStyleRanges":[],"entityRanges":[]},{"key":"8231","type":"unstyled","text":"的梯度分别为","depth":0,"inlineStyleRanges":[],"entityRanges":[]},{"type":"atomic","text":"\\begin{aligned} \\nabla_{\\boldsymbol p_{ik}} J(\\boldsymbol P, \\boldsymbol Q) \u0026= I_{ij} \\left((\\boldsymbol p_i^\\mathrm{T}\\boldsymbol q_j - r_{ij})\\boldsymbol q_{jk} + \\lambda\\boldsymbol p_{ik} \\right) \\\\[1ex] \\nabla_{\\boldsymbol q_{jk}} J(\\boldsymbol P, \\boldsymbol Q) \u0026= I_{ij} \\left((\\boldsymbol p_i^\\mathrm{T}\\boldsymbol q_j - r_{ij})\\boldsymbol p_{ik} + \\lambda\\boldsymbol q_{jk} \\right) \\end{aligned}","data":{"mathjax":true,"teX":"\\begin{aligned} \\nabla_{\\boldsymbol p_{ik}} J(\\boldsymbol P, \\boldsymbol Q) \u0026= I_{ij} \\left((\\boldsymbol p_i^\\mathrm{T}\\boldsymbol q_j - r_{ij})\\boldsymbol q_{jk} + \\lambda\\boldsymbol p_{ik} \\right) \\\\[1ex] \\nabla_{\\boldsymbol q_{jk}} J(\\boldsymbol P, \\boldsymbol Q) \u0026= I_{ij} \\left((\\boldsymbol p_i^\\mathrm{T}\\boldsymbol q_j - r_{ij})\\boldsymbol p_{ik} + \\lambda\\boldsymbol q_{jk} \\right) \\end{aligned}"},"inlineStyleRanges":[],"entityRanges":[]},{"key":"8233","type":"unstyled","text":"可以发现,上面","depth":0,"inlineStyleRanges":[],"entityRanges":[]},{"type":"atomic","text":"\\boldsymbol p_{ik}","data":{"mathjax":true,"teX":"\\boldsymbol p_{ik}"},"inlineStyleRanges":[],"entityRanges":[]},{"key":"8235","type":"unstyled","text":"梯度中含有","depth":0,"inlineStyleRanges":[],"entityRanges":[]},{"type":"atomic","text":"\\boldsymbol q_{jk}","data":{"mathjax":true,"teX":"\\boldsymbol q_{jk}"},"inlineStyleRanges":[],"entityRanges":[]},{"key":"8237","type":"unstyled","text":",而","depth":0,"inlineStyleRanges":[],"entityRanges":[]},{"type":"atomic","text":"\\boldsymbol q_{jk}","data":{"mathjax":true,"teX":"\\boldsymbol q_{jk}"},"inlineStyleRanges":[],"entityRanges":[]},{"key":"8239","type":"unstyled","text":"的梯度中含有","depth":0,"inlineStyleRanges":[],"entityRanges":[]},{"type":"atomic","text":"\\boldsymbol p_{ik}","data":{"mathjax":true,"teX":"\\boldsymbol p_{ik}"},"inlineStyleRanges":[],"entityRanges":[]},{"key":"8241","type":"unstyled","text":",两者互相包含,这是由双线性函数的性质决定的,也是双线性模型的一个重要特点。","depth":0,"inlineStyleRanges":[],"entityRanges":[]},{"key":"8242","type":"header-three","text":"二、动手实现矩阵分解","depth":0,"inlineStyleRanges":[],"entityRanges":[],"data":{"text":"%E4%BA%8C%E3%80%81%E5%8A%A8%E6%89%8B%E5%AE%9E%E7%8E%B0%E7%9F%A9%E9%98%B5%E5%88%86%E8%A7%A3"}},{"key":"8243","type":"unstyled","text":"  下面,我们来动手实现矩阵分解模型。我们选用的数据集是推荐系统中的常用数据集MovieLens,其包含从电影评价网站MovieLens中收集的真实用户对电影的打分信息。简单起见,我们采用其包含来自943个用户对1682部电影的10万条样本的版本MovieLens-100k。我们对原始的数据进行了一些处理,现在数据集的每一行有3个数,依次表示用户编号","depth":0,"inlineStyleRanges":[],"entityRanges":[{"key":6,"offset":59,"length":9}]},{"type":"atomic","text":"i","data":{"mathjax":true,"teX":"i"},"inlineStyleRanges":[],"entityRanges":[]},{"key":"8245","type":"unstyled","text":"、电影编号","depth":0,"inlineStyleRanges":[],"entityRanges":[]},{"type":"atomic","text":"j","data":{"mathjax":true,"teX":"j"},"inlineStyleRanges":[],"entityRanges":[]},{"key":"8247","type":"unstyled","text":"、用户对电影的打分","depth":0,"inlineStyleRanges":[],"entityRanges":[]},{"type":"atomic","text":"r_{ij}","data":{"mathjax":true,"teX":"r_{ij}"},"inlineStyleRanges":[],"entityRanges":[]},{"key":"8249","type":"unstyled","text":",其中 ","depth":0,"inlineStyleRanges":[],"entityRanges":[]},{"type":"atomic","text":"1\\le r_{ij}\\le5","data":{"mathjax":true,"teX":"1\\le r_{ij}\\le5"},"inlineStyleRanges":[],"entityRanges":[]},{"key":"8251","type":"unstyled","text":" 且三者都是整数。表1展示了数据集movielens_100k.csv中的3个样本,大家也可以从网站上下载更大的数据集,测试模型的预测效果。","depth":0,"inlineStyleRanges":[{"offset":17,"length":18,"style":"CODE"}],"entityRanges":[]},{"key":"8252","type":"unstyled","text":" 表1 MovieLens-100k数据集示例 ","depth":0,"inlineStyleRanges":[],"entityRanges":[]},{"type":"table","data":{"aligns":[{"key":"8254","align":"left"},{"key":"8255","align":"left"},{"key":"8256","align":"left"}],"rows":[{"key":"8257","cells":[{"key":"8258","raw":{"blocks":[{"key":"8259","type":"unstyled","text":"用户编号","depth":0,"inlineStyleRanges":[],"entityRanges":[]}],"entityMap":{}}},{"key":"8260","raw":{"blocks":[{"key":"8261","type":"unstyled","text":"电影编号","depth":0,"inlineStyleRanges":[],"entityRanges":[]}],"entityMap":{}}},{"key":"8262","raw":{"blocks":[{"key":"8263","type":"unstyled","text":"评分","depth":0,"inlineStyleRanges":[],"entityRanges":[]}],"entityMap":{}}}]},{"key":"8264","cells":[{"key":"8265","raw":{"blocks":[{"key":"8266","type":"unstyled","text":"196","depth":0,"inlineStyleRanges":[],"entityRanges":[]}],"entityMap":{}}},{"key":"8267","raw":{"blocks":[{"key":"8268","type":"unstyled","text":"242","depth":0,"inlineStyleRanges":[],"entityRanges":[]}],"entityMap":{}}},{"key":"8269","raw":{"blocks":[{"key":"8270","type":"unstyled","text":"3","depth":0,"inlineStyleRanges":[],"entityRanges":[]}],"entityMap":{}}}]},{"key":"8271","cells":[{"key":"8272","raw":{"blocks":[{"key":"8273","type":"unstyled","text":"186","depth":0,"inlineStyleRanges":[],"entityRanges":[]}],"entityMap":{}}},{"key":"8274","raw":{"blocks":[{"key":"8275","type":"unstyled","text":"302","depth":0,"inlineStyleRanges":[],"entityRanges":[]}],"entityMap":{}}},{"key":"8276","raw":{"blocks":[{"key":"8277","type":"unstyled","text":"3","depth":0,"inlineStyleRanges":[],"entityRanges":[]}],"entityMap":{}}}]},{"key":"8278","cells":[{"key":"8279","raw":{"blocks":[{"key":"8280","type":"unstyled","text":"22","depth":0,"inlineStyleRanges":[],"entityRanges":[]}],"entityMap":{}}},{"key":"8281","raw":{"blocks":[{"key":"8282","type":"unstyled","text":"377","depth":0,"inlineStyleRanges":[],"entityRanges":[]}],"entityMap":{}}},{"key":"8283","raw":{"blocks":[{"key":"8284","type":"unstyled","text":"1","depth":0,"inlineStyleRanges":[],"entityRanges":[]}],"entityMap":{}}}]}]},"text":"[表格]","inlineStyleRanges":[],"entityRanges":[]},{"key":"8285","type":"code-block","text":"!pip install tqdm\nimport numpy as np\nimport matplotlib.pyplot as plt\nfrom tqdm import tqdm # 进度条工具\n\ndata = np.loadtxt('movielens_100k.csv', delimiter=',', dtype=int)\nprint('数据集大小:', len(data))\n# 用户和电影都是从1开始编号的,我们将其转化为从0开始\ndata[:, :2] = data[:, :2] - 1\n\n# 计算用户和电影数量\nusers = set()\nitems = set()\nfor i, j, k in data:\n users.add(i)\n items.add(j)\nuser_num = len(users)\nitem_num = len(items)\nprint(f'用户数:{user_num},电影数:{item_num}')\n\n# 设置随机种子,划分训练集与测试集\nnp.random.seed(0)\n\nratio = 0.8\nsplit = int(len(data) * ratio)\nnp.random.shuffle(data)\ntrain = data[:split]\ntest = data[split:]\n\n# 统计训练集中每个用户和电影出现的数量,作为正则化的权重\nuser_cnt = np.bincount(train[:, 0], minlength=user_num)\nitem_cnt = np.bincount(train[:, 1], minlength=item_num)\nprint(user_cnt[:10])\nprint(item_cnt[:10])\n\n# 用户和电影的编号要作为下标,必须保存为整数\nuser_train, user_test = train[:, 0], test[:, 0]\nitem_train, item_test = train[:, 1], test[:, 1]\ny_train, y_test = train[:, 2], test[:, 2]","depth":0,"inlineStyleRanges":[],"entityRanges":[],"data":{"syntax":"javascript"}},{"key":"8286","type":"atomic","text":"图片","depth":0,"inlineStyleRanges":[],"entityRanges":[{"key":7,"offset":0,"length":4}]},{"key":"8287","type":"unstyled","text":"  然后,我们将MF模型定义成类,在其中实现梯度计算方法。根据上面的推导,模型的参数是用户喜好 ","depth":0,"inlineStyleRanges":[],"entityRanges":[]},{"type":"atomic","text":"\\boldsymbol P \\in \\mathbb{R}^{N\\times d}","data":{"mathjax":true,"teX":"\\boldsymbol P \\in \\mathbb{R}^{N\\times d}"},"inlineStyleRanges":[],"entityRanges":[]},{"key":"8289","type":"unstyled","text":" 和电影特征 ","depth":0,"inlineStyleRanges":[],"entityRanges":[]},{"type":"atomic","text":"\\boldsymbol Q \\in \\mathbb{R}^{M \\times d}","data":{"mathjax":true,"teX":"\\boldsymbol Q \\in \\mathbb{R}^{M \\times d}"},"inlineStyleRanges":[],"entityRanges":[]},{"key":"8291","type":"unstyled","text":",其中特征数","depth":0,"inlineStyleRanges":[],"entityRanges":[]},{"type":"atomic","text":"d","data":{"mathjax":true,"teX":"d"},"inlineStyleRanges":[],"entityRanges":[]},{"key":"8293","type":"unstyled","text":"是我们自己指定的超参数。在参数初始化部分,考虑到最终电影的得分都是正数,我们将参数都初始化为1。","depth":0,"inlineStyleRanges":[],"entityRanges":[]},{"key":"8294","type":"code-block","text":"class MF:\n \n def __init__(self, N, M, d):\n # N是用户数量,M是电影数量,d是特征维度\n # 定义模型参数\n self.user_params = np.ones((N, d))\n self.item_params = np.ones((M, d))\n \n def pred(self, user_id, item_id):\n # 预测用户user_id对电影item_id的打分\n # 获得用户偏好和电影特征\n user_param = self.user_params[user_id]\n item_param = self.item_params[item_id]\n # 返回预测的评分\n rating_pred = np.sum(user_param * item_param, axis=1)\n return rating_pred\n \n def update(self, user_grad, item_grad, lr):\n # 根据参数的梯度更新参数\n self.user_params -= lr * user_grad\n self.item_params -= lr * item_grad","depth":0,"inlineStyleRanges":[],"entityRanges":[],"data":{"syntax":"javascript"}},{"key":"8295","type":"unstyled","text":"  下面定义训练函数,以SGD算法对MF模型的参数进行优化。对于回归任务来说,我们仍然以MSE作为损失函数,RMSE作为的评价指标。在训练的同时,我们将其记录下来,供最终绘制训练曲线使用。","depth":0,"inlineStyleRanges":[],"entityRanges":[]},{"key":"8296","type":"code-block","text":"def train(model, learning_rate, lbd, max_training_step, batch_size):\n train_losses = []\n test_losses = []\n batch_num = int(np.ceil(len(user_train) / batch_size))\n with tqdm(range(max_training_step * batch_num)) as pbar:\n for epoch in range(max_training_step):\n # 随机梯度下降\n train_rmse = 0\n for i in range(batch_num):\n # 获取当前批量\n st = i * batch_size\n ed = min(len(user_train), st + batch_size)\n user_batch = user_train[st: ed]\n item_batch = item_train[st: ed]\n y_batch = y_train[st: ed]\n # 计算模型预测\n y_pred = model.pred(user_batch, item_batch)\n # 计算梯度\n P = model.user_params\n Q = model.item_params\n errs = y_batch - y_pred\n P_grad = np.zeros_like(P)\n Q_grad = np.zeros_like(Q)\n for user, item, err in zip(user_batch, item_batch, errs):\n P_grad[user] = P_grad[user] - err * Q[item] + lbd * P[user]\n Q_grad[item] = Q_grad[item] - err * P[user] + lbd * Q[item]\n model.update(P_grad / len(user_batch), Q_grad / len(user_batch), learning_rate)\n \n train_rmse += np.mean(errs ** 2)\n # 更新进度条\n pbar.set_postfix({\n 'Epoch': epoch,\n 'Train RMSE': f'{np.sqrt(train_rmse / (i + 1)):.4f}',\n 'Test RMSE': f'{test_losses[-1]:.4f}' if test_losses else None\n })\n pbar.update(1)\n\n # 计算 RMSE 损失\n train_rmse = np.sqrt(train_rmse / len(user_train))\n train_losses.append(train_rmse)\n y_test_pred = model.pred(user_test, item_test)\n test_rmse = np.sqrt(np.mean((y_test - y_test_pred) ** 2))\n test_losses.append(test_rmse)\n \n return train_losses, test_losses","depth":0,"inlineStyleRanges":[],"entityRanges":[],"data":{"syntax":"javascript"}},{"key":"8297","type":"unstyled","text":"  最后,我们定义超参数,并实现MF模型的训练部分,并将损失随训练的变化曲线绘制出来。","depth":0,"inlineStyleRanges":[],"entityRanges":[]},{"key":"8298","type":"code-block","text":"# 超参数\nfeature_num = 16 # 特征数\nlearning_rate = 0.1 # 学习率\nlbd = 1e-4 # 正则化强度\nmax_training_step = 30\nbatch_size = 64 # 批量大小\n\n# 建立模型\nmodel = MF(user_num, item_num, feature_num)\n# 训练部分\ntrain_losses, test_losses = train(model, learning_rate, lbd, max_training_step, batch_size)\n\nplt.figure()\nx = np.arange(max_training_step) + 1\nplt.plot(x, train_losses, color='blue', label='train loss')\nplt.plot(x, test_losses, color='red', ls='--', label='test loss')\nplt.xlabel('Epoch')\nplt.ylabel('RMSE')\nplt.legend()\nplt.show()","depth":0,"inlineStyleRanges":[],"entityRanges":[],"data":{"syntax":"javascript"}},{"key":"8299","type":"atomic","text":"图片","depth":0,"inlineStyleRanges":[],"entityRanges":[{"key":8,"offset":0,"length":4}]},{"key":"8300","type":"unstyled","text":"  为了直观地展示模型效果,我们输出一些模型在测试集中的预测结果与真实结果进行对比。上面我们训练得到的模型在测试集上的RMSE大概是1左右,所以这里模型预测的评分与真实评分大致也差1。","depth":0,"inlineStyleRanges":[],"entityRanges":[]},{"key":"8301","type":"code-block","text":"y_test_pred = model.pred(user_test, item_test)\nprint(y_test_pred[:10]) # 把张量转换为numpy数组\nprint(y_test[:10])","depth":0,"inlineStyleRanges":[],"entityRanges":[],"data":{"syntax":"javascript"}},{"key":"8302","type":"atomic","text":"图片","depth":0,"inlineStyleRanges":[],"entityRanges":[{"key":9,"offset":0,"length":4}]},{"key":"8303","type":"header-three","text":"三、因子分解机","depth":0,"inlineStyleRanges":[],"entityRanges":[],"data":{"text":"%E4%B8%89%E3%80%81%E5%9B%A0%E5%AD%90%E5%88%86%E8%A7%A3%E6%9C%BA"}},{"key":"8304","type":"unstyled","text":"  本节我们介绍推荐系统中用户行为预估的另一个常用模型:因子分解机(factorization machines,FM)。FM的应用场景与MF有一些区别,MF的目标是从交互的结果中计算出用户和物品的特征;而FM则正好相反,希望通过物品的特征和某个用户点击这些物品的历史记录,预测该用户点击其他物品的概率,即点击率(click through rate,CTR)。由于被点击和未被点击是一个二分类问题,CTR预估可以用逻辑斯谛回归模型来解决。在逻辑斯谛回归中,线性预测子","depth":0,"inlineStyleRanges":[],"entityRanges":[]},{"type":"atomic","text":"\\boldsymbol\\theta^\\mathrm{T} \\boldsymbol x","data":{"mathjax":true,"teX":"\\boldsymbol\\theta^\\mathrm{T} \\boldsymbol x"},"inlineStyleRanges":[],"entityRanges":[]},{"key":"8306","type":"unstyled","text":"为数据中的每一个特征","depth":0,"inlineStyleRanges":[],"entityRanges":[]},{"type":"atomic","text":"x_i","data":{"mathjax":true,"teX":"x_i"},"inlineStyleRanges":[],"entityRanges":[]},{"key":"8308","type":"unstyled","text":"赋予权重","depth":0,"inlineStyleRanges":[],"entityRanges":[]},{"type":"atomic","text":"\\theta_i","data":{"mathjax":true,"teX":"\\theta_i"},"inlineStyleRanges":[],"entityRanges":[]},{"key":"8310","type":"unstyled","text":",由此来判断数据的分类。然而,这样的线性参数化假设中,输入的不同特征","depth":0,"inlineStyleRanges":[],"entityRanges":[]},{"type":"atomic","text":"x_i","data":{"mathjax":true,"teX":"x_i"},"inlineStyleRanges":[],"entityRanges":[]},{"key":"8312","type":"unstyled","text":"与","depth":0,"inlineStyleRanges":[],"entityRanges":[]},{"type":"atomic","text":"x_j","data":{"mathjax":true,"teX":"x_j"},"inlineStyleRanges":[],"entityRanges":[]},{"key":"8314","type":"unstyled","text":"之间并没有运算,相当于假设不同特征之间是独立的。而在现实中,输入数据的不同特征之间有可能存在关联。例如,假设我们将一张照片中包含的物品作为其特征,那么“红灯笼”与“对联”这两个特征就很可能不是独立的,因为它们都是与春节相关联的意象。因此,作为对线性的逻辑斯谛回归模型的改进,我们进一步引入双线性部分,将输入的不同特征之间的联系也考虑进来。改进后的预测函数为","depth":0,"inlineStyleRanges":[],"entityRanges":[]},{"type":"atomic","text":"\\hat y(\\boldsymbol x) = \\theta_0 + \\sum_{i=1}^d \\theta_i x_i + \\sum_{i=1}^{d-1}\\sum_{j=i+1}^d w_{ij}x_ix_j","data":{"mathjax":true,"teX":"\\hat y(\\boldsymbol x) = \\theta_0 + \\sum_{i=1}^d \\theta_i x_i + \\sum_{i=1}^{d-1}\\sum_{j=i+1}^d w_{ij}x_ix_j"},"inlineStyleRanges":[],"entityRanges":[]},{"key":"8316","type":"unstyled","text":" 其中,","depth":0,"inlineStyleRanges":[],"entityRanges":[]},{"type":"atomic","text":"\\theta_0","data":{"mathjax":true,"teX":"\\theta_0"},"inlineStyleRanges":[],"entityRanges":[]},{"key":"8318","type":"unstyled","text":"是常数项,","depth":0,"inlineStyleRanges":[],"entityRanges":[]},{"type":"atomic","text":"w_{ij}","data":{"mathjax":true,"teX":"w_{ij}"},"inlineStyleRanges":[],"entityRanges":[]},{"key":"8320","type":"unstyled","text":"是权重。上式的第二项将所有不同特征","depth":0,"inlineStyleRanges":[],"entityRanges":[]},{"type":"atomic","text":"x_i","data":{"mathjax":true,"teX":"x_i"},"inlineStyleRanges":[],"entityRanges":[]},{"key":"8322","type":"unstyled","text":"与","depth":0,"inlineStyleRanges":[],"entityRanges":[]},{"type":"atomic","text":"x_j","data":{"mathjax":true,"teX":"x_j"},"inlineStyleRanges":[],"entityRanges":[]},{"key":"8324","type":"unstyled","text":"相乘,从而可以通过权重","depth":0,"inlineStyleRanges":[],"entityRanges":[]},{"type":"atomic","text":"w_{ij}","data":{"mathjax":true,"teX":"w_{ij}"},"inlineStyleRanges":[],"entityRanges":[]},{"key":"8326","type":"unstyled","text":"调整特征组合","depth":0,"inlineStyleRanges":[],"entityRanges":[]},{"type":"atomic","text":"(i,j)","data":{"mathjax":true,"teX":"(i,j)"},"inlineStyleRanges":[],"entityRanges":[]},{"key":"8328","type":"unstyled","text":"对预测结果的影响。将上式改写为向量形式,为:","depth":0,"inlineStyleRanges":[],"entityRanges":[]},{"type":"atomic","text":"\\hat y(\\boldsymbol x) = \\theta_0 + \\boldsymbol\\theta^\\mathrm{T} \\boldsymbol x + \\frac12 \\boldsymbol x^\\mathrm{T} \\boldsymbol W \\boldsymbol x","data":{"mathjax":true,"teX":"\\hat y(\\boldsymbol x) = \\theta_0 + \\boldsymbol\\theta^\\mathrm{T} \\boldsymbol x + \\frac12 \\boldsymbol x^\\mathrm{T} \\boldsymbol W \\boldsymbol x"},"inlineStyleRanges":[],"entityRanges":[]},{"key":"8330","type":"unstyled","text":" 式中,矩阵","depth":0,"inlineStyleRanges":[],"entityRanges":[]},{"type":"atomic","text":"\\boldsymbol W","data":{"mathjax":true,"teX":"\\boldsymbol W"},"inlineStyleRanges":[],"entityRanges":[]},{"key":"8332","type":"unstyled","text":"是对称的,即 ","depth":0,"inlineStyleRanges":[],"entityRanges":[]},{"type":"atomic","text":"w_{ij} = w_{ji}","data":{"mathjax":true,"teX":"w_{ij} = w_{ji}"},"inlineStyleRanges":[],"entityRanges":[]},{"key":"8334","type":"unstyled","text":"。此外,由于我们已经考虑了单独特征的影响,所以不需要将特征与其自身进行交叉,引入","depth":0,"inlineStyleRanges":[],"entityRanges":[]},{"type":"atomic","text":"x_i^2","data":{"mathjax":true,"teX":"x_i^2"},"inlineStyleRanges":[],"entityRanges":[]},{"key":"8336","type":"unstyled","text":"项,从而","depth":0,"inlineStyleRanges":[],"entityRanges":[]},{"type":"atomic","text":"\\boldsymbol W","data":{"mathjax":true,"teX":"\\boldsymbol W"},"inlineStyleRanges":[],"entityRanges":[]},{"key":"8338","type":"unstyled","text":"的对角线上元素都为","depth":0,"inlineStyleRanges":[],"entityRanges":[]},{"type":"atomic","text":"0","data":{"mathjax":true,"teX":"0"},"inlineStyleRanges":[],"entityRanges":[]},{"key":"8340","type":"unstyled","text":"。大家可以自行验证,形如 ","depth":0,"inlineStyleRanges":[],"entityRanges":[]},{"type":"atomic","text":"f(\\boldsymbol x, \\boldsymbol y) = \\boldsymbol x^\\mathrm{T} \\boldsymbol A \\boldsymbol y","data":{"mathjax":true,"teX":"f(\\boldsymbol x, \\boldsymbol y) = \\boldsymbol x^\\mathrm{T} \\boldsymbol A \\boldsymbol y"},"inlineStyleRanges":[],"entityRanges":[]},{"key":"8342","type":"unstyled","text":" 的函数是双线性函数。双线性模型由于考虑了不同特征之间的关系,理论上比线性模型要更准确。然而,在实际应用中,该方法面临着稀疏特征的挑战。","depth":0,"inlineStyleRanges":[],"entityRanges":[]},{"key":"8343","type":"unstyled","text":"  在用向量表示某一事物的离散特征时,一种常用的方法是独热编码(one-hot encoding)。这一方法中,向量的每一维都对应特征的一种取值,样本所具有的特征所在的维度值为1,其他维度为0。如图3所示,某物品的产地是北京、上海、广州、深圳其中之一,为了表示该物品的产地,我们将其编码为4维向量,4个维度依次对应产地北京、上海、广州、深圳。当物品产地为北京时,其特征向量就是","depth":0,"inlineStyleRanges":[],"entityRanges":[]},{"type":"atomic","text":"(1,0,0,0)","data":{"mathjax":true,"teX":"(1,0,0,0)"},"inlineStyleRanges":[],"entityRanges":[]},{"key":"8345","type":"unstyled","text":";物品产地为上海时,其特征向量就是","depth":0,"inlineStyleRanges":[],"entityRanges":[]},{"type":"atomic","text":"(0,1,0,0)","data":{"mathjax":true,"teX":"(0,1,0,0)"},"inlineStyleRanges":[],"entityRanges":[]},{"key":"8347","type":"unstyled","text":"。如果物品有多个特征,就把每个特征编码成的向量依次拼接起来,形成多域独热编码(multi-field one-hot encoding)。假如某种食品产地是上海、生产日期在2月份、食品种类是乳制品,那么它的编码就如图3所示。","depth":0,"inlineStyleRanges":[],"entityRanges":[]},{"key":"8348","type":"atomic","text":"图片","depth":0,"inlineStyleRanges":[],"entityRanges":[{"key":10,"offset":0,"length":4}]},{"key":"8349","type":"unstyled","text":" 图3 多域独热编码示意 ","depth":0,"inlineStyleRanges":[],"entityRanges":[]},{"key":"8350","type":"unstyled","text":"  像这样的独热特征向量往往维度非常高,但只有少数几个位置是1,其他位置都是0,稀疏程度很高。当我们训练上述的模型时,需要对参数","depth":0,"inlineStyleRanges":[],"entityRanges":[]},{"type":"atomic","text":"w_{ij}","data":{"mathjax":true,"teX":"w_{ij}"},"inlineStyleRanges":[],"entityRanges":[]},{"key":"8352","type":"unstyled","text":"求导,结果为 ","depth":0,"inlineStyleRanges":[],"entityRanges":[]},{"type":"atomic","text":"\\displaystyle \\frac{\\partial \\hat y}{\\partial w_{ij}} = x_ix_j","data":{"mathjax":true,"teX":"\\displaystyle \\frac{\\partial \\hat y}{\\partial w_{ij}} = x_ix_j"},"inlineStyleRanges":[],"entityRanges":[]},{"key":"8354","type":"unstyled","text":"。由于特征向量的稀疏性,大多数情况下都有 ","depth":0,"inlineStyleRanges":[],"entityRanges":[]},{"type":"atomic","text":"x_ix_j=0","data":{"mathjax":true,"teX":"x_ix_j=0"},"inlineStyleRanges":[],"entityRanges":[]},{"key":"8356","type":"unstyled","text":",无法对参数","depth":0,"inlineStyleRanges":[],"entityRanges":[]},{"type":"atomic","text":"w_{ij}","data":{"mathjax":true,"teX":"w_{ij}"},"inlineStyleRanges":[],"entityRanges":[]},{"key":"8358","type":"unstyled","text":"进行更新。为了解决这一问题,Steffen Rendle提出了因子分解机模型。该方法将权重矩阵","depth":0,"inlineStyleRanges":[],"entityRanges":[]},{"type":"atomic","text":"\\boldsymbol W","data":{"mathjax":true,"teX":"\\boldsymbol W"},"inlineStyleRanges":[],"entityRanges":[]},{"key":"8360","type":"unstyled","text":"分解成 ","depth":0,"inlineStyleRanges":[],"entityRanges":[]},{"type":"atomic","text":"\\boldsymbol W = \\boldsymbol V \\boldsymbol V^\\mathrm{T}","data":{"mathjax":true,"teX":"\\boldsymbol W = \\boldsymbol V \\boldsymbol V^\\mathrm{T}"},"inlineStyleRanges":[],"entityRanges":[]},{"key":"8362","type":"unstyled","text":",其中 ","depth":0,"inlineStyleRanges":[],"entityRanges":[]},{"type":"atomic","text":"\\boldsymbol V \\in \\mathbb{R}^{d \\times k}","data":{"mathjax":true,"teX":"\\boldsymbol V \\in \\mathbb{R}^{d \\times k}"},"inlineStyleRanges":[],"entityRanges":[]},{"key":"8364","type":"unstyled","text":"。根据矩阵分解的相关理论,当","depth":0,"inlineStyleRanges":[],"entityRanges":[]},{"type":"atomic","text":"\\boldsymbol W","data":{"mathjax":true,"teX":"\\boldsymbol W"},"inlineStyleRanges":[],"entityRanges":[]},{"key":"8366","type":"unstyled","text":"满足某些性质且","depth":0,"inlineStyleRanges":[],"entityRanges":[]},{"type":"atomic","text":"k","data":{"mathjax":true,"teX":"k"},"inlineStyleRanges":[],"entityRanges":[]},{"key":"8368","type":"unstyled","text":"足够大时,我们总可以找到分解矩阵","depth":0,"inlineStyleRanges":[],"entityRanges":[]},{"type":"atomic","text":"\\boldsymbol V","data":{"mathjax":true,"teX":"\\boldsymbol V"},"inlineStyleRanges":[],"entityRanges":[]},{"key":"8370","type":"unstyled","text":"。即使条件不满足,我们也可以用近似分解 ","depth":0,"inlineStyleRanges":[],"entityRanges":[]},{"type":"atomic","text":"\\boldsymbol W \\approx \\boldsymbol V\\boldsymbol V^\\mathrm{T}","data":{"mathjax":true,"teX":"\\boldsymbol W \\approx \\boldsymbol V\\boldsymbol V^\\mathrm{T}"},"inlineStyleRanges":[],"entityRanges":[]},{"key":"8372","type":"unstyled","text":"来代替。设","depth":0,"inlineStyleRanges":[],"entityRanges":[]},{"type":"atomic","text":"\\boldsymbol V","data":{"mathjax":true,"teX":"\\boldsymbol V"},"inlineStyleRanges":[],"entityRanges":[]},{"key":"8374","type":"unstyled","text":"的行向量是 ","depth":0,"inlineStyleRanges":[],"entityRanges":[]},{"type":"atomic","text":"\\boldsymbol v_1, \\ldots, \\boldsymbol v_d","data":{"mathjax":true,"teX":"\\boldsymbol v_1, \\ldots, \\boldsymbol v_d"},"inlineStyleRanges":[],"entityRanges":[]},{"key":"8376","type":"unstyled","text":",也即是对每个特征","depth":0,"inlineStyleRanges":[],"entityRanges":[]},{"type":"atomic","text":"x_i","data":{"mathjax":true,"teX":"x_i"},"inlineStyleRanges":[],"entityRanges":[]},{"key":"8378","type":"unstyled","text":"配一个","depth":0,"inlineStyleRanges":[],"entityRanges":[]},{"type":"atomic","text":"k","data":{"mathjax":true,"teX":"k"},"inlineStyleRanges":[],"entityRanges":[]},{"key":"8380","type":"unstyled","text":"维实数向量","depth":0,"inlineStyleRanges":[],"entityRanges":[]},{"type":"atomic","text":"\\boldsymbol v_i","data":{"mathjax":true,"teX":"\\boldsymbol v_i"},"inlineStyleRanges":[],"entityRanges":[]},{"key":"8382","type":"unstyled","text":",用矩阵乘法直接计算可以得到 ","depth":0,"inlineStyleRanges":[],"entityRanges":[]},{"type":"atomic","text":"w_{ij} = \\langle \\boldsymbol v_i, \\boldsymbol v_j \\rangle","data":{"mathjax":true,"teX":"w_{ij} = \\langle \\boldsymbol v_i, \\boldsymbol v_j \\rangle"},"inlineStyleRanges":[],"entityRanges":[]},{"key":"8384","type":"unstyled","text":",这样模型的预测函数可以写为","depth":0,"inlineStyleRanges":[],"entityRanges":[]},{"type":"atomic","text":"\\hat y(\\boldsymbol x) = \\theta_0 + \\boldsymbol\\theta^\\mathrm{T} \\boldsymbol x + \\sum_{i=1}^{d-1}\\sum_{j=i+1}^d \\langle \\boldsymbol v_i, \\boldsymbol v_j \\rangle x_ix_j","data":{"mathjax":true,"teX":"\\hat y(\\boldsymbol x) = \\theta_0 + \\boldsymbol\\theta^\\mathrm{T} \\boldsymbol x + \\sum_{i=1}^{d-1}\\sum_{j=i+1}^d \\langle \\boldsymbol v_i, \\boldsymbol v_j \\rangle x_ix_j"},"inlineStyleRanges":[],"entityRanges":[]},{"key":"8386","type":"unstyled","text":" 此时,再对参数","depth":0,"inlineStyleRanges":[],"entityRanges":[]},{"type":"atomic","text":"\\boldsymbol v_s","data":{"mathjax":true,"teX":"\\boldsymbol v_s"},"inlineStyleRanges":[],"entityRanges":[]},{"key":"8388","type":"unstyled","text":"求梯度的结果为","depth":0,"inlineStyleRanges":[],"entityRanges":[]},{"type":"atomic","text":"\\begin{aligned} \\nabla_{\\boldsymbol v_s} \\hat y \u0026= \\nabla_{\\boldsymbol v_s} \\left(\\sum_{i=1}^{d-1}\\sum_{j=i+1}^d \\langle \\boldsymbol v_i, \\boldsymbol v_j \\rangle x_ix_j \\right) \\\\ \u0026= \\nabla_{\\boldsymbol v_s} \\left( \\sum_{j=s+1}^d \\langle \\boldsymbol v_s, \\boldsymbol v_j\\rangle x_sx_j + \\sum_{i=1}^{s-1} \\langle \\boldsymbol v_i, \\boldsymbol v_s \\rangle x_ix_s \\right) \\\\ \u0026= x_s \\sum_{j=s+1}^d x_j\\boldsymbol v_j + x_s \\sum_{i=1}^{s-1} x_i \\boldsymbol v_i \\\\ \u0026= x_s \\sum_{i=1}^d x_i \\boldsymbol v_i - x_s^2 \\boldsymbol v_s \\end{aligned}","data":{"mathjax":true,"teX":"\\begin{aligned} \\nabla_{\\boldsymbol v_s} \\hat y \u0026= \\nabla_{\\boldsymbol v_s} \\left(\\sum_{i=1}^{d-1}\\sum_{j=i+1}^d \\langle \\boldsymbol v_i, \\boldsymbol v_j \\rangle x_ix_j \\right) \\\\ \u0026= \\nabla_{\\boldsymbol v_s} \\left( \\sum_{j=s+1}^d \\langle \\boldsymbol v_s, \\boldsymbol v_j\\rangle x_sx_j + \\sum_{i=1}^{s-1} \\langle \\boldsymbol v_i, \\boldsymbol v_s \\rangle x_ix_s \\right) \\\\ \u0026= x_s \\sum_{j=s+1}^d x_j\\boldsymbol v_j + x_s \\sum_{i=1}^{s-1} x_i \\boldsymbol v_i \\\\ \u0026= x_s \\sum_{i=1}^d x_i \\boldsymbol v_i - x_s^2 \\boldsymbol v_s \\end{aligned}"},"inlineStyleRanges":[],"entityRanges":[]},{"key":"8390","type":"unstyled","text":"  上面的计算过程中,为了简洁,我们采用了不太严谨的写法,当 ","depth":0,"inlineStyleRanges":[],"entityRanges":[]},{"type":"atomic","text":"s=1","data":{"mathjax":true,"teX":"s=1"},"inlineStyleRanges":[],"entityRanges":[]},{"key":"8392","type":"unstyled","text":" 或 ","depth":0,"inlineStyleRanges":[],"entityRanges":[]},{"type":"atomic","text":"s=d","data":{"mathjax":true,"teX":"s=d"},"inlineStyleRanges":[],"entityRanges":[]},{"key":"8394","type":"unstyled","text":" 时会出现求和下界大于上界的情况。此时我们规定求和的结果为零。如果要完全展开,只需要做类似于 ","depth":0,"inlineStyleRanges":[],"entityRanges":[]},{"type":"atomic","text":"\\sum\\limits_{j=s+1}^d \\langle \\boldsymbol v_s, \\boldsymbol v_j \\rangle x_sx_j","data":{"mathjax":true,"teX":"\\sum\\limits_{j=s+1}^d \\langle \\boldsymbol v_s, \\boldsymbol v_j \\rangle x_sx_j"},"inlineStyleRanges":[],"entityRanges":[]},{"key":"8396","type":"unstyled","text":" 变为 ","depth":0,"inlineStyleRanges":[],"entityRanges":[]},{"type":"atomic","text":"\\sum\\limits_{j=s}^d \\langle \\boldsymbol v_s, \\boldsymbol v_j \\rangle x_sx_j - \\langle \\boldsymbol v_s, \\boldsymbol v_s \\rangle x_s^2","data":{"mathjax":true,"teX":"\\sum\\limits_{j=s}^d \\langle \\boldsymbol v_s, \\boldsymbol v_j \\rangle x_sx_j - \\langle \\boldsymbol v_s, \\boldsymbol v_s \\rangle x_s^2"},"inlineStyleRanges":[],"entityRanges":[]},{"key":"8398","type":"unstyled","text":" 的裂项操作即可。从该结果中可以看出,只要 ","depth":0,"inlineStyleRanges":[],"entityRanges":[]},{"type":"atomic","text":"x_s \\neq 0","data":{"mathjax":true,"teX":"x_s \\neq 0"},"inlineStyleRanges":[],"entityRanges":[]},{"key":"8400","type":"unstyled","text":",参数","depth":0,"inlineStyleRanges":[],"entityRanges":[]},{"type":"atomic","text":"\\boldsymbol v_s","data":{"mathjax":true,"teX":"\\boldsymbol v_s"},"inlineStyleRanges":[],"entityRanges":[]},{"key":"8402","type":"unstyled","text":"的梯度就不为零,可以用梯度相关的算法对其更新。因此,即使特征向量","depth":0,"inlineStyleRanges":[],"entityRanges":[]},{"type":"atomic","text":"\\boldsymbol x","data":{"mathjax":true,"teX":"\\boldsymbol x"},"inlineStyleRanges":[],"entityRanges":[]},{"key":"8404","type":"unstyled","text":"非常稀疏,FM模型也可以正常进行训练。","depth":0,"inlineStyleRanges":[],"entityRanges":[]},{"key":"8405","type":"unstyled","text":"  至此,我们的模型还存在一个问题。双线性模型考虑不同特征之间乘积的做法,虽然提升了模型的能力,但也引入了额外的计算开销。对一个样本来说,线性模型需要计算","depth":0,"inlineStyleRanges":[],"entityRanges":[]},{"type":"atomic","text":"\\boldsymbol\\theta^\\mathrm{T} \\boldsymbol x","data":{"mathjax":true,"teX":"\\boldsymbol\\theta^\\mathrm{T} \\boldsymbol x"},"inlineStyleRanges":[],"entityRanges":[]},{"key":"8407","type":"unstyled","text":",时间复杂度为","depth":0,"inlineStyleRanges":[],"entityRanges":[]},{"type":"atomic","text":"O(d)","data":{"mathjax":true,"teX":"O(d)"},"inlineStyleRanges":[],"entityRanges":[]},{"key":"8409","type":"unstyled","text":";而我们的模型需要计算每一对特征","depth":0,"inlineStyleRanges":[],"entityRanges":[]},{"type":"atomic","text":"(x_i,x_j)","data":{"mathjax":true,"teX":"(x_i,x_j)"},"inlineStyleRanges":[],"entityRanges":[]},{"key":"8411","type":"unstyled","text":"的乘积,以及参数","depth":0,"inlineStyleRanges":[],"entityRanges":[]},{"type":"atomic","text":"\\boldsymbol v_i","data":{"mathjax":true,"teX":"\\boldsymbol v_i"},"inlineStyleRanges":[],"entityRanges":[]},{"key":"8413","type":"unstyled","text":"与","depth":0,"inlineStyleRanges":[],"entityRanges":[]},{"type":"atomic","text":"\\boldsymbol v_j","data":{"mathjax":true,"teX":"\\boldsymbol v_j"},"inlineStyleRanges":[],"entityRanges":[]},{"key":"8415","type":"unstyled","text":"的内积,时间复杂度为","depth":0,"inlineStyleRanges":[],"entityRanges":[]},{"type":"atomic","text":"O(kd^2)","data":{"mathjax":true,"teX":"O(kd^2)"},"inlineStyleRanges":[],"entityRanges":[]},{"key":"8417","type":"unstyled","text":"。上面已经讲过,多热编码的特征向量维度常常特别高,因此这一时间开销是相当巨大的。但是,我们可以对改进后的预测函数 ","depth":0,"inlineStyleRanges":[],"entityRanges":[]},{"type":"atomic","text":"\\hat y(\\boldsymbol x) = \\theta_0 + \\sum\\limits_{i=1}^d \\theta_i x_i + \\sum\\limits_{i=1}^{d-1}\\sum\\limits_{j=i+1}^d w_{ij}x_ix_j","data":{"mathjax":true,"teX":"\\hat y(\\boldsymbol x) = \\theta_0 + \\sum\\limits_{i=1}^d \\theta_i x_i + \\sum\\limits_{i=1}^{d-1}\\sum\\limits_{j=i+1}^d w_{ij}x_ix_j"},"inlineStyleRanges":[],"entityRanges":[]},{"key":"8419","type":"unstyled","text":" 中的最后一项做一些变形,改变计算顺序来降低时间复杂度。变形方式如下:","depth":0,"inlineStyleRanges":[],"entityRanges":[]},{"type":"atomic","text":"\\begin{aligned} \\sum_{i=1}^{d-1}\\sum_{j=i+1}^d \\langle \\boldsymbol v_i, \\boldsymbol v_j \\rangle x_ix_j \u0026= \\frac{1}{2} \\left(\\sum_{i=1}^d\\sum_{j=1}^d \\langle \\boldsymbol v_i, \\boldsymbol v_j \\rangle x_ix_j - \\sum_{i=1}^d \\langle \\boldsymbol v_i, \\boldsymbol v_i \\rangle x_i^2 \\right) \\\\ \u0026= \\frac{1}{2} \\left(\\sum_{i=1}^d\\sum_{j=1}^d \\langle x_i \\boldsymbol v_{i}, x_j \\boldsymbol v_{j} \\rangle - \\sum_{i=1}^d \\langle x_i \\boldsymbol v_i, x_i \\boldsymbol v_i \\rangle \\right) \\\\ \u0026= \\frac12 \\left\\langle \\sum_{i=1}^d x_i \\boldsymbol v_i, \\sum_{j=1}^d x_j \\boldsymbol v_j\\right\\rangle - \\frac12 \\sum_{i=1}^d \\langle x_i \\boldsymbol v_i, x_i \\boldsymbol v_i \\rangle \\\\ \u0026= \\frac12 \\sum_{l=1}^k \\left(\\sum_{i=1}^d v_{il}x_i \\right)^2 - \\frac12 \\sum_{l=1}^k \\sum_{i=1}^d v_{il}^2x_i^2 \\end{aligned}","data":{"mathjax":true,"teX":"\\begin{aligned} \\sum_{i=1}^{d-1}\\sum_{j=i+1}^d \\langle \\boldsymbol v_i, \\boldsymbol v_j \\rangle x_ix_j \u0026= \\frac{1}{2} \\left(\\sum_{i=1}^d\\sum_{j=1}^d \\langle \\boldsymbol v_i, \\boldsymbol v_j \\rangle x_ix_j - \\sum_{i=1}^d \\langle \\boldsymbol v_i, \\boldsymbol v_i \\rangle x_i^2 \\right) \\\\ \u0026= \\frac{1}{2} \\left(\\sum_{i=1}^d\\sum_{j=1}^d \\langle x_i \\boldsymbol v_{i}, x_j \\boldsymbol v_{j} \\rangle - \\sum_{i=1}^d \\langle x_i \\boldsymbol v_i, x_i \\boldsymbol v_i \\rangle \\right) \\\\ \u0026= \\frac12 \\left\\langle \\sum_{i=1}^d x_i \\boldsymbol v_i, \\sum_{j=1}^d x_j \\boldsymbol v_j\\right\\rangle - \\frac12 \\sum_{i=1}^d \\langle x_i \\boldsymbol v_i, x_i \\boldsymbol v_i \\rangle \\\\ \u0026= \\frac12 \\sum_{l=1}^k \\left(\\sum_{i=1}^d v_{il}x_i \\right)^2 - \\frac12 \\sum_{l=1}^k \\sum_{i=1}^d v_{il}^2x_i^2 \\end{aligned}"},"inlineStyleRanges":[],"entityRanges":[]},{"key":"8421","type":"unstyled","text":"  在变形的第二步和第三步,我们利用了向量内积的双线性性质,将标量","depth":0,"inlineStyleRanges":[],"entityRanges":[]},{"type":"atomic","text":"x_i, x_j","data":{"mathjax":true,"teX":"x_i, x_j"},"inlineStyleRanges":[],"entityRanges":[]},{"key":"8423","type":"unstyled","text":"以及求和都移到内积中去。最后的结果中只含有两重求和,外层为","depth":0,"inlineStyleRanges":[],"entityRanges":[]},{"type":"atomic","text":"k","data":{"mathjax":true,"teX":"k"},"inlineStyleRanges":[],"entityRanges":[]},{"key":"8425","type":"unstyled","text":"次,内层为","depth":0,"inlineStyleRanges":[],"entityRanges":[]},{"type":"atomic","text":"d","data":{"mathjax":true,"teX":"d"},"inlineStyleRanges":[],"entityRanges":[]},{"key":"8427","type":"unstyled","text":"次,因此整体的时间复杂度为","depth":0,"inlineStyleRanges":[],"entityRanges":[]},{"type":"atomic","text":"O(kd)","data":{"mathjax":true,"teX":"O(kd)"},"inlineStyleRanges":[],"entityRanges":[]},{"key":"8429","type":"unstyled","text":"。这样,FM的时间复杂度关于特征规模","depth":0,"inlineStyleRanges":[],"entityRanges":[]},{"type":"atomic","text":"d","data":{"mathjax":true,"teX":"d"},"inlineStyleRanges":[],"entityRanges":[]},{"key":"8431","type":"unstyled","text":"的增长从平方变为线性,得到了大幅优化。至此,FM的预测公式为","depth":0,"inlineStyleRanges":[],"entityRanges":[]},{"type":"atomic","text":"\\hat y(\\boldsymbol x) = \\theta_0 + \\sum_{i=1}^d \\theta_i x_i + \\frac12 \\sum_{l=1}^k \\left(\\left(\\sum_{i=1}^d v_{il}x_i \\right)^2 - \\sum_{i=1}^d v_{il}^2 x_i^2 \\right)","data":{"mathjax":true,"teX":"\\hat y(\\boldsymbol x) = \\theta_0 + \\sum_{i=1}^d \\theta_i x_i + \\frac12 \\sum_{l=1}^k \\left(\\left(\\sum_{i=1}^d v_{il}x_i \\right)^2 - \\sum_{i=1}^d v_{il}^2 x_i^2 \\right)"},"inlineStyleRanges":[],"entityRanges":[]},{"key":"8433","type":"unstyled","text":" 如果要做分类任务,只需要再加上softmax函数即可。","depth":0,"inlineStyleRanges":[],"entityRanges":[]},{"key":"8434","type":"unstyled","text":"  在上面的模型中,我们只考虑了两个特征之间的组合,因此该FM也被称为二阶FM。如果进一步考虑多个特征的组合,如","depth":0,"inlineStyleRanges":[],"entityRanges":[]},{"type":"atomic","text":"x_ix_jx_k","data":{"mathjax":true,"teX":"x_ix_jx_k"},"inlineStyleRanges":[],"entityRanges":[]},{"key":"8436","type":"unstyled","text":",就可以得到高阶的FM模型。由于高阶FM较为复杂,并且也不再是双线性模型,本文在此略去,如果感兴趣可以自行查阅相关资料。","depth":0,"inlineStyleRanges":[],"entityRanges":[]},{"key":"8437","type":"header-three","text":"四、动手实现因子分解机","depth":0,"inlineStyleRanges":[],"entityRanges":[],"data":{"text":"%E5%9B%9B%E3%80%81%E5%8A%A8%E6%89%8B%E5%AE%9E%E7%8E%B0%E5%9B%A0%E5%AD%90%E5%88%86%E8%A7%A3%E6%9C%BA"}},{"key":"8438","type":"unstyled","text":"  下面,我们来动手实现二阶FM模型。本节采用的数据集是为FM制作的示例数据集fm_dataset.csv,包含了某个用户浏览过的商品的特征,以及用户是否点击过这个商品。数据集的每一行包含一个商品,前24列是其特征,最后一列是0或1,分别表示用户没有或有点击该商品。我们的目标是根据输入特征预测用户在测试集上的行为,是一个二分类问题。我们先导入必要的模块和数据集并处理数据,将其划分为训练集和测试集。","depth":0,"inlineStyleRanges":[{"offset":39,"length":14,"style":"CODE"}],"entityRanges":[]},{"key":"8439","type":"code-block","text":"import numpy as np\nimport matplotlib.pyplot as plt\nfrom sklearn import metrics # sklearn中的评价指标函数库\nfrom tqdm import tqdm\n\n# 导入数据集\ndata = np.loadtxt('fm_dataset.csv', delimiter=',')\n\n# 划分数据集\nnp.random.seed(0)\nratio = 0.8\nsplit = int(ratio * len(data))\nx_train = data[:split, :-1]\ny_train = data[:split, -1]\nx_test = data[split:, :-1]\ny_test = data[split:, -1]\n# 特征数\nfeature_num = x_train.shape[1]\nprint('训练集大小:', len(x_train))\nprint('测试集大小:', len(x_test))\nprint('特征数:', feature_num)","depth":0,"inlineStyleRanges":[],"entityRanges":[],"data":{"syntax":"javascript"}},{"key":"8440","type":"atomic","text":"图片","depth":0,"inlineStyleRanges":[],"entityRanges":[{"key":11,"offset":0,"length":4}]},{"key":"8441","type":"unstyled","text":"  然后,我们将FM模型定义成类。与MF相同,我们在类中实现预测和梯度更新方法。","depth":0,"inlineStyleRanges":[],"entityRanges":[]},{"key":"8442","type":"code-block","text":"class FM:\n\n def __init__(self, feature_num, vector_dim):\n # vector_dim代表公式中的k,为向量v的维度\n self.theta0 = 0.0 # 常数项\n self.theta = np.zeros(feature_num) # 线性参数\n self.v = np.random.normal(size=(feature_num, vector_dim)) # 双线性参数\n self.eps = 1e-6 # 精度参数\n \n def _logistic(self, x):\n # 工具函数,用于将预测转化为概率\n return 1 / (1 + np.exp(-x))\n\n def pred(self, x):\n # 线性部分\n linear_term = self.theta0 + x @ self.theta\n # 双线性部分\n square_of_sum = np.square(x @ self.v)\n sum_of_square = np.square(x) @ np.square(self.v)\n # 最终预测\n y_pred = self._logistic(linear_term + 0.5 * np.sum(square_of_sum - sum_of_square, axis=1))\n # 为了防止后续梯度过大,对预测值进行裁剪,将其限制在某一范围内\n y_pred = np.clip(y_pred, self.eps, 1 - self.eps)\n return y_pred\n \n def update(self, grad0, grad_theta, grad_v, lr):\n self.theta0 -= lr * grad0\n self.theta -= lr * grad_theta\n self.v -= lr * grad_v","depth":0,"inlineStyleRanges":[],"entityRanges":[],"data":{"syntax":"javascript"}},{"key":"8443","type":"unstyled","text":"  对于分类任务,我们仍用MLE作为训练时的损失函数。在测试集上,我们采用AUC作为评价指标。由于我们在逻辑斯谛回归中已经动手实现过AUC,简单起见,这里我们就直接使用sklearn中的函数直接计算AUC。我们用SGD进行参数更新,训练完成后,我们把训练过程中的准确率和AUC绘制出来。","depth":0,"inlineStyleRanges":[],"entityRanges":[{"key":12,"offset":52,"length":6}]},{"key":"8444","type":"code-block","text":"# 超参数设置,包括学习率、训练轮数等\nvector_dim = 16\nlearning_rate = 0.01\nlbd = 0.05\nmax_training_step = 200\nbatch_size = 32\n\n# 初始化模型\nnp.random.seed(0)\nmodel = FM(feature_num, vector_dim)\n\ntrain_acc = []\ntest_acc = []\ntrain_auc = []\ntest_auc = []\n\nwith tqdm(range(max_training_step)) as pbar:\n for epoch in pbar:\n st = 0\n while st \u003c len(x_train):\n ed = min(st + batch_size, len(x_train))\n X = x_train[st: ed]\n Y = y_train[st: ed]\n st += batch_size\n # 计算模型预测\n y_pred = model.pred(X)\n # 计算交叉熵损失\n cross_entropy = -Y * np.log(y_pred) - (1 - Y) * np.log(1 - y_pred)\n loss = np.sum(cross_entropy)\n # 计算损失函数对y的梯度,再根据链式法则得到总梯度\n grad_y = (y_pred - Y).reshape(-1, 1)\n # 计算y对参数的梯度\n # 常数项\n grad0 = np.sum(grad_y * (1 / len(X) + lbd))\n # 线性项\n grad_theta = np.sum(grad_y * (X / len(X) + lbd * model.theta), axis=0)\n # 双线性项\n grad_v = np.zeros((feature_num, vector_dim))\n for i, x in enumerate(X):\n # 先计算sum(x_i * v_i)\n xv = x @ model.v\n grad_vi = np.zeros((feature_num, vector_dim))\n for s in range(feature_num):\n grad_vi[s] += x[s] * xv - (x[s] ** 2) * model.v[s]\n grad_v += grad_y[i] * grad_vi\n grad_v = grad_v / len(X) + lbd * model.v\n model.update(grad0, grad_theta, grad_v, learning_rate)\n\n pbar.set_postfix({\n '训练轮数': epoch,\n '训练损失': f'{loss:.4f}',\n '训练集准确率': train_acc[-1] if train_acc else None,\n '测试集准确率': test_acc[-1] if test_acc else None\n })\n # 计算模型预测的准确率和AUC\n # 预测准确率,阈值设置为0.5\n y_train_pred = (model.pred(x_train) \u003e= 0.5)\n acc = np.mean(y_train_pred == y_train)\n train_acc.append(acc)\n auc = metrics.roc_auc_score(y_train, y_train_pred) # sklearn中的AUC函数\n train_auc.append(auc)\n\n y_test_pred = (model.pred(x_test) \u003e= 0.5)\n acc = np.mean(y_test_pred == y_test)\n test_acc.append(acc)\n auc = metrics.roc_auc_score(y_test, y_test_pred) \n test_auc.append(auc)\n \nprint(f'测试集准确率:{test_acc[-1]},\\t测试集AUC:{test_auc[-1]}')","depth":0,"inlineStyleRanges":[],"entityRanges":[],"data":{"syntax":"javascript"}},{"key":"8445","type":"atomic","text":"图片","depth":0,"inlineStyleRanges":[],"entityRanges":[{"key":13,"offset":0,"length":4}]},{"key":"8446","type":"unstyled","text":"  最后,我们把训练过程中在训练集和测试集上的精确率和AUC绘制出来,观察训练效果。","depth":0,"inlineStyleRanges":[],"entityRanges":[]},{"key":"8447","type":"code-block","text":"# 绘制训练曲线 \nplt.figure(figsize=(13, 5))\nx_plot = np.arange(len(train_acc)) + 1\n\nplt.subplot(121)\nplt.plot(x_plot, train_acc, color='blue', label='train acc')\nplt.plot(x_plot, test_acc, color='red', ls='--', label='test acc')\nplt.xlabel('Epoch')\nplt.ylabel('Accuracy')\nplt.legend()\n\nplt.subplot(122)\nplt.plot(x_plot, train_auc, color='blue', label='train AUC')\nplt.plot(x_plot, test_auc, color='red', ls='--', label='test AUC')\nplt.xlabel('Epoch')\nplt.ylabel('AUC')\nplt.legend()\nplt.show()","depth":0,"inlineStyleRanges":[],"entityRanges":[],"data":{"syntax":"javascript"}},{"key":"8448","type":"atomic","text":"图片","depth":0,"inlineStyleRanges":[],"entityRanges":[{"key":14,"offset":0,"length":4}]},{"key":"8449","type":"header-three","text":"五、拓展:概率矩阵分解","depth":0,"inlineStyleRanges":[],"entityRanges":[],"data":{"text":"%E4%BA%94%E3%80%81%E6%8B%93%E5%B1%95%EF%BC%9A%E6%A6%82%E7%8E%87%E7%9F%A9%E9%98%B5%E5%88%86%E8%A7%A3"}},{"key":"8450","type":"unstyled","text":" 概率矩阵分解(probabilistic matrix factorization,PMF)是另一种常用的双线性模型。与矩阵分解模型不同,它对用户给电影的评分","depth":0,"inlineStyleRanges":[],"entityRanges":[]},{"type":"atomic","text":"r_{ij}","data":{"mathjax":true,"teX":"r_{ij}"},"inlineStyleRanges":[],"entityRanges":[]},{"key":"8452","type":"unstyled","text":"的分布进行了先验假设,认为其满足正态分布:","depth":0,"inlineStyleRanges":[],"entityRanges":[]},{"type":"atomic","text":"r_{ij} \\sim \\mathcal{N}(\\boldsymbol p_i^\\mathrm{T} \\boldsymbol q_j, \\sigma^2)","data":{"mathjax":true,"teX":"r_{ij} \\sim \\mathcal{N}(\\boldsymbol p_i^\\mathrm{T} \\boldsymbol q_j, \\sigma^2)"},"inlineStyleRanges":[],"entityRanges":[]},{"key":"8454","type":"unstyled","text":" 其中","depth":0,"inlineStyleRanges":[],"entityRanges":[]},{"type":"atomic","text":"\\sigma^2","data":{"mathjax":true,"teX":"\\sigma^2"},"inlineStyleRanges":[],"entityRanges":[]},{"key":"8456","type":"unstyled","text":"是正态分布的方差,与用户和电影无关。注意,","depth":0,"inlineStyleRanges":[],"entityRanges":[]},{"type":"atomic","text":"\\boldsymbol p_i","data":{"mathjax":true,"teX":"\\boldsymbol p_i"},"inlineStyleRanges":[],"entityRanges":[]},{"key":"8458","type":"unstyled","text":"与","depth":0,"inlineStyleRanges":[],"entityRanges":[]},{"type":"atomic","text":"\\boldsymbol q_j","data":{"mathjax":true,"teX":"\\boldsymbol q_j"},"inlineStyleRanges":[],"entityRanges":[]},{"key":"8460","type":"unstyled","text":"都是未知的。记 ","depth":0,"inlineStyleRanges":[],"entityRanges":[]},{"type":"atomic","text":"I_{ij} = \\mathbb{I}(r_{ij} \\text{存在})","data":{"mathjax":true,"teX":"I_{ij} = \\mathbb{I}(r_{ij} \\text{存在})"},"inlineStyleRanges":[],"entityRanges":[]},{"key":"8462","type":"unstyled","text":",即当用户","depth":0,"inlineStyleRanges":[],"entityRanges":[]},{"type":"atomic","text":"i","data":{"mathjax":true,"teX":"i"},"inlineStyleRanges":[],"entityRanges":[]},{"key":"8464","type":"unstyled","text":"对电影","depth":0,"inlineStyleRanges":[],"entityRanges":[]},{"type":"atomic","text":"j","data":{"mathjax":true,"teX":"j"},"inlineStyleRanges":[],"entityRanges":[]},{"key":"8466","type":"unstyled","text":"打过分时 ","depth":0,"inlineStyleRanges":[],"entityRanges":[]},{"type":"atomic","text":"I_{ij}=1","data":{"mathjax":true,"teX":"I_{ij}=1"},"inlineStyleRanges":[],"entityRanges":[]},{"key":"8468","type":"unstyled","text":",否则 ","depth":0,"inlineStyleRanges":[],"entityRanges":[]},{"type":"atomic","text":"I_{ij}=0","data":{"mathjax":true,"teX":"I_{ij}=0"},"inlineStyleRanges":[],"entityRanges":[]},{"key":"8470","type":"unstyled","text":"。再假设不同的评分采样之间互相独立,那么,我们观测到的","depth":0,"inlineStyleRanges":[],"entityRanges":[]},{"type":"atomic","text":"\\boldsymbol R","data":{"mathjax":true,"teX":"\\boldsymbol R"},"inlineStyleRanges":[],"entityRanges":[]},{"key":"8472","type":"unstyled","text":"出现的概率是","depth":0,"inlineStyleRanges":[],"entityRanges":[]},{"type":"atomic","text":"P(\\boldsymbol R | \\boldsymbol P, \\boldsymbol Q, \\sigma) = \\prod_{i=1}^N\\prod_{j=1}^M p_\\mathcal{N}(r_{ij}| \\boldsymbol p_i^\\mathrm{T} \\boldsymbol q_j, \\sigma^2)^{I_{ij}}","data":{"mathjax":true,"teX":"P(\\boldsymbol R | \\boldsymbol P, \\boldsymbol Q, \\sigma) = \\prod_{i=1}^N\\prod_{j=1}^M p_\\mathcal{N}(r_{ij}| \\boldsymbol p_i^\\mathrm{T} \\boldsymbol q_j, \\sigma^2)^{I_{ij}}"},"inlineStyleRanges":[],"entityRanges":[]},{"key":"8474","type":"unstyled","text":" 这里,我们用 ","depth":0,"inlineStyleRanges":[],"entityRanges":[]},{"type":"atomic","text":"p_\\mathcal{N}(x|\\mu,\\sigma^2)","data":{"mathjax":true,"teX":"p_\\mathcal{N}(x|\\mu,\\sigma^2)"},"inlineStyleRanges":[],"entityRanges":[]},{"key":"8476","type":"unstyled","text":" 表示正态分布 ","depth":0,"inlineStyleRanges":[],"entityRanges":[]},{"type":"atomic","text":"\\mathcal{N}(\\mu, \\sigma^2)","data":{"mathjax":true,"teX":"\\mathcal{N}(\\mu, \\sigma^2)"},"inlineStyleRanges":[],"entityRanges":[]},{"key":"8478","type":"unstyled","text":" 的概率密度函数,其完整表达式为 ","depth":0,"inlineStyleRanges":[],"entityRanges":[]},{"type":"atomic","text":"p_\\mathcal{N}(x|\\mu,\\sigma^2) = \\frac{1}{\\sqrt{2\\pi\\sigma^2}}\\text{e}^{-\\frac{(x-\\mu)^2}{2\\sigma^2}}","data":{"mathjax":true,"teX":"p_\\mathcal{N}(x|\\mu,\\sigma^2) = \\frac{1}{\\sqrt{2\\pi\\sigma^2}}\\text{e}^{-\\frac{(x-\\mu)^2}{2\\sigma^2}}"},"inlineStyleRanges":[],"entityRanges":[]},{"key":"8480","type":"unstyled","text":" 对于那些空缺的","depth":0,"inlineStyleRanges":[],"entityRanges":[]},{"type":"atomic","text":"r_{ij}","data":{"mathjax":true,"teX":"r_{ij}"},"inlineStyleRanges":[],"entityRanges":[]},{"key":"8482","type":"unstyled","text":",由于 ","depth":0,"inlineStyleRanges":[],"entityRanges":[]},{"type":"atomic","text":"I_{ij}=0","data":{"mathjax":true,"teX":"I_{ij}=0"},"inlineStyleRanges":[],"entityRanges":[]},{"key":"8484","type":"unstyled","text":",","depth":0,"inlineStyleRanges":[],"entityRanges":[]},{"type":"atomic","text":"p_\\mathcal{N}(r_{ij}|\\boldsymbol p_i^\\mathrm{T} \\boldsymbol q_j, \\sigma^2)^{I_{ij}}=1","data":{"mathjax":true,"teX":"p_\\mathcal{N}(r_{ij}|\\boldsymbol p_i^\\mathrm{T} \\boldsymbol q_j, \\sigma^2)^{I_{ij}}=1"},"inlineStyleRanges":[],"entityRanges":[]},{"key":"8486","type":"unstyled","text":",对连乘没有贡献,最终的概率只由已知部分计算得出。接下来,我们进一步假设用户的喜好","depth":0,"inlineStyleRanges":[],"entityRanges":[]},{"type":"atomic","text":"\\boldsymbol p_i","data":{"mathjax":true,"teX":"\\boldsymbol p_i"},"inlineStyleRanges":[],"entityRanges":[]},{"key":"8488","type":"unstyled","text":"和电影的特征","depth":0,"inlineStyleRanges":[],"entityRanges":[]},{"type":"atomic","text":"\\boldsymbol q_j","data":{"mathjax":true,"teX":"\\boldsymbol q_j"},"inlineStyleRanges":[],"entityRanges":[]},{"key":"8490","type":"unstyled","text":"都满足均值为","depth":0,"inlineStyleRanges":[],"entityRanges":[]},{"type":"atomic","text":"\\boldsymbol 0","data":{"mathjax":true,"teX":"\\boldsymbol 0"},"inlineStyleRanges":[],"entityRanges":[]},{"key":"8492","type":"unstyled","text":"的正态分布,协方差矩阵分别为","depth":0,"inlineStyleRanges":[],"entityRanges":[]},{"type":"atomic","text":"\\sigma_P^2\\boldsymbol I","data":{"mathjax":true,"teX":"\\sigma_P^2\\boldsymbol I"},"inlineStyleRanges":[],"entityRanges":[]},{"key":"8494","type":"unstyled","text":"和","depth":0,"inlineStyleRanges":[],"entityRanges":[]},{"type":"atomic","text":"\\sigma_Q^2 \\boldsymbol I","data":{"mathjax":true,"teX":"\\sigma_Q^2 \\boldsymbol I"},"inlineStyleRanges":[],"entityRanges":[]},{"key":"8496","type":"unstyled","text":",即","depth":0,"inlineStyleRanges":[],"entityRanges":[]},{"type":"atomic","text":"P(\\boldsymbol P | \\sigma_P) = \\prod_{i=1}^N p_\\mathcal{N}(\\boldsymbol p_i| \\boldsymbol 0, \\sigma_P^2 \\boldsymbol I), \\quad P(\\boldsymbol Q | \\sigma_Q) = \\prod_{j=1}^M p_\\mathcal{N}(\\boldsymbol q_j | \\boldsymbol 0, \\sigma_Q^2 \\boldsymbol I)","data":{"mathjax":true,"teX":"P(\\boldsymbol P | \\sigma_P) = \\prod_{i=1}^N p_\\mathcal{N}(\\boldsymbol p_i| \\boldsymbol 0, \\sigma_P^2 \\boldsymbol I), \\quad P(\\boldsymbol Q | \\sigma_Q) = \\prod_{j=1}^M p_\\mathcal{N}(\\boldsymbol q_j | \\boldsymbol 0, \\sigma_Q^2 \\boldsymbol I)"},"inlineStyleRanges":[],"entityRanges":[]},{"key":"8498","type":"unstyled","text":" 根据全概率公式 ","depth":0,"inlineStyleRanges":[],"entityRanges":[]},{"type":"atomic","text":"P(X,Y) = P(X|Y)P(Y)","data":{"mathjax":true,"teX":"P(X,Y) = P(X|Y)P(Y)"},"inlineStyleRanges":[],"entityRanges":[]},{"key":"8500","type":"unstyled","text":",并注意到","depth":0,"inlineStyleRanges":[],"entityRanges":[]},{"type":"atomic","text":"\\boldsymbol R","data":{"mathjax":true,"teX":"\\boldsymbol R"},"inlineStyleRanges":[],"entityRanges":[]},{"key":"8502","type":"unstyled","text":"与","depth":0,"inlineStyleRanges":[],"entityRanges":[]},{"type":"atomic","text":"\\sigma_P, \\sigma_Q","data":{"mathjax":true,"teX":"\\sigma_P, \\sigma_Q"},"inlineStyleRanges":[],"entityRanges":[]},{"key":"8504","type":"unstyled","text":"无关,我们可以计算出","depth":0,"inlineStyleRanges":[],"entityRanges":[]},{"type":"atomic","text":"\\boldsymbol P","data":{"mathjax":true,"teX":"\\boldsymbol P"},"inlineStyleRanges":[],"entityRanges":[]},{"key":"8506","type":"unstyled","text":"与","depth":0,"inlineStyleRanges":[],"entityRanges":[]},{"type":"atomic","text":"\\boldsymbol Q","data":{"mathjax":true,"teX":"\\boldsymbol Q"},"inlineStyleRanges":[],"entityRanges":[]},{"key":"8508","type":"unstyled","text":"的后验概率为","depth":0,"inlineStyleRanges":[],"entityRanges":[]},{"type":"atomic","text":"\\small\\begin{aligned} P(\\boldsymbol P, \\boldsymbol Q | \\boldsymbol R, \\sigma, \\sigma_P, \\sigma_Q) \u0026= \\frac{P(\\boldsymbol P, \\boldsymbol Q, \\boldsymbol R, \\sigma, \\sigma_P, \\sigma_Q)}{P(\\boldsymbol R, \\sigma, \\sigma_P, \\sigma_Q)} \\\\[2ex] \u0026= \\frac{P(\\boldsymbol R | \\boldsymbol P, \\boldsymbol Q, \\sigma)P(\\boldsymbol P, \\boldsymbol Q | \\sigma_P, \\sigma_Q) P(\\sigma, \\sigma_P, \\sigma_Q)}{P(\\boldsymbol R, \\sigma, \\sigma_P, \\sigma_Q)} \\\\[2ex] \u0026= C \\cdot P(\\boldsymbol R | \\boldsymbol P, \\boldsymbol Q, \\sigma)P(\\boldsymbol P|\\sigma_P)P(\\boldsymbol Q|\\sigma_Q) \\\\ \u0026= C\\prod_{i=1}^N\\prod_{j=1}^M p_\\mathcal{N}(r_{ij}| \\boldsymbol p_i^\\mathrm{T} \\boldsymbol q_j, \\sigma^2)^{I_{ij}} \\cdot \\prod_{i=1}^N p_\\mathcal{N}(\\boldsymbol p_i| \\boldsymbol 0, \\sigma_P^2 \\boldsymbol I) \\cdot \\prod_{j=1}^M p_\\mathcal{N}(\\boldsymbol q_j | \\boldsymbol 0, \\sigma_Q^2 \\boldsymbol I) \\end{aligned}","data":{"mathjax":true,"teX":"\\small\\begin{aligned} P(\\boldsymbol P, \\boldsymbol Q | \\boldsymbol R, \\sigma, \\sigma_P, \\sigma_Q) \u0026= \\frac{P(\\boldsymbol P, \\boldsymbol Q, \\boldsymbol R, \\sigma, \\sigma_P, \\sigma_Q)}{P(\\boldsymbol R, \\sigma, \\sigma_P, \\sigma_Q)} \\\\[2ex] \u0026= \\frac{P(\\boldsymbol R | \\boldsymbol P, \\boldsymbol Q, \\sigma)P(\\boldsymbol P, \\boldsymbol Q | \\sigma_P, \\sigma_Q) P(\\sigma, \\sigma_P, \\sigma_Q)}{P(\\boldsymbol R, \\sigma, \\sigma_P, \\sigma_Q)} \\\\[2ex] \u0026= C \\cdot P(\\boldsymbol R | \\boldsymbol P, \\boldsymbol Q, \\sigma)P(\\boldsymbol P|\\sigma_P)P(\\boldsymbol Q|\\sigma_Q) \\\\ \u0026= C\\prod_{i=1}^N\\prod_{j=1}^M p_\\mathcal{N}(r_{ij}| \\boldsymbol p_i^\\mathrm{T} \\boldsymbol q_j, \\sigma^2)^{I_{ij}} \\cdot \\prod_{i=1}^N p_\\mathcal{N}(\\boldsymbol p_i| \\boldsymbol 0, \\sigma_P^2 \\boldsymbol I) \\cdot \\prod_{j=1}^M p_\\mathcal{N}(\\boldsymbol q_j | \\boldsymbol 0, \\sigma_Q^2 \\boldsymbol I) \\end{aligned}"},"inlineStyleRanges":[],"entityRanges":[]},{"key":"8510","type":"unstyled","text":" 其中","depth":0,"inlineStyleRanges":[],"entityRanges":[]},{"type":"atomic","text":"C","data":{"mathjax":true,"teX":"C"},"inlineStyleRanges":[],"entityRanges":[]},{"key":"8512","type":"unstyled","text":"是常数。为了简化这一表达式,我们利用与MLE中相同的技巧,将上式取对数,从而把连乘变为求和:","depth":0,"inlineStyleRanges":[],"entityRanges":[]},{"type":"atomic","text":"\\begin{aligned} \\log P(\\boldsymbol P, \\boldsymbol Q | \\boldsymbol R, \\sigma, \\sigma_P, \\sigma_Q) \u0026= \\sum_{i=1}^N\\sum_{j=1}^M I_{ij} \\log p_\\mathcal{N}(r_{ij} | \\boldsymbol p_i^\\mathrm{T} \\boldsymbol q_j, \\sigma^2) + \\sum_{i=1}^N \\log p_\\mathcal{N}(\\boldsymbol p_i| \\boldsymbol 0, \\sigma_P^2 \\boldsymbol I) \\\\ \u0026\\quad+ \\sum_{j=1}^M \\log p_\\mathcal{N}(\\boldsymbol q_j | \\boldsymbol 0, \\sigma_Q^2 \\boldsymbol I) + \\log C \\end{aligned}","data":{"mathjax":true,"teX":"\\begin{aligned} \\log P(\\boldsymbol P, \\boldsymbol Q | \\boldsymbol R, \\sigma, \\sigma_P, \\sigma_Q) \u0026= \\sum_{i=1}^N\\sum_{j=1}^M I_{ij} \\log p_\\mathcal{N}(r_{ij} | \\boldsymbol p_i^\\mathrm{T} \\boldsymbol q_j, \\sigma^2) + \\sum_{i=1}^N \\log p_\\mathcal{N}(\\boldsymbol p_i| \\boldsymbol 0, \\sigma_P^2 \\boldsymbol I) \\\\ \u0026\\quad+ \\sum_{j=1}^M \\log p_\\mathcal{N}(\\boldsymbol q_j | \\boldsymbol 0, \\sigma_Q^2 \\boldsymbol I) + \\log C \\end{aligned}"},"inlineStyleRanges":[],"entityRanges":[]},{"key":"8514","type":"unstyled","text":" 再代入","depth":0,"inlineStyleRanges":[],"entityRanges":[]},{"type":"atomic","text":"p_\\mathcal{N}","data":{"mathjax":true,"teX":"p_\\mathcal{N}"},"inlineStyleRanges":[],"entityRanges":[]},{"key":"8516","type":"unstyled","text":"取对数后的表达式","depth":0,"inlineStyleRanges":[],"entityRanges":[]},{"type":"atomic","text":"\\log p_\\mathcal{N}(x|\\mu, \\sigma^2) = -\\frac12 \\log (2\\pi\\sigma^2) - \\frac{(x-\\mu)^2}{2\\sigma^2}","data":{"mathjax":true,"teX":"\\log p_\\mathcal{N}(x|\\mu, \\sigma^2) = -\\frac12 \\log (2\\pi\\sigma^2) - \\frac{(x-\\mu)^2}{2\\sigma^2}"},"inlineStyleRanges":[],"entityRanges":[]},{"key":"8518","type":"unstyled","text":" 计算得到","depth":0,"inlineStyleRanges":[],"entityRanges":[]},{"type":"atomic","text":"\\small\\begin{aligned} \\log P(\\boldsymbol P, \\boldsymbol Q | \\boldsymbol R, \\sigma, \\sigma_P, \\sigma_Q) \u0026= -\\frac12 \\log(2\\pi\\sigma^2) \\sum_{i=1}^N\\sum_{j=1}^M I_{ij} - \\frac{1}{2\\sigma^2}\\sum_{i=1}^N\\sum_{j=1}^M I_{ij}(r_{ij} - \\boldsymbol p_i^\\mathrm{T} \\boldsymbol q_j)^2 \\\\ \u0026\\quad-\\frac{Nd}{2} \\log(2\\pi\\sigma_P^2) - \\frac{1}{2\\sigma_P^2}\\sum_{i=1}^N \\boldsymbol p_i^\\mathrm{T} \\boldsymbol p_i \\\\ \u0026\\quad-\\frac{Md}{2} \\log(2\\pi\\sigma_Q^2) - \\frac{1}{2\\sigma_Q^2}\\sum_{j=1}^M \\boldsymbol q_j^\\mathrm{T} \\boldsymbol q_j + \\log C \\\\ \u0026= -\\frac{1}{\\sigma^2} \\left[\\frac12 \\sum_{i=1}^N\\sum_{j=1}^M I_{ij}(r_{ij} - \\boldsymbol p_i^\\mathrm{T} \\boldsymbol q_j)^2 + \\frac{\\lambda_P}{2} \\lVert \\boldsymbol P \\lVert_F^2 + \\frac{\\lambda_Q}{2} \\lVert \\boldsymbol Q \\lVert_F^2 \\right] + C_1 \\end{aligned}","data":{"mathjax":true,"teX":"\\small\\begin{aligned} \\log P(\\boldsymbol P, \\boldsymbol Q | \\boldsymbol R, \\sigma, \\sigma_P, \\sigma_Q) \u0026= -\\frac12 \\log(2\\pi\\sigma^2) \\sum_{i=1}^N\\sum_{j=1}^M I_{ij} - \\frac{1}{2\\sigma^2}\\sum_{i=1}^N\\sum_{j=1}^M I_{ij}(r_{ij} - \\boldsymbol p_i^\\mathrm{T} \\boldsymbol q_j)^2 \\\\ \u0026\\quad-\\frac{Nd}{2} \\log(2\\pi\\sigma_P^2) - \\frac{1}{2\\sigma_P^2}\\sum_{i=1}^N \\boldsymbol p_i^\\mathrm{T} \\boldsymbol p_i \\\\ \u0026\\quad-\\frac{Md}{2} \\log(2\\pi\\sigma_Q^2) - \\frac{1}{2\\sigma_Q^2}\\sum_{j=1}^M \\boldsymbol q_j^\\mathrm{T} \\boldsymbol q_j + \\log C \\\\ \u0026= -\\frac{1}{\\sigma^2} \\left[\\frac12 \\sum_{i=1}^N\\sum_{j=1}^M I_{ij}(r_{ij} - \\boldsymbol p_i^\\mathrm{T} \\boldsymbol q_j)^2 + \\frac{\\lambda_P}{2} \\lVert \\boldsymbol P \\lVert_F^2 + \\frac{\\lambda_Q}{2} \\lVert \\boldsymbol Q \\lVert_F^2 \\right] + C_1 \\end{aligned}"},"inlineStyleRanges":[],"entityRanges":[]},{"key":"8520","type":"unstyled","text":" 其中,","depth":0,"inlineStyleRanges":[],"entityRanges":[]},{"type":"atomic","text":"\\lambda_P = \\sigma^2/\\sigma_P^2","data":{"mathjax":true,"teX":"\\lambda_P = \\sigma^2/\\sigma_P^2"},"inlineStyleRanges":[],"entityRanges":[]},{"key":"8522","type":"unstyled","text":",","depth":0,"inlineStyleRanges":[],"entityRanges":[]},{"type":"atomic","text":"\\lambda_Q = \\sigma^2 / \\sigma_Q^2","data":{"mathjax":true,"teX":"\\lambda_Q = \\sigma^2 / \\sigma_Q^2"},"inlineStyleRanges":[],"entityRanges":[]},{"key":"8524","type":"unstyled","text":",","depth":0,"inlineStyleRanges":[],"entityRanges":[]},{"type":"atomic","text":"C_1","data":{"mathjax":true,"teX":"C_1"},"inlineStyleRanges":[],"entityRanges":[]},{"key":"8526","type":"unstyled","text":"是与参数","depth":0,"inlineStyleRanges":[],"entityRanges":[]},{"type":"atomic","text":"\\boldsymbol P","data":{"mathjax":true,"teX":"\\boldsymbol P"},"inlineStyleRanges":[],"entityRanges":[]},{"key":"8528","type":"unstyled","text":"和","depth":0,"inlineStyleRanges":[],"entityRanges":[]},{"type":"atomic","text":"\\boldsymbol Q","data":{"mathjax":true,"teX":"\\boldsymbol Q"},"inlineStyleRanges":[],"entityRanges":[]},{"key":"8530","type":"unstyled","text":"无关的常数。根据最大似然的思想,我们应当最大化上面计算出的对数概率。因此,定义损失函数为","depth":0,"inlineStyleRanges":[],"entityRanges":[]},{"type":"atomic","text":"J(\\boldsymbol P, \\boldsymbol Q) = \\frac12 \\sum_{i=1}^N\\sum_{j=1}^M I_{ij}(r_{ij} - \\boldsymbol p_i^\\mathrm{T} \\boldsymbol q_j)^2 + \\frac{\\lambda_P}{2} \\lVert \\boldsymbol P \\lVert_F^2 + \\frac{\\lambda_Q}{2} \\lVert \\boldsymbol Q \\lVert_F^2","data":{"mathjax":true,"teX":"J(\\boldsymbol P, \\boldsymbol Q) = \\frac12 \\sum_{i=1}^N\\sum_{j=1}^M I_{ij}(r_{ij} - \\boldsymbol p_i^\\mathrm{T} \\boldsymbol q_j)^2 + \\frac{\\lambda_P}{2} \\lVert \\boldsymbol P \\lVert_F^2 + \\frac{\\lambda_Q}{2} \\lVert \\boldsymbol Q \\lVert_F^2"},"inlineStyleRanges":[],"entityRanges":[]},{"key":"8532","type":"unstyled","text":" 于是,最大化对数概率就等价于最小化损失函数","depth":0,"inlineStyleRanges":[],"entityRanges":[]},{"type":"atomic","text":"J(\\boldsymbol P, \\boldsymbol Q)","data":{"mathjax":true,"teX":"J(\\boldsymbol P, \\boldsymbol Q)"},"inlineStyleRanges":[],"entityRanges":[]},{"key":"8534","type":"unstyled","text":"。并且,这一损失函数恰好为目标值","depth":0,"inlineStyleRanges":[],"entityRanges":[]},{"type":"atomic","text":"r_{ij}","data":{"mathjax":true,"teX":"r_{ij}"},"inlineStyleRanges":[],"entityRanges":[]},{"key":"8536","type":"unstyled","text":"与参数内积","depth":0,"inlineStyleRanges":[],"entityRanges":[]},{"type":"atomic","text":"\\boldsymbol p_i^\\mathrm{T} \\boldsymbol q_i","data":{"mathjax":true,"teX":"\\boldsymbol p_i^\\mathrm{T} \\boldsymbol q_i"},"inlineStyleRanges":[],"entityRanges":[]},{"key":"8538","type":"unstyled","text":"之间的平方损失,再加上","depth":0,"inlineStyleRanges":[],"entityRanges":[]},{"type":"atomic","text":"L_2","data":{"mathjax":true,"teX":"L_2"},"inlineStyleRanges":[],"entityRanges":[]},{"key":"8540","type":"unstyled","text":"正则化的形式。由于向量内积是双线性函数,PMF模型也属于双线性模型的一种。","depth":0,"inlineStyleRanges":[],"entityRanges":[]},{"key":"8541","type":"unstyled","text":"  将损失函数对","depth":0,"inlineStyleRanges":[],"entityRanges":[]},{"type":"atomic","text":"\\boldsymbol p_i","data":{"mathjax":true,"teX":"\\boldsymbol p_i"},"inlineStyleRanges":[],"entityRanges":[]},{"key":"8543","type":"unstyled","text":"求导,得到","depth":0,"inlineStyleRanges":[],"entityRanges":[]},{"type":"atomic","text":"\\nabla_{\\boldsymbol p_i} J(\\boldsymbol P, \\boldsymbol Q) = \\sum_{j=1}^M I_{ij}(r_{ij} - \\boldsymbol p_i^\\mathrm{T} \\boldsymbol q_j) \\boldsymbol q_j - \\lambda_P \\boldsymbol p_i","data":{"mathjax":true,"teX":"\\nabla_{\\boldsymbol p_i} J(\\boldsymbol P, \\boldsymbol Q) = \\sum_{j=1}^M I_{ij}(r_{ij} - \\boldsymbol p_i^\\mathrm{T} \\boldsymbol q_j) \\boldsymbol q_j - \\lambda_P \\boldsymbol p_i"},"inlineStyleRanges":[],"entityRanges":[]},{"key":"8545","type":"unstyled","text":" 令梯度为零,解得","depth":0,"inlineStyleRanges":[],"entityRanges":[]},{"type":"atomic","text":"\\boldsymbol p_i = \\left(\\sum_{j=1}^MI_{ij}\\boldsymbol q_j\\boldsymbol q_j^\\mathrm{T} + \\lambda_P \\boldsymbol I\\right)^{-1} \\left(\\sum_{j=1}^M I_{ij}r_{ij}\\boldsymbol q_j\\right)","data":{"mathjax":true,"teX":"\\boldsymbol p_i = \\left(\\sum_{j=1}^MI_{ij}\\boldsymbol q_j\\boldsymbol q_j^\\mathrm{T} + \\lambda_P \\boldsymbol I\\right)^{-1} \\left(\\sum_{j=1}^M I_{ij}r_{ij}\\boldsymbol q_j\\right)"},"inlineStyleRanges":[],"entityRanges":[]},{"key":"8547","type":"unstyled","text":"  在正则化约束一节中我们讲过,根据矩阵相关的理论,只要","depth":0,"inlineStyleRanges":[],"entityRanges":[{"key":15,"offset":3,"length":5}]},{"type":"atomic","text":"\\lambda_P","data":{"mathjax":true,"teX":"\\lambda_P"},"inlineStyleRanges":[],"entityRanges":[]},{"key":"8549","type":"unstyled","text":"足够大,上式的第一项逆矩阵就总是存在。同理,对","depth":0,"inlineStyleRanges":[],"entityRanges":[]},{"type":"atomic","text":"\\boldsymbol q_j","data":{"mathjax":true,"teX":"\\boldsymbol q_j"},"inlineStyleRanges":[],"entityRanges":[]},{"key":"8551","type":"unstyled","text":"也有类似的结果。因此,我们可以通过如上形式的","depth":0,"inlineStyleRanges":[],"entityRanges":[]},{"type":"atomic","text":"J(\\boldsymbol P, \\boldsymbol Q)","data":{"mathjax":true,"teX":"J(\\boldsymbol P, \\boldsymbol Q)"},"inlineStyleRanges":[],"entityRanges":[]},{"key":"8553","type":"unstyled","text":"来求解参数","depth":0,"inlineStyleRanges":[],"entityRanges":[]},{"type":"atomic","text":"\\boldsymbol P","data":{"mathjax":true,"teX":"\\boldsymbol P"},"inlineStyleRanges":[],"entityRanges":[]},{"key":"8555","type":"unstyled","text":"与","depth":0,"inlineStyleRanges":[],"entityRanges":[]},{"type":"atomic","text":"\\boldsymbol Q","data":{"mathjax":true,"teX":"\\boldsymbol Q"},"inlineStyleRanges":[],"entityRanges":[]},{"key":"8557","type":"unstyled","text":"。在参数的高斯分布假设下,我们自然导出了带有","depth":0,"inlineStyleRanges":[],"entityRanges":[]},{"type":"atomic","text":"L_2","data":{"mathjax":true,"teX":"L_2"},"inlineStyleRanges":[],"entityRanges":[]},{"key":"8559","type":"unstyled","text":"正则化的MF模型,这并不是偶然。我们会在概率图模型中进一步阐释其中的原理。","depth":0,"inlineStyleRanges":[],"entityRanges":[]},{"key":"8560","type":"blockquote","text":" 附:以上文中的数据集及相关资源下载地址:\n 链接:https://pan.quark.cn/s/0f31109b2b13\n 提取码:gTBK","depth":0,"inlineStyleRanges":[{"offset":1,"length":1,"style":"BOLD"}],"entityRanges":[{"key":16,"offset":26,"length":35}]}],"entityMap":{"0":{"type":"LINK","mutability":"MUTABLE","data":{"url":"https://gitcode.com/Morse_Chen/Python_machine_learning"}},"1":{"type":"LINK","mutability":"MUTABLE","data":{"url":"https://cloud.tencent.com/developer/article/2490791"}},"2":{"type":"LINK","mutability":"MUTABLE","data":{"url":"https://cloud.tencent.com/developer/article/2490776"}},"3":{"type":"LINK","mutability":"MUTABLE","data":{"url":"https://cloud.tencent.com/developer/article/2490776"}},"4":{"type":"IMAGE","mutability":"IMMUTABLE","data":{"name":null,"blockWidth":1479,"blockHeight":483,"imageUrl":"https://developer.qcloudimg.com/http-save/yehe-11457362/240b595d4c80e69673e4490b2a24fb60.png"}},"5":{"type":"IMAGE","mutability":"IMMUTABLE","data":{"name":null,"blockWidth":1479,"blockHeight":873,"imageUrl":"https://developer.qcloudimg.com/http-save/yehe-11457362/dde28bd6f3bd359d71bd86b6e78f6338.png"}},"6":{"type":"LINK","mutability":"MUTABLE","data":{"url":"https://movielens.org/"}},"7":{"type":"IMAGE","mutability":"IMMUTABLE","data":{"name":null,"blockWidth":1252,"blockHeight":425,"imageUrl":"https://developer.qcloudimg.com/http-save/yehe-11457362/b558cc7eb01e06e8c846c6816ae32449.png"}},"8":{"type":"IMAGE","mutability":"IMMUTABLE","data":{"name":null,"blockWidth":1371,"blockHeight":440,"imageUrl":"https://developer.qcloudimg.com/http-save/yehe-11457362/ae6f61935c8d95571420fa5a5ee7e819.png"}},"9":{"type":"IMAGE","mutability":"IMMUTABLE","data":{"name":null,"blockWidth":684,"blockHeight":79,"imageUrl":"https://developer.qcloudimg.com/http-save/yehe-11457362/bd703f2ddab06be1b0b6a6fd37bcd661.png"}},"10":{"type":"IMAGE","mutability":"IMMUTABLE","data":{"name":null,"blockWidth":1479,"blockHeight":171,"imageUrl":"https://developer.qcloudimg.com/http-save/yehe-11457362/1b9936a32569625ca4ae18f2408dac47.png"}},"11":{"type":"IMAGE","mutability":"IMMUTABLE","data":{"name":null,"blockWidth":256,"blockHeight":80,"imageUrl":"https://developer.qcloudimg.com/http-save/yehe-11457362/ec7274cf206431985810125b8e23e4be.png"}},"12":{"type":"LINK","mutability":"MUTABLE","data":{"url":"https://cloud.tencent.com/developer/article/2490776"}},"13":{"type":"IMAGE","mutability":"IMMUTABLE","data":{"name":null,"blockWidth":1245,"blockHeight":127,"imageUrl":"https://developer.qcloudimg.com/http-save/yehe-11457362/1b252155ad71fe9dc4617579aa5b1dda.png"}},"14":{"type":"IMAGE","mutability":"IMMUTABLE","data":{"name":null,"blockWidth":1087,"blockHeight":448,"imageUrl":"https://developer.qcloudimg.com/http-save/yehe-11457362/863daa24535df7f33075f94c952ade01.png"}},"15":{"type":"LINK","mutability":"MUTABLE","data":{"url":"https://cloud.tencent.com/developer/article/2490796"}},"16":{"type":"LINK","mutability":"MUTABLE","data":{"url":"https://pan.quark.cn/s/0f31109b2b13"}}}},"createTime":1737559138,"ext":{"closeTextLink":0,"comment_ban":0,"description":"","focusRead":0},"favNum":0,"isOriginal":0,"likeNum":0,"pic":"https://developer.qcloudimg.com/http-save/yehe-11457362/240b595d4c80e69673e4490b2a24fb60.png","plain":"  从本文开始,我们介绍参数化模型中的非线性模型。在前几篇文章中,我们介绍了线性回归与逻辑斯谛回归模型。这两个模型都有一个共同的特征:包含线性预测因子\n\\boldsymbol\\theta^\\mathrm{T}\\boldsymbol x\n。将该因子看作\n\\boldsymbol x\n的函数,如果输入\n\\boldsymbol x\n变为原来的\n\\lambda\n倍,那么输出为 \n\\boldsymbol\\theta^\\mathrm{T}(\\lambda \\boldsymbol x) = \\lambda \\boldsymbol\\theta^\\mathrm{T} \\boldsymbol x\n,也变成原来的\n\\lambda\n倍。在逻辑斯谛回归的扩展阅读中,我们将这一类模型都归为广义线性模型。然而,此类模型所做的线性假设在许多任务上并不适用,我们需要其他参数假设来导出更合适的模型。本文首先讲解在推荐系统领域很常用的双线性模型(bilinear model)。\n  双线性模型虽然名称中包含“线性模型”,但并不属于线性模型或广义线性模型,其正确的理解应当是“双线性”模型。在数学中,双线性的含义为,二元函数固定任意一个自变量时,函数关于另一个自变量线性。具体来说,二元函数 \nf \\colon \\mathbb{R}^n \\times \\mathbb{R}^m \\to \\mathbb{R}^l\n 是双线性函数,当且仅当对任意 \n\\boldsymbol u, \\boldsymbol v \\in \\mathbb{R}^n, \\boldsymbol s, \\boldsymbol t \\in \\mathbb{R}^m, \\lambda \\in \\mathbb{R}\n 都有:\nf(\\boldsymbol u, \\boldsymbol s + \\boldsymbol t) = f(\\boldsymbol u, \\boldsymbol s) + f(\\boldsymbol u, \\boldsymbol t)\nf(\\boldsymbol u, \\lambda \\boldsymbol s) = \\lambda f(\\boldsymbol u, \\boldsymbol s)\nf(\\boldsymbol u + \\boldsymbol v, \\boldsymbol s) = f(\\boldsymbol u, \\boldsymbol s) + f(\\boldsymbol v, \\boldsymbol s)\nf(\\lambda \\boldsymbol u, \\boldsymbol s) = \\lambda f(\\boldsymbol u, \\boldsymbol s)\n  最简单的双线性函数的例子是向量内积 \n\\langle \\cdot, \\cdot \\rangle\n,我们按定义验证前两条性质:\n\\small\\langle \\boldsymbol u, \\boldsymbol s + \\boldsymbol t \\rangle = \\sum_i u_i(s_i+t_i) = \\sum_i(u_is_i + u_it_i) = \\sum_i u_is_i + \\sum_i u_it_i = \\langle \\boldsymbol u,\\boldsymbol s \\rangle + \\langle \\boldsymbol u, \\boldsymbol t\\rangle\n\\small\\langle \\boldsymbol u, \\lambda \\boldsymbol s \\rangle = \\sum_i u_i(\\lambda s_i) = \\lambda \\sum_i u_is_i = \\lambda \\langle \\boldsymbol u, \\boldsymbol s \\rangle\n后两条性质由对称性,显然也是成立的。而向量的加法就不是双线性函数。虽然加法满足第1、3条性质,但对第2条,如果 \n\\boldsymbol u \\neq \\boldsymbol 0\n 且 \n\\lambda\\neq 1\n,则有\n\\boldsymbol u + \\lambda \\boldsymbol s \\neq \\lambda (\\boldsymbol u + \\boldsymbol s)\n  与线性模型类似,双线性模型并非指模型整体具有双线性性质,而是指其包含双线性因子。该特性赋予模型拟合一些非线性数据模式的能力,从而得到更加精准预测性能。接下来,我们以推荐系统场景为例,介绍两个基础的双线性模型:矩阵分解模型和因子分解机。\n一、矩阵分解\n 矩阵分解(matrix factorization,MF)是推荐系统中评分预测(rating prediction)的常用模型,其任务为根据用户和商品已有的评分来预测用户对其他商品的评分。为了更清晰地解释MF模型的任务场景,我们以用户对电影的评分为例进行详细说明。如图1所示,设想有\nN\n个用户和\nM\n部电影,每个用户对一些电影按自己的喜好给出了评分。现在,我们的目标是需要为用户从他没有看过的电影中,向他推荐几部他最有可能喜欢看的电影。理想情况下,如果这个用户对所有电影都给出了评分,那么这个任务就变为从已有评分的电影中进行推荐——直接按照用户打分的高低排序。但实际情况下,在浩如烟海的电影中,用户一般只对很小一部分电影做了评价。因此,我们需要从用户已经做出的评价中推测用户为其他电影的打分,再将电影按推测的打分排序,从中选出最高的几部推荐给该用户。\n\n 图1 用户对电影的评分矩阵 \n  我们继续从生活经验出发来思考这一问题。假设某用户为一部电影打了高分,那么可以合理猜测,该用户喜欢这部电影的某些特征。例如,电影的类型是悬疑、爱情、战争或是其他种类;演员、导演和出品方分别是哪些;叙述的故事发生在什么年代;时长是多少,等等。假如我们有一个电影特征库,可以将每部电影用一个特征向量表示。向量的每一维代表一种特征,值代表电影具有这一特征的程度。同时,我们还可以构建一个用户画像库,包含每个用户更偏好哪些类型的特征,以及偏好的程度。假设特征的个数是\nd\n,那么所有电影的特征构成的矩阵是 \n\\boldsymbol P \\in \\mathbb{R}^{M \\times d}\n,用户喜好构成的矩阵是 \n\\boldsymbol Q \\in \\mathbb{R}^{N \\times d}\n。图2给出了两个矩阵的示例。\n\n 图2 电影和用户的隐变量矩阵 \n  需要说明的是,我们实际上分解出的矩阵只是某种交互结果背后的隐变量,并不一定对应真实的特征。这样,我们就把一个用户与电影交互的矩阵拆分成了用户、电影两个矩阵,并且这两个矩阵中包含了更多的信息。最后,用这两个矩阵的乘积 \n\\boldsymbol R = \\boldsymbol P^\\mathrm{T} \\boldsymbol Q\n 可以还原出用户对电影的评分。即使用户对某部电影并没有打分,我们也能通过矩阵乘积,根据用户喜欢的特征和该电影具有的特征,预测出用户对电影的喜好程度。\n 小故事\n   矩阵分解和下面要介绍的因子分解机都属于推荐系统(recommender system)领域的算法。我们在日常使用软件、浏览网站的时候,软件或网站会记录下来我们感兴趣的内容,并在接下来更多地为我们推送同类型的内容。例如,如果我们在购物网站上浏览过牙刷,它就可能再给我们推荐牙刷、毛巾、脸盆等等相关性比较大的商品,这就是推荐系统的作用。推荐系统希望根据用户的特征、商品的特征、用户和商品的交互历史,为用户做出更符合个人喜好的个性化推荐,提高用户的浏览体验,同时为公司带来更高的经济效益。\n   机器学习界开始大量关注推荐系统任务源自美国奈飞电影公司(Netflix)于2006年举办的世界范围的推荐系统算法大赛。该比赛旨在探寻一种算法能更加精确地预测48万名用户对1.7万部电影的打分,如果某个参赛队伍给出的评分预测精度超过了基线算法10%,就可以获得100万美元的奖金。该竞赛在1年来吸引了来自全球186个国家的超过4万支队伍的参加,经过3年的“马拉松”竞赛,最终由一支名为BellKor’s Pragmatic Chaos的联合团队摘得桂冠。而团队中时任雅虎研究员的耶胡达·科伦(Yehuda Koren)则在后来成为了推荐系统领域最为著名的科学家之一,他使用的基于矩阵分解的双线性模型则成为了那个时代推荐系统的主流模型。\n \n  实际上,我们通常能获取到的并不是\n\\boldsymbol P\n和\n\\boldsymbol Q\n,而是打分的结果\n\\boldsymbol R\n。并且由于一个用户只会对极其有限的一部分电影打分,矩阵\n\\boldsymbol R\n是非常稀疏的,绝大多数元素都是空白。因此,我们需要从\n\\boldsymbol R\n有限的元素中推测出用户的喜好\n\\boldsymbol P\n和电影的特征\n\\boldsymbol Q\n。MF模型利用矩阵分解的技巧完成了这一任务。设第\ni\n个用户的偏好向量是\n\\boldsymbol p_i\n,第\nj\n部电影的特征向量是\n\\boldsymbol q_j\n,其维度都是特征数\nd\n。MF假设用户\ni\n对电影\nj\n的评分\nr_{ij}\n是用户偏好与电影特征的内积,即 \nr_{ij} = \\boldsymbol p_i^\\mathrm{T}\\boldsymbol q_j\n。在本文开始已经讲过,向量内积是双线性函数,这也是MF模型属于双线性模型的原因。\n  既然MF的目标是通过特征还原评分矩阵\n\\boldsymbol R\n,我们就以还原结果和\n\\boldsymbol R\n中已知部分的差距作为损失函数。记 \nI_{ij} = \\mathbb{I}(r_{ij}\\text{存在})\n,即当用户为电影打过分时\nI_{ij}\n为\n1\n,否则为\n0\n。那么损失函数可以写为\nJ(\\boldsymbol P, \\boldsymbol Q) = \\sum_{i=1}^N\\sum_{j=1}^M I_{ij}\\mathcal{L}(\\boldsymbol p_i^\\mathrm{T}\\boldsymbol q_j, r_{ij})\n 式中,\n\\mathcal{L}(\\boldsymbol p_i^\\mathrm{T}\\boldsymbol q_j, r_{ij})\n 是模型预测和真实值之间的损失。一般情况下,我们就选用最简单的MSE作为损失,那么优化目标为\n\\min_{\\boldsymbol P, \\boldsymbol Q} J(\\boldsymbol P, \\boldsymbol Q) = \\frac12\\sum_{i=1}^N\\sum_{j=1}^M I_{ij} (\\boldsymbol p_i^\\mathrm{T}\\boldsymbol q_j - r_{ij})^2\n 再加入对\n\\boldsymbol P\n和\n\\boldsymbol Q\n的\nL_2\n正则化约束,就得到总的优化目标:\n\\min_{\\boldsymbol P, \\boldsymbol Q} J(\\boldsymbol P, \\boldsymbol Q) = \\frac12\\sum_{i=1}^N\\sum_{j=1}^M I_{ij} \\left((\\boldsymbol p_i^\\mathrm{T}\\boldsymbol q_j - r_{ij})^2 + \\lambda(\\|\\boldsymbol p_i\\|^2 +\\\\|\\boldsymbol q_j\\|^2)\\right)\n 需要注意,这里的\nL_2\n约束并非对整个矩阵\n\\boldsymbol P\n或者\n\\boldsymbol Q\n而言。我们知道,正则化的目的是通过限制参数的规模来约束模型的复杂度,使模型的复杂度与数据中包含的信息相匹配。以用户为例,假设不同用户直接的评分是独立的。如果用户甲给10部电影打了分,用户乙给2部电影打了分,那么数据中关于甲的信息就比乙多。反映到正则化上,对甲的参数的约束强度也应当比乙大。因此,总损失函数中\n\\boldsymbol p_i\n的正则化系数是\n\\frac{\\lambda}{2}\\sum\\limits_{j=1}^M I_{ij}\n,在\n\\frac{\\lambda}{2}\n的基础上又乘以用户\ni\n评分的数量。对电影向量\n\\boldsymbol q_j\n也是同理。上式对\n\\boldsymbol p_{ik}\n和\n\\boldsymbol q_{jk}\n的梯度分别为\n \n\\begin{aligned} \\nabla_{\\boldsymbol p_{ik}} J(\\boldsymbol P, \\boldsymbol Q) \u0026= I_{ij} \\left((\\boldsymbol p_i^\\mathrm{T}\\boldsymbol q_j - r_{ij})\\boldsymbol q_{jk} + \\lambda\\boldsymbol p_{ik} \\right) \\\\[1ex] \\nabla_{\\boldsymbol q_{jk}} J(\\boldsymbol P, \\boldsymbol Q) \u0026= I_{ij} \\left((\\boldsymbol p_i^\\mathrm{T}\\boldsymbol q_j - r_{ij})\\boldsymbol p_{ik} + \\lambda\\boldsymbol q_{jk} \\right) \\end{aligned}\n可以发现,上面\n\\boldsymbol p_{ik}\n梯度中含有\n\\boldsymbol q_{jk}\n,而\n\\boldsymbol q_{jk}\n的梯度中含有\n\\boldsymbol p_{ik}\n,两者互相包含,这是由双线性函数的性质决定的,也是双线性模型的一个重要特点。\n二、动手实现矩阵分解\n  下面,我们来动手实现矩阵分解模型。我们选用的数据集是推荐系统中的常用数据集MovieLens,其包含从电影评价网站MovieLens中收集的真实用户对电影的打分信息。简单起见,我们采用其包含来自943个用户对1682部电影的10万条样本的版本MovieLens-100k。我们对原始的数据进行了一些处理,现在数据集的每一行有3个数,依次表示用户编号\ni\n、电影编号\nj\n、用户对电影的打分\nr_{ij}\n,其中 \n1\\le r_{ij}\\le5\n 且三者都是整数。表1展示了数据集movielens_100k.csv中的3个样本,大家也可以从网站上下载更大的数据集,测试模型的预测效果。\n\n 表1 MovieLens-100k数据集示例 \n用户编号\t电影编号\t评分\n196\t242\t3\n186\t302\t3\n22\t377\t1\n\n!pip install tqdm\nimport numpy as np\nimport matplotlib.pyplot as plt\nfrom tqdm import tqdm # 进度条工具\n\ndata = np.loadtxt('movielens_100k.csv', delimiter=',', dtype=int)\nprint('数据集大小:', len(data))\n# 用户和电影都是从1开始编号的,我们将其转化为从0开始\ndata[:, :2] = data[:, :2] - 1\n\n# 计算用户和电影数量\nusers = set()\nitems = set()\nfor i, j, k in data:\n users.add(i)\n items.add(j)\nuser_num = len(users)\nitem_num = len(items)\nprint(f'用户数:{user_num},电影数:{item_num}')\n\n# 设置随机种子,划分训练集与测试集\nnp.random.seed(0)\n\nratio = 0.8\nsplit = int(len(data) * ratio)\nnp.random.shuffle(data)\ntrain = data[:split]\ntest = data[split:]\n\n# 统计训练集中每个用户和电影出现的数量,作为正则化的权重\nuser_cnt = np.bincount(train[:, 0], minlength=user_num)\nitem_cnt = np.bincount(train[:, 1], minlength=item_num)\nprint(user_cnt[:10])\nprint(item_cnt[:10])\n\n# 用户和电影的编号要作为下标,必须保存为整数\nuser_train, user_test = train[:, 0], test[:, 0]\nitem_train, item_test = train[:, 1], test[:, 1]\ny_train, y_test = train[:, 2], test[:, 2]\n  然后,我们将MF模型定义成类,在其中实现梯度计算方法。根据上面的推导,模型的参数是用户喜好 \n\\boldsymbol P \\in \\mathbb{R}^{N\\times d}\n 和电影特征 \n\\boldsymbol Q \\in \\mathbb{R}^{M \\times d}\n,其中特征数\nd\n是我们自己指定的超参数。在参数初始化部分,考虑到最终电影的得分都是正数,我们将参数都初始化为1。\nclass MF:\n \n def __init__(self, N, M, d):\n # N是用户数量,M是电影数量,d是特征维度\n # 定义模型参数\n self.user_params = np.ones((N, d))\n self.item_params = np.ones((M, d))\n \n def pred(self, user_id, item_id):\n # 预测用户user_id对电影item_id的打分\n # 获得用户偏好和电影特征\n user_param = self.user_params[user_id]\n item_param = self.item_params[item_id]\n # 返回预测的评分\n rating_pred = np.sum(user_param * item_param, axis=1)\n return rating_pred\n \n def update(self, user_grad, item_grad, lr):\n # 根据参数的梯度更新参数\n self.user_params -= lr * user_grad\n self.item_params -= lr * item_grad\n  下面定义训练函数,以SGD算法对MF模型的参数进行优化。对于回归任务来说,我们仍然以MSE作为损失函数,RMSE作为的评价指标。在训练的同时,我们将其记录下来,供最终绘制训练曲线使用。\ndef train(model, learning_rate, lbd, max_training_step, batch_size):\n train_losses = []\n test_losses = []\n batch_num = int(np.ceil(len(user_train) / batch_size))\n with tqdm(range(max_training_step * batch_num)) as pbar:\n for epoch in range(max_training_step):\n # 随机梯度下降\n train_rmse = 0\n for i in range(batch_num):\n # 获取当前批量\n st = i * batch_size\n ed = min(len(user_train), st + batch_size)\n user_batch = user_train[st: ed]\n item_batch = item_train[st: ed]\n y_batch = y_train[st: ed]\n # 计算模型预测\n y_pred = model.pred(user_batch, item_batch)\n # 计算梯度\n P = model.user_params\n Q = model.item_params\n errs = y_batch - y_pred\n P_grad = np.zeros_like(P)\n Q_grad = np.zeros_like(Q)\n for user, item, err in zip(user_batch, item_batch, errs):\n P_grad[user] = P_grad[user] - err * Q[item] + lbd * P[user]\n Q_grad[item] = Q_grad[item] - err * P[user] + lbd * Q[item]\n model.update(P_grad / len(user_batch), Q_grad / len(user_batch), learning_rate)\n \n train_rmse += np.mean(errs ** 2)\n # 更新进度条\n pbar.set_postfix({\n 'Epoch': epoch,\n 'Train RMSE': f'{np.sqrt(train_rmse / (i + 1)):.4f}',\n 'Test RMSE': f'{test_losses[-1]:.4f}' if test_losses else None\n })\n pbar.update(1)\n\n # 计算 RMSE 损失\n train_rmse = np.sqrt(train_rmse / len(user_train))\n train_losses.append(train_rmse)\n y_test_pred = model.pred(user_test, item_test)\n test_rmse = np.sqrt(np.mean((y_test - y_test_pred) ** 2))\n test_losses.append(test_rmse)\n \n return train_losses, test_losses\n  最后,我们定义超参数,并实现MF模型的训练部分,并将损失随训练的变化曲线绘制出来。\n# 超参数\nfeature_num = 16 # 特征数\nlearning_rate = 0.1 # 学习率\nlbd = 1e-4 # 正则化强度\nmax_training_step = 30\nbatch_size = 64 # 批量大小\n\n# 建立模型\nmodel = MF(user_num, item_num, feature_num)\n# 训练部分\ntrain_losses, test_losses = train(model, learning_rate, lbd, max_training_step, batch_size)\n\nplt.figure()\nx = np.arange(max_training_step) + 1\nplt.plot(x, train_losses, color='blue', label='train loss')\nplt.plot(x, test_losses, color='red', ls='--', label='test loss')\nplt.xlabel('Epoch')\nplt.ylabel('RMSE')\nplt.legend()\nplt.show()\n  为了直观地展示模型效果,我们输出一些模型在测试集中的预测结果与真实结果进行对比。上面我们训练得到的模型在测试集上的RMSE大概是1左右,所以这里模型预测的评分与真实评分大致也差1。\ny_test_pred = model.pred(user_test, item_test)\nprint(y_test_pred[:10]) # 把张量转换为numpy数组\nprint(y_test[:10])\n三、因子分解机\n  本节我们介绍推荐系统中用户行为预估的另一个常用模型:因子分解机(factorization machines,FM)。FM的应用场景与MF有一些区别,MF的目标是从交互的结果中计算出用户和物品的特征;而FM则正好相反,希望通过物品的特征和某个用户点击这些物品的历史记录,预测该用户点击其他物品的概率,即点击率(click through rate,CTR)。由于被点击和未被点击是一个二分类问题,CTR预估可以用逻辑斯谛回归模型来解决。在逻辑斯谛回归中,线性预测子\n\\boldsymbol\\theta^\\mathrm{T} \\boldsymbol x\n为数据中的每一个特征\nx_i\n赋予权重\n\\theta_i\n,由此来判断数据的分类。然而,这样的线性参数化假设中,输入的不同特征\nx_i\n与\nx_j\n之间并没有运算,相当于假设不同特征之间是独立的。而在现实中,输入数据的不同特征之间有可能存在关联。例如,假设我们将一张照片中包含的物品作为其特征,那么“红灯笼”与“对联”这两个特征就很可能不是独立的,因为它们都是与春节相关联的意象。因此,作为对线性的逻辑斯谛回归模型的改进,我们进一步引入双线性部分,将输入的不同特征之间的联系也考虑进来。改进后的预测函数为\n\\hat y(\\boldsymbol x) = \\theta_0 + \\sum_{i=1}^d \\theta_i x_i + \\sum_{i=1}^{d-1}\\sum_{j=i+1}^d w_{ij}x_ix_j\n 其中,\n\\theta_0\n是常数项,\nw_{ij}\n是权重。上式的第二项将所有不同特征\nx_i\n与\nx_j\n相乘,从而可以通过权重\nw_{ij}\n调整特征组合\n(i,j)\n对预测结果的影响。将上式改写为向量形式,为:\n\\hat y(\\boldsymbol x) = \\theta_0 + \\boldsymbol\\theta^\\mathrm{T} \\boldsymbol x + \\frac12 \\boldsymbol x^\\mathrm{T} \\boldsymbol W \\boldsymbol x\n 式中,矩阵\n\\boldsymbol W\n是对称的,即 \nw_{ij} = w_{ji}\n。此外,由于我们已经考虑了单独特征的影响,所以不需要将特征与其自身进行交叉,引入\nx_i^2\n项,从而\n\\boldsymbol W\n的对角线上元素都为\n0\n。大家可以自行验证,形如 \nf(\\boldsymbol x, \\boldsymbol y) = \\boldsymbol x^\\mathrm{T} \\boldsymbol A \\boldsymbol y\n 的函数是双线性函数。双线性模型由于考虑了不同特征之间的关系,理论上比线性模型要更准确。然而,在实际应用中,该方法面临着稀疏特征的挑战。\n  在用向量表示某一事物的离散特征时,一种常用的方法是独热编码(one-hot encoding)。这一方法中,向量的每一维都对应特征的一种取值,样本所具有的特征所在的维度值为1,其他维度为0。如图3所示,某物品的产地是北京、上海、广州、深圳其中之一,为了表示该物品的产地,我们将其编码为4维向量,4个维度依次对应产地北京、上海、广州、深圳。当物品产地为北京时,其特征向量就是\n(1,0,0,0)\n;物品产地为上海时,其特征向量就是\n(0,1,0,0)\n。如果物品有多个特征,就把每个特征编码成的向量依次拼接起来,形成多域独热编码(multi-field one-hot encoding)。假如某种食品产地是上海、生产日期在2月份、食品种类是乳制品,那么它的编码就如图3所示。\n\n 图3 多域独热编码示意 \n  像这样的独热特征向量往往维度非常高,但只有少数几个位置是1,其他位置都是0,稀疏程度很高。当我们训练上述的模型时,需要对参数\nw_{ij}\n求导,结果为 \n\\displaystyle \\frac{\\partial \\hat y}{\\partial w_{ij}} = x_ix_j\n。由于特征向量的稀疏性,大多数情况下都有 \nx_ix_j=0\n,无法对参数\nw_{ij}\n进行更新。为了解决这一问题,Steffen Rendle提出了因子分解机模型。该方法将权重矩阵\n\\boldsymbol W\n分解成 \n\\boldsymbol W = \\boldsymbol V \\boldsymbol V^\\mathrm{T}\n,其中 \n\\boldsymbol V \\in \\mathbb{R}^{d \\times k}\n。根据矩阵分解的相关理论,当\n\\boldsymbol W\n满足某些性质且\nk\n足够大时,我们总可以找到分解矩阵\n\\boldsymbol V\n。即使条件不满足,我们也可以用近似分解 \n\\boldsymbol W \\approx \\boldsymbol V\\boldsymbol V^\\mathrm{T}\n来代替。设\n\\boldsymbol V\n的行向量是 \n\\boldsymbol v_1, \\ldots, \\boldsymbol v_d\n,也即是对每个特征\nx_i\n配一个\nk\n维实数向量\n\\boldsymbol v_i\n,用矩阵乘法直接计算可以得到 \nw_{ij} = \\langle \\boldsymbol v_i, \\boldsymbol v_j \\rangle\n,这样模型的预测函数可以写为\n\\hat y(\\boldsymbol x) = \\theta_0 + \\boldsymbol\\theta^\\mathrm{T} \\boldsymbol x + \\sum_{i=1}^{d-1}\\sum_{j=i+1}^d \\langle \\boldsymbol v_i, \\boldsymbol v_j \\rangle x_ix_j\n 此时,再对参数\n\\boldsymbol v_s\n求梯度的结果为\n \n\\begin{aligned} \\nabla_{\\boldsymbol v_s} \\hat y \u0026= \\nabla_{\\boldsymbol v_s} \\left(\\sum_{i=1}^{d-1}\\sum_{j=i+1}^d \\langle \\boldsymbol v_i, \\boldsymbol v_j \\rangle x_ix_j \\right) \\\\ \u0026= \\nabla_{\\boldsymbol v_s} \\left( \\sum_{j=s+1}^d \\langle \\boldsymbol v_s, \\boldsymbol v_j\\rangle x_sx_j + \\sum_{i=1}^{s-1} \\langle \\boldsymbol v_i, \\boldsymbol v_s \\rangle x_ix_s \\right) \\\\ \u0026= x_s \\sum_{j=s+1}^d x_j\\boldsymbol v_j + x_s \\sum_{i=1}^{s-1} x_i \\boldsymbol v_i \\\\ \u0026= x_s \\sum_{i=1}^d x_i \\boldsymbol v_i - x_s^2 \\boldsymbol v_s \\end{aligned}\n  上面的计算过程中,为了简洁,我们采用了不太严谨的写法,当 \ns=1\n 或 \ns=d\n 时会出现求和下界大于上界的情况。此时我们规定求和的结果为零。如果要完全展开,只需要做类似于 \n\\sum\\limits_{j=s+1}^d \\langle \\boldsymbol v_s, \\boldsymbol v_j \\rangle x_sx_j\n 变为 \n\\sum\\limits_{j=s}^d \\langle \\boldsymbol v_s, \\boldsymbol v_j \\rangle x_sx_j - \\langle \\boldsymbol v_s, \\boldsymbol v_s \\rangle x_s^2\n 的裂项操作即可。从该结果中可以看出,只要 \nx_s \\neq 0\n,参数\n\\boldsymbol v_s\n的梯度就不为零,可以用梯度相关的算法对其更新。因此,即使特征向量\n\\boldsymbol x\n非常稀疏,FM模型也可以正常进行训练。\n  至此,我们的模型还存在一个问题。双线性模型考虑不同特征之间乘积的做法,虽然提升了模型的能力,但也引入了额外的计算开销。对一个样本来说,线性模型需要计算\n\\boldsymbol\\theta^\\mathrm{T} \\boldsymbol x\n,时间复杂度为\nO(d)\n;而我们的模型需要计算每一对特征\n(x_i,x_j)\n的乘积,以及参数\n\\boldsymbol v_i\n与\n\\boldsymbol v_j\n的内积,时间复杂度为\nO(kd^2)\n。上面已经讲过,多热编码的特征向量维度常常特别高,因此这一时间开销是相当巨大的。但是,我们可以对改进后的预测函数 \n\\hat y(\\boldsymbol x) = \\theta_0 + \\sum\\limits_{i=1}^d \\theta_i x_i + \\sum\\limits_{i=1}^{d-1}\\sum\\limits_{j=i+1}^d w_{ij}x_ix_j\n 中的最后一项做一些变形,改变计算顺序来降低时间复杂度。变形方式如下:\n \n\\begin{aligned} \\sum_{i=1}^{d-1}\\sum_{j=i+1}^d \\langle \\boldsymbol v_i, \\boldsymbol v_j \\rangle x_ix_j \u0026= \\frac{1}{2} \\left(\\sum_{i=1}^d\\sum_{j=1}^d \\langle \\boldsymbol v_i, \\boldsymbol v_j \\rangle x_ix_j - \\sum_{i=1}^d \\langle \\boldsymbol v_i, \\boldsymbol v_i \\rangle x_i^2 \\right) \\\\ \u0026= \\frac{1}{2} \\left(\\sum_{i=1}^d\\sum_{j=1}^d \\langle x_i \\boldsymbol v_{i}, x_j \\boldsymbol v_{j} \\rangle - \\sum_{i=1}^d \\langle x_i \\boldsymbol v_i, x_i \\boldsymbol v_i \\rangle \\right) \\\\ \u0026= \\frac12 \\left\\langle \\sum_{i=1}^d x_i \\boldsymbol v_i, \\sum_{j=1}^d x_j \\boldsymbol v_j\\right\\rangle - \\frac12 \\sum_{i=1}^d \\langle x_i \\boldsymbol v_i, x_i \\boldsymbol v_i \\rangle \\\\ \u0026= \\frac12 \\sum_{l=1}^k \\left(\\sum_{i=1}^d v_{il}x_i \\right)^2 - \\frac12 \\sum_{l=1}^k \\sum_{i=1}^d v_{il}^2x_i^2 \\end{aligned}\n  在变形的第二步和第三步,我们利用了向量内积的双线性性质,将标量\nx_i, x_j\n以及求和都移到内积中去。最后的结果中只含有两重求和,外层为\nk\n次,内层为\nd\n次,因此整体的时间复杂度为\nO(kd)\n。这样,FM的时间复杂度关于特征规模\nd\n的增长从平方变为线性,得到了大幅优化。至此,FM的预测公式为\n\\hat y(\\boldsymbol x) = \\theta_0 + \\sum_{i=1}^d \\theta_i x_i + \\frac12 \\sum_{l=1}^k \\left(\\left(\\sum_{i=1}^d v_{il}x_i \\right)^2 - \\sum_{i=1}^d v_{il}^2 x_i^2 \\right)\n 如果要做分类任务,只需要再加上softmax函数即可。\n  在上面的模型中,我们只考虑了两个特征之间的组合,因此该FM也被称为二阶FM。如果进一步考虑多个特征的组合,如\nx_ix_jx_k\n,就可以得到高阶的FM模型。由于高阶FM较为复杂,并且也不再是双线性模型,本文在此略去,如果感兴趣可以自行查阅相关资料。\n四、动手实现因子分解机\n  下面,我们来动手实现二阶FM模型。本节采用的数据集是为FM制作的示例数据集fm_dataset.csv,包含了某个用户浏览过的商品的特征,以及用户是否点击过这个商品。数据集的每一行包含一个商品,前24列是其特征,最后一列是0或1,分别表示用户没有或有点击该商品。我们的目标是根据输入特征预测用户在测试集上的行为,是一个二分类问题。我们先导入必要的模块和数据集并处理数据,将其划分为训练集和测试集。\nimport numpy as np\nimport matplotlib.pyplot as plt\nfrom sklearn import metrics # sklearn中的评价指标函数库\nfrom tqdm import tqdm\n\n# 导入数据集\ndata = np.loadtxt('fm_dataset.csv', delimiter=',')\n\n# 划分数据集\nnp.random.seed(0)\nratio = 0.8\nsplit = int(ratio * len(data))\nx_train = data[:split, :-1]\ny_train = data[:split, -1]\nx_test = data[split:, :-1]\ny_test = data[split:, -1]\n# 特征数\nfeature_num = x_train.shape[1]\nprint('训练集大小:', len(x_train))\nprint('测试集大小:', len(x_test))\nprint('特征数:', feature_num)\n  然后,我们将FM模型定义成类。与MF相同,我们在类中实现预测和梯度更新方法。\nclass FM:\n\n def __init__(self, feature_num, vector_dim):\n # vector_dim代表公式中的k,为向量v的维度\n self.theta0 = 0.0 # 常数项\n self.theta = np.zeros(feature_num) # 线性参数\n self.v = np.random.normal(size=(feature_num, vector_dim)) # 双线性参数\n self.eps = 1e-6 # 精度参数\n \n def _logistic(self, x):\n # 工具函数,用于将预测转化为概率\n return 1 / (1 + np.exp(-x))\n\n def pred(self, x):\n # 线性部分\n linear_term = self.theta0 + x @ self.theta\n # 双线性部分\n square_of_sum = np.square(x @ self.v)\n sum_of_square = np.square(x) @ np.square(self.v)\n # 最终预测\n y_pred = self._logistic(linear_term + 0.5 * np.sum(square_of_sum - sum_of_square, axis=1))\n # 为了防止后续梯度过大,对预测值进行裁剪,将其限制在某一范围内\n y_pred = np.clip(y_pred, self.eps, 1 - self.eps)\n return y_pred\n \n def update(self, grad0, grad_theta, grad_v, lr):\n self.theta0 -= lr * grad0\n self.theta -= lr * grad_theta\n self.v -= lr * grad_v\n  对于分类任务,我们仍用MLE作为训练时的损失函数。在测试集上,我们采用AUC作为评价指标。由于我们在逻辑斯谛回归中已经动手实现过AUC,简单起见,这里我们就直接使用sklearn中的函数直接计算AUC。我们用SGD进行参数更新,训练完成后,我们把训练过程中的准确率和AUC绘制出来。\n# 超参数设置,包括学习率、训练轮数等\nvector_dim = 16\nlearning_rate = 0.01\nlbd = 0.05\nmax_training_step = 200\nbatch_size = 32\n\n# 初始化模型\nnp.random.seed(0)\nmodel = FM(feature_num, vector_dim)\n\ntrain_acc = []\ntest_acc = []\ntrain_auc = []\ntest_auc = []\n\nwith tqdm(range(max_training_step)) as pbar:\n for epoch in pbar:\n st = 0\n while st \u003c len(x_train):\n ed = min(st + batch_size, len(x_train))\n X = x_train[st: ed]\n Y = y_train[st: ed]\n st += batch_size\n # 计算模型预测\n y_pred = model.pred(X)\n # 计算交叉熵损失\n cross_entropy = -Y * np.log(y_pred) - (1 - Y) * np.log(1 - y_pred)\n loss = np.sum(cross_entropy)\n # 计算损失函数对y的梯度,再根据链式法则得到总梯度\n grad_y = (y_pred - Y).reshape(-1, 1)\n # 计算y对参数的梯度\n # 常数项\n grad0 = np.sum(grad_y * (1 / len(X) + lbd))\n # 线性项\n grad_theta = np.sum(grad_y * (X / len(X) + lbd * model.theta), axis=0)\n # 双线性项\n grad_v = np.zeros((feature_num, vector_dim))\n for i, x in enumerate(X):\n # 先计算sum(x_i * v_i)\n xv = x @ model.v\n grad_vi = np.zeros((feature_num, vector_dim))\n for s in range(feature_num):\n grad_vi[s] += x[s] * xv - (x[s] ** 2) * model.v[s]\n grad_v += grad_y[i] * grad_vi\n grad_v = grad_v / len(X) + lbd * model.v\n model.update(grad0, grad_theta, grad_v, learning_rate)\n\n pbar.set_postfix({\n '训练轮数': epoch,\n '训练损失': f'{loss:.4f}',\n '训练集准确率': train_acc[-1] if train_acc else None,\n '测试集准确率': test_acc[-1] if test_acc else None\n })\n # 计算模型预测的准确率和AUC\n # 预测准确率,阈值设置为0.5\n y_train_pred = (model.pred(x_train) \u003e= 0.5)\n acc = np.mean(y_train_pred == y_train)\n train_acc.append(acc)\n auc = metrics.roc_auc_score(y_train, y_train_pred) # sklearn中的AUC函数\n train_auc.append(auc)\n\n y_test_pred = (model.pred(x_test) \u003e= 0.5)\n acc = np.mean(y_test_pred == y_test)\n test_acc.append(acc)\n auc = metrics.roc_auc_score(y_test, y_test_pred) \n test_auc.append(auc)\n \nprint(f'测试集准确率:{test_acc[-1]},\\t测试集AUC:{test_auc[-1]}')\n  最后,我们把训练过程中在训练集和测试集上的精确率和AUC绘制出来,观察训练效果。\n# 绘制训练曲线 \nplt.figure(figsize=(13, 5))\nx_plot = np.arange(len(train_acc)) + 1\n\nplt.subplot(121)\nplt.plot(x_plot, train_acc, color='blue', label='train acc')\nplt.plot(x_plot, test_acc, color='red', ls='--', label='test acc')\nplt.xlabel('Epoch')\nplt.ylabel('Accuracy')\nplt.legend()\n\nplt.subplot(122)\nplt.plot(x_plot, train_auc, color='blue', label='train AUC')\nplt.plot(x_plot, test_auc, color='red', ls='--', label='test AUC')\nplt.xlabel('Epoch')\nplt.ylabel('AUC')\nplt.legend()\nplt.show()\n五、拓展:概率矩阵分解\n 概率矩阵分解(probabilistic matrix factorization,PMF)是另一种常用的双线性模型。与矩阵分解模型不同,它对用户给电影的评分\nr_{ij}\n的分布进行了先验假设,认为其满足正态分布:\nr_{ij} \\sim \\mathcal{N}(\\boldsymbol p_i^\\mathrm{T} \\boldsymbol q_j, \\sigma^2)\n 其中\n\\sigma^2\n是正态分布的方差,与用户和电影无关。注意,\n\\boldsymbol p_i\n与\n\\boldsymbol q_j\n都是未知的。记 \nI_{ij} = \\mathbb{I}(r_{ij} \\text{存在})\n,即当用户\ni\n对电影\nj\n打过分时 \nI_{ij}=1\n,否则 \nI_{ij}=0\n。再假设不同的评分采样之间互相独立,那么,我们观测到的\n\\boldsymbol R\n出现的概率是\nP(\\boldsymbol R | \\boldsymbol P, \\boldsymbol Q, \\sigma) = \\prod_{i=1}^N\\prod_{j=1}^M p_\\mathcal{N}(r_{ij}| \\boldsymbol p_i^\\mathrm{T} \\boldsymbol q_j, \\sigma^2)^{I_{ij}}\n 这里,我们用 \np_\\mathcal{N}(x|\\mu,\\sigma^2)\n 表示正态分布 \n\\mathcal{N}(\\mu, \\sigma^2)\n 的概率密度函数,其完整表达式为 \np_\\mathcal{N}(x|\\mu,\\sigma^2) = \\frac{1}{\\sqrt{2\\pi\\sigma^2}}\\text{e}^{-\\frac{(x-\\mu)^2}{2\\sigma^2}}\n 对于那些空缺的\nr_{ij}\n,由于 \nI_{ij}=0\n,\np_\\mathcal{N}(r_{ij}|\\boldsymbol p_i^\\mathrm{T} \\boldsymbol q_j, \\sigma^2)^{I_{ij}}=1\n,对连乘没有贡献,最终的概率只由已知部分计算得出。接下来,我们进一步假设用户的喜好\n\\boldsymbol p_i\n和电影的特征\n\\boldsymbol q_j\n都满足均值为\n\\boldsymbol 0\n的正态分布,协方差矩阵分别为\n\\sigma_P^2\\boldsymbol I\n和\n\\sigma_Q^2 \\boldsymbol I\n,即\nP(\\boldsymbol P | \\sigma_P) = \\prod_{i=1}^N p_\\mathcal{N}(\\boldsymbol p_i| \\boldsymbol 0, \\sigma_P^2 \\boldsymbol I), \\quad P(\\boldsymbol Q | \\sigma_Q) = \\prod_{j=1}^M p_\\mathcal{N}(\\boldsymbol q_j | \\boldsymbol 0, \\sigma_Q^2 \\boldsymbol I)\n 根据全概率公式 \nP(X,Y) = P(X|Y)P(Y)\n,并注意到\n\\boldsymbol R\n与\n\\sigma_P, \\sigma_Q\n无关,我们可以计算出\n\\boldsymbol P\n与\n\\boldsymbol Q\n的后验概率为\n \n\\small\\begin{aligned} P(\\boldsymbol P, \\boldsymbol Q | \\boldsymbol R, \\sigma, \\sigma_P, \\sigma_Q) \u0026= \\frac{P(\\boldsymbol P, \\boldsymbol Q, \\boldsymbol R, \\sigma, \\sigma_P, \\sigma_Q)}{P(\\boldsymbol R, \\sigma, \\sigma_P, \\sigma_Q)} \\\\[2ex] \u0026= \\frac{P(\\boldsymbol R | \\boldsymbol P, \\boldsymbol Q, \\sigma)P(\\boldsymbol P, \\boldsymbol Q | \\sigma_P, \\sigma_Q) P(\\sigma, \\sigma_P, \\sigma_Q)}{P(\\boldsymbol R, \\sigma, \\sigma_P, \\sigma_Q)} \\\\[2ex] \u0026= C \\cdot P(\\boldsymbol R | \\boldsymbol P, \\boldsymbol Q, \\sigma)P(\\boldsymbol P|\\sigma_P)P(\\boldsymbol Q|\\sigma_Q) \\\\ \u0026= C\\prod_{i=1}^N\\prod_{j=1}^M p_\\mathcal{N}(r_{ij}| \\boldsymbol p_i^\\mathrm{T} \\boldsymbol q_j, \\sigma^2)^{I_{ij}} \\cdot \\prod_{i=1}^N p_\\mathcal{N}(\\boldsymbol p_i| \\boldsymbol 0, \\sigma_P^2 \\boldsymbol I) \\cdot \\prod_{j=1}^M p_\\mathcal{N}(\\boldsymbol q_j | \\boldsymbol 0, \\sigma_Q^2 \\boldsymbol I) \\end{aligned}\n 其中\nC\n是常数。为了简化这一表达式,我们利用与MLE中相同的技巧,将上式取对数,从而把连乘变为求和:\n \n\\begin{aligned} \\log P(\\boldsymbol P, \\boldsymbol Q | \\boldsymbol R, \\sigma, \\sigma_P, \\sigma_Q) \u0026= \\sum_{i=1}^N\\sum_{j=1}^M I_{ij} \\log p_\\mathcal{N}(r_{ij} | \\boldsymbol p_i^\\mathrm{T} \\boldsymbol q_j, \\sigma^2) + \\sum_{i=1}^N \\log p_\\mathcal{N}(\\boldsymbol p_i| \\boldsymbol 0, \\sigma_P^2 \\boldsymbol I) \\\\ \u0026\\quad+ \\sum_{j=1}^M \\log p_\\mathcal{N}(\\boldsymbol q_j | \\boldsymbol 0, \\sigma_Q^2 \\boldsymbol I) + \\log C \\end{aligned}\n 再代入\np_\\mathcal{N}\n取对数后的表达式\n\\log p_\\mathcal{N}(x|\\mu, \\sigma^2) = -\\frac12 \\log (2\\pi\\sigma^2) - \\frac{(x-\\mu)^2}{2\\sigma^2}\n 计算得到\n \n\\small\\begin{aligned} \\log P(\\boldsymbol P, \\boldsymbol Q | \\boldsymbol R, \\sigma, \\sigma_P, \\sigma_Q) \u0026= -\\frac12 \\log(2\\pi\\sigma^2) \\sum_{i=1}^N\\sum_{j=1}^M I_{ij} - \\frac{1}{2\\sigma^2}\\sum_{i=1}^N\\sum_{j=1}^M I_{ij}(r_{ij} - \\boldsymbol p_i^\\mathrm{T} \\boldsymbol q_j)^2 \\\\ \u0026\\quad-\\frac{Nd}{2} \\log(2\\pi\\sigma_P^2) - \\frac{1}{2\\sigma_P^2}\\sum_{i=1}^N \\boldsymbol p_i^\\mathrm{T} \\boldsymbol p_i \\\\ \u0026\\quad-\\frac{Md}{2} \\log(2\\pi\\sigma_Q^2) - \\frac{1}{2\\sigma_Q^2}\\sum_{j=1}^M \\boldsymbol q_j^\\mathrm{T} \\boldsymbol q_j + \\log C \\\\ \u0026= -\\frac{1}{\\sigma^2} \\left[\\frac12 \\sum_{i=1}^N\\sum_{j=1}^M I_{ij}(r_{ij} - \\boldsymbol p_i^\\mathrm{T} \\boldsymbol q_j)^2 + \\frac{\\lambda_P}{2} \\lVert \\boldsymbol P \\lVert_F^2 + \\frac{\\lambda_Q}{2} \\lVert \\boldsymbol Q \\lVert_F^2 \\right] + C_1 \\end{aligned}\n 其中,\n\\lambda_P = \\sigma^2/\\sigma_P^2\n,\n\\lambda_Q = \\sigma^2 / \\sigma_Q^2\n,\nC_1\n是与参数\n\\boldsymbol P\n和\n\\boldsymbol Q\n无关的常数。根据最大似然的思想,我们应当最大化上面计算出的对数概率。因此,定义损失函数为\nJ(\\boldsymbol P, \\boldsymbol Q) = \\frac12 \\sum_{i=1}^N\\sum_{j=1}^M I_{ij}(r_{ij} - \\boldsymbol p_i^\\mathrm{T} \\boldsymbol q_j)^2 + \\frac{\\lambda_P}{2} \\lVert \\boldsymbol P \\lVert_F^2 + \\frac{\\lambda_Q}{2} \\lVert \\boldsymbol Q \\lVert_F^2\n 于是,最大化对数概率就等价于最小化损失函数\nJ(\\boldsymbol P, \\boldsymbol Q)\n。并且,这一损失函数恰好为目标值\nr_{ij}\n与参数内积\n\\boldsymbol p_i^\\mathrm{T} \\boldsymbol q_i\n之间的平方损失,再加上\nL_2\n正则化的形式。由于向量内积是双线性函数,PMF模型也属于双线性模型的一种。\n  将损失函数对\n\\boldsymbol p_i\n求导,得到\n\\nabla_{\\boldsymbol p_i} J(\\boldsymbol P, \\boldsymbol Q) = \\sum_{j=1}^M I_{ij}(r_{ij} - \\boldsymbol p_i^\\mathrm{T} \\boldsymbol q_j) \\boldsymbol q_j - \\lambda_P \\boldsymbol p_i\n 令梯度为零,解得\n\\boldsymbol p_i = \\left(\\sum_{j=1}^MI_{ij}\\boldsymbol q_j\\boldsymbol q_j^\\mathrm{T} + \\lambda_P \\boldsymbol I\\right)^{-1} \\left(\\sum_{j=1}^M I_{ij}r_{ij}\\boldsymbol q_j\\right)\n  在正则化约束一节中我们讲过,根据矩阵相关的理论,只要\n\\lambda_P\n足够大,上式的第一项逆矩阵就总是存在。同理,对\n\\boldsymbol q_j\n也有类似的结果。因此,我们可以通过如上形式的\nJ(\\boldsymbol P, \\boldsymbol Q)\n来求解参数\n\\boldsymbol P\n与\n\\boldsymbol Q\n。在参数的高斯分布假设下,我们自然导出了带有\nL_2\n正则化的MF模型,这并不是偶然。我们会在概率图模型中进一步阐释其中的原理。\n 附:以上文中的数据集及相关资源下载地址:\n 链接:https://pan.quark.cn/s/0f31109b2b13\n 提取码:gTBK\n ","showReadNum":76,"sourceDetail":null,"sourceType":99,"status":2,"summary":"  从本文开始,我们介绍参数化模型中的非线性模型。在前几篇文章中,我们介绍了线性回归与逻辑斯谛回归模型。这两个模型都有一个共同的特征:包含线性预测因子","tagIds":[10719,17290,17381,17440,10149],"title":"【机器学习-监督学习】双线性模型","uid":11457362,"updateTime":1737559138,"userSummary":"","userUpdateTime":1737559138,"isNewArticle":false},"authorInfo":{"articleNum":0,"avatarUrl":"https://developer.qcloudimg.com/http-save/10011/85f1d5aca973f3d501211269bdba7e3c.jpg","company":"","introduce":"","isProfessionVerified":0,"nickname":"Francek Chen","privilege":1,"title":"","uid":11457362},"authorType":{"isBlogMoveAuthor":1,"isCoCreator":0,"isInternalAuthor":0,"isOriginalAuthor":0},"classify":[{"id":2,"name":"人工智能"}],"columnInfo":{"columnAvatar":"https://cloudcache.tencent-cloud.com/qcloud/developer/images/release/column-icons/14.png","columnDesc":"","columnId":104758,"columnName":"智能大数据分析","createTime":1737456773,"createUid":11457362,"memberNum":1,"showArticleNum":260,"showConcernNum":0},"columnList":[{"columnAvatar":"https://cloudcache.tencent-cloud.com/qcloud/developer/images/release/column-icons/14.png","columnDesc":"","columnId":104758,"columnName":"智能大数据分析","createTime":1737456773,"createUid":11457362,"memberNum":1,"showArticleNum":260,"showConcernNum":0}],"editTime":0,"isTencent":false,"longtailTags":[],"publishTime":1737559138,"sourceDetail":{"blogType":2,"blogUrl":"https://blog.csdn.net/Morse_Chen?type=blog","channelSource":"csdn","originalTime":"2024-08-22","sourceAuthor":"","sourceLink":"https://blog.csdn.net/Morse_Chen/article/details/141286240","wechatNickName":"","wechatUserName":""},"tags":[{"categoryId":3,"createTime":"2018/09/06 18:31:33","groupId":10128,"groupName":"人工智能技术","tagId":10719,"tagName":"监督学习"},{"categoryId":99,"createTime":"2023/03/14 11:34:55","groupId":0,"groupName":"","tagId":17290,"tagName":"函数"},{"categoryId":99,"createTime":"2023/03/14 11:34:56","groupId":0,"groupName":"","tagId":17381,"tagName":"模型"},{"categoryId":99,"createTime":"2023/03/14 11:34:58","groupId":0,"groupName":"","tagId":17440,"tagName":"数据"},{"categoryId":3,"createTime":"2018/05/11 12:00:54","groupId":10128,"groupName":"人工智能技术","tagId":10149,"tagName":"机器学习"}],"textLink":[{"ext":{"categoryId":1019,"categoryName":"通用技术 - 人工智能","desc":"机器学习是一种人工智能的分支,它是指让计算机通过学习数据和模式,从而自动改进和优化算法的能力。简单来说,机器学习是一种让计算机从数据中学习的方法,而不是通过手动编程来实现特定的任务。","kpCount":12,"name":"机器学习","pCategoryId":1002,"termId":1501},"id":3267,"link":"https://cloud.tencent.com/product/ti","sources":[1,2],"text":"机器学习"},{"ext":{"categoryId":1019,"categoryName":"通用技术 - 人工智能","desc":"推荐系统是一种通过分析用户的历史行为、个人喜好、兴趣爱好等数据信息,为用户推荐个性化的产品、服务、信息等的技术系统。推荐系统可以帮助用户快速找到自己喜欢和感兴趣的内容,提高用户的满意度和忠诚度,同时也可以帮助企业提高销售额和用户留存率。","kpCount":10,"name":"推荐系统","pCategoryId":1002,"termId":1764},"id":3479,"link":"https://cloud.tencent.com/developer/techpedia/1764","sources":[2],"text":"推荐系统"}]},"#url:\"/api/tag/products\",params:#tagIds:@10719,17290,17381,17440,10149,,objectType:1,objectId:2490777,,":[{"adActivity":{"id":5738,"lightSpotLabel":"HOT","pageUrl":"https://cloud.tencent.com/act/pro/Featured","priority":1,"startTime":"2023/12/12 17:58:46","title":"精选特惠 拼团嗨购"},"cnName":"腾讯云 TI 平台","desc":"腾讯云 TI 平台(TencentCloud TI Platform)是基于腾讯先进 AI 能力和多年技术经验,面向开发者、政企提供的全栈式人工智能开发服务平台,致力于打通包含从数据获取、数据处理、算法构建、模型训练、模型评估、模型部署、到 AI 应用开发的产业 + AI 落地全流程链路,帮助用户快速创建和部署 AI 应用,管理全周期 AI 解决方案,从而助力政企单位加速数字化转型并促进 AI 行业生态共建。腾讯云 TI 平台系列产品支持公有云访问、私有化部署以及专属云部署。","docURL":"","hasActivity":false,"icon":"https://qccommunity.qcloudimg.com/community/image/product-comment-icon.svg","introURL":"https://cloud.tencent.com/product/ti","name":"ti","productId":0,"shortDesc":"基于腾讯云强大计算能力的一站式机器学习生态服务平台,帮助用户方便地进行模型训练、评估和预测","tagId":0}]},"tdk":{"title":"【机器学习-监督学习】双线性模型-腾讯云开发者社区-腾讯云","keywords":"监督学习,函数,模型,数据,机器学习","description":"  从本文开始,我们介绍参数化模型中的非线性模型。在前几篇文章中,我们介绍了线性回归与逻辑斯谛回归模型。这两个模型都有一个共同的特征:包含线性预测因子"},"meta":{"subject":"通用技术-人工智能技术-监督学习,其他-空类-函数,其他-空类-模型,其他-空类-数据,通用技术-人工智能技术-机器学习","subjectTime":"2025-01-22 23:18:58","articleSource":"B","magicSource":"N","authorType":"Z","productSlug":"ti"},"link":{"canonical":"https://cloud.tencent.com/developer/article/2490777"},"cssName":["Article","DraftMaster","Player"],"rbConfigKeys":["groupQRKeywords"],"directedContent":null,"pvId":"OM6y5UPsk4aKf9zl_jLcL","clientIp":"8.222.208.146","globalAnnounce":{"announceId":35,"content":"参与人人有奖,腾讯云大模型知识引擎×DeepSeek最佳实践有奖征文正在进行中!点击查看活动详情:\u003ca href=\"https://cloud.tencent.com/developer/article/2496399\" target=\"_blank\"\u003ehttps://cloud.tencent.com/developer/article/2496399\u003c/a\u003e\u003cbr/\u003e \n \u003cimg src=\"https://qcloudimg.tencent-cloud.cn/raw/e8d8f79b0ec4658f3274a7d56238daf4.jpg\"/\u003e","title":"大模型知识引擎×DeepSeek实践征文"},"rbConfig":{"groupQRKeywords":{"AI":{"keywords":[],"img":"https://qcloudimg.tencent-cloud.cn/raw/89b22f53dc3d4e0516d0a4f74ab01a30.png"}},"versionUpdateTipList":[{"id":1005,"title":"文章\u0026问答评论现已支持表情","description":"欢迎大家来体验!","start_time":"2025/02/14 00:00:00","end_time":"2025/02/28 23:59:59"}],"navList":[{"text":"学习","menuList":[{"iconName":"article","title":"文章","desc":"技术干货聚集地","href":"/developer/column?from=19154"},{"iconName":"ask","title":"问答","desc":"技术问题讨论区","href":"/developer/ask?from=19155"},{"iconName":"video","title":"视频","desc":"技术视频记录区","href":"/developer/video?from=19156"},{"iconName":"learn","title":"学习中心","desc":"一站式学习平台","href":"/developer/learning"},{"iconName":"lab","title":"腾讯云实验室","desc":"体验腾讯云产品功能","href":"/lab/labslist?from=20154\u0026from_column=20154\u0026channel=c1004\u0026sceneCode=dev"}]},{"text":"活动","menuList":[{"iconName":"living","title":"直播","desc":"技术大咖面对面","href":"/developer/salon?from=19161"},{"iconName":"competition","title":"竞赛","desc":"秀出你的技术影响力","href":"/developer/competition?from=19162"}]},{"text":"专区","menuList":[{"iconName":"https://qccommunity.qcloudimg.com/icons/tm-zone.svg","title":"腾讯云架构师技术同盟交流圈","desc":"架构行家智汇,海量一线案例","href":"/developer/zone/tm"},{"iconName":"https://qcloudimg.tencent-cloud.cn/raw/1deae15bfe2dcdd1036f601852df7dd2.svg","title":"腾讯云数据库专区","desc":"数据智能管理专家","href":"/developer/zone/tencentdb"},{"iconName":"cloudnative","title":"腾讯云原生专区","desc":"助力业务降本增效","href":"/developer/zone/cloudnative?from=19164"},{"iconName":"https://qccommunity.qcloudimg.com/icons/tencenthunyuan.svg","title":"腾讯混元专区","desc":"具备强大的中文创作、逻辑推理、任务执行能力","href":"/developer/zone/tencenthunyuan"},{"iconName":"https://qcloudimg.tencent-cloud.cn/raw/1d60f881ef280ea992e2e4b6490d974b.svg","title":"腾讯云TCE专区","desc":"私有化云解决方案","href":"/developer/zone/tce"},{"iconName":"https://qccommunity.qcloudimg.com/community/image/lighthouse.svg","title":"腾讯云Lighthouse专区","desc":"新一代开箱即用、面向轻量应用场景的云服务器","href":"/developer/zone/lighthouse"},{"iconName":"https://qccommunity.qcloudimg.com/community/image/HAi.svg","title":"腾讯云HAI专区","desc":"提供即插即用的高性能云服务","href":"/developer/zone/hai"},{"iconName":"https://cloudcache.tencent-cloud.com/qcloud/ui/static/static_source_business/b3e1b483-be77-4e08-827f-ef0e5cda26cf.svg","title":"腾讯云Edgeone专区","desc":"下一代CDN—EdgeOne,不止加速","href":"/developer/zone/tencentcloudedgeone"},{"iconName":"https://qccommunity.qcloudimg.com/community/image/cos.svg","title":"腾讯云存储专区","desc":"安全稳定的海量分布式存储服务","href":"/developer/zone/cos"},{"iconName":"https://qccommunity.qcloudimg.com/community/image/ai.svg","title":"腾讯云智能专区","desc":"数实融合,云上智能","href":"/developer/zone/ai"},{"iconName":"https://qccommunity.qcloudimg.com/community/image/ipass.svg","title":"腾讯轻联专区 ","desc":"新一代应用与数据集成平台","href":"/developer/zone/ipaas"},{"iconName":"https://qccommunity.qcloudimg.com/image/cloudbase.svg","title":"腾讯云开发专区","desc":"云原生一体化开发平台","href":"/developer/zone/tencentcloudbase"},{"iconName":"https://qccommunity.qcloudimg.com/image/TAPD.svg","title":"TAPD专区","desc":"让协作更敏捷","href":"/developer/zone/tapd"}]},{"text":"工具","menuList":[{"iconName":"https://qccommunity.qcloudimg.com/icons/ai-assistant.svg","title":"腾讯云AI代码助手","desc":"辅助编码工具,使研发提效增质","href":"/product/acc?from=22178"},{"iconName":"coding","title":"CODING DevOps","desc":"一站式软件研发管理平台","href":"/product/coding?from=20154\u0026from_column=20154"},{"iconName":"studio","title":"Cloud Studio","desc":"随时随地在线协作开发","href":"/product/cloudstudio?from=20154\u0026from_column=20154"},{"iconName":"sdk","title":"SDK中心","desc":"开发者语言与SDK","href":"/document/sdk?from=20154\u0026from_column=20154"},{"iconName":"api","title":"API中心","desc":"API 助力快捷使用云产品","href":"/document/api?from=20154\u0026from_column=20154"},{"iconName":"tool","title":"命令行工具","desc":"可快速调用管理云资源","href":"/document/product/440/6176?from=20154\u0026from_column=20154"}]}],"activity-popup":{"mImgUrl":"https://qccommunity.qcloudimg.com/mp/images/11-11mobile.jpg","imgUrl":"https://qccommunity.qcloudimg.com/mp/images/11-11pc.jpg","beginTime":"2024/10/24 00:00:00","endTime":"2024/10/31 23:59:59"},"header-advertisement":{"imageUrl":"https://qccommunity.qcloudimg.com/image/2024-11-01-18-15.png","link":"https://cloud.tencent.com/act/pro/double11-2024?from=22374\u0026from_column=22374#miaosha"}},"isBot":false,"session":{"isLogined":false,"isQCloudLogined":false,"isQCommunityLogined":false,"isDifferentUin":false}}},"page":"/article/[articleId]","query":{"articleId":"2490777"},"buildId":"9hp3loRhW5K95-TQOzEpv","assetPrefix":"https://qccommunity.qcloudimg.com/community","isFallback":false,"gssp":true,"appGip":true,"scriptLoader":[]}</script></body></html>

Pages: 1 2 3 4 5 6 7 8 9 10