CINXE.COM

NVIDIA Merlin HugeCTR Framework | NVIDIA Developer

<!DOCTYPE html> <html lang='en' class='h-100'> <head> <meta name="viewport" content="width=device-width,initial-scale=1"> <meta name="csrf-param" content="authenticity_token" /> <meta name="csrf-token" content="5frkrz0oA74dknqXevAegTsJWlmml2uX649LC06w0B8sjmXA3XGAO5liaHjgg03vVkM9Hc7iPl8i2rDvncO8BQ" /> <meta name="csp-nonce" /> <title>NVIDIA Merlin HugeCTR Framework | NVIDIA Developer</title> <meta name="description" content="Deep Neural Network (DNN) Training Framework."> <link rel="canonical" href="https://developer.nvidia.com/nvidia-merlin/hugectr"> <link rel="alternate" href="https://developer.nvidia.com/nvidia-merlin/hugectr" hreflang="x-default"> <link rel="alternate" href="https://developer.nvidia.com/nvidia-merlin/hugectr" hreflang="en-us"> <link rel="alternate" href="https://developer.nvidia.cn/nvidia-merlin/hugectr" hreflang="zh-cn"> <meta property="og:site_name" content="NVIDIA Developer"> <meta property="og:title" content="NVIDIA Merlin HugeCTR Framework"> <meta property="og:description" content="Deep Neural Network (DNN) Training Framework."> <meta property="og:type" content="website"> <meta property="og:image" content="https://developer.download.nvidia.com/images/og-default.jpg"> <meta property="og:url" content="https://developer.nvidia.com/nvidia-merlin/hugectr"> <meta name="twitter:title" content="NVIDIA Merlin HugeCTR Framework"> <meta name="twitter:description" content="Deep Neural Network (DNN) Training Framework."> <meta name="twitter:image" content="https://developer.download.nvidia.com/images/og-default.jpg"> <meta name="twitter:site" content="@NVIDIA"> <meta name="twitter:card" content="summary_large_image"> <meta name="twitter:creator" content="@NVIDIA"> <meta property="interest" content="Recommenders / Personalization"> <link rel="stylesheet" href="https://dirms4qsy6412.cloudfront.net/assets/application-850056c0e23225daee0fd1b592d57245911c990e3aefce82212f37ebf18d96de.css" media="all" /> <link rel="stylesheet" href="https://dirms4qsy6412.cloudfront.net/assets/one-trust-bea625cf16a072ce5fdb0707a19f2645daf63c05eb1a016db72773eba008fc07.css" /> <script src="https://cdn.cookielaw.org/scripttemplates/otSDKStub.js" data-document-language="true" type="text/javascript" charset="UTF-8" data-domain-script="3e2b62ff-7ae7-4ac5-87c8-d5949ecafff5"></script> <script type="text/javascript" src="https://images.nvidia.com/aem-dam/Solutions/ot-js/ot-custom.js"></script> <script> function OptanonWrapper() { let event = new Event('bannerLoaded'); window.dispatchEvent(event); if (window.OnetrustActiveGroups && window.OnetrustActiveGroups.includes("C0002")) { window.DD_RUM && window.DD_RUM.init({ clientToken: 'pub0430c74fae5d2b467bcb8d48b13e5b32', applicationId: '9fc963c7-14e6-403d-bdec-ee671550bb7f', site: 'datadoghq.com', service: 'devzone', env: 'production', version: '', sessionSampleRate: 10, sessionReplaySampleRate: 5, trackUserInteractions: true, trackResources: true, trackLongTasks: true, defaultPrivacyLevel: 'mask-user-input', }); } } </script> <link rel="preload" href="https://cdnjs.cloudflare.com/ajax/libs/font-awesome/5.15.4/css/all.min.css" as="style" type="text/css"> <link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/font-awesome/5.15.4/css/all.min.css" /> <link rel="stylesheet" href="https://dirms4qsy6412.cloudfront.net/assets/devzone3/vars-cd3a0769a3c2f2d9ea6b83ac53ce86bceef4c719e4dbd22ed41d48d01f200901.css" media="all" /> <link rel="stylesheet" href="https://dirms4qsy6412.cloudfront.net/assets/devzone3/new/application-18e41529317cec7a71ff11ed11f560691cd0843420e9cb6082d8cf8ce8fc638c.css" media="all" /> <link rel="stylesheet" href="https://dirms4qsy6412.cloudfront.net/assets/feed-aggregator/feed-aggregator-9ace7521871242143cb35fa86d5be702c4dacb409600041fa6a5b14fa2a71dde.css" media="all" /> <link rel="stylesheet" href="https://dirms4qsy6412.cloudfront.net/assets/twentytwenty/css/twentytwenty-4ef2ccd719d09a97572e93c499c1fb11cc971d2a3519cfe105dcff2be92f65b9.css" media="all" /> <script src="https://dirms4qsy6412.cloudfront.net/assets/horizontal-chart/d3.v4.min-41cfecdf7c41476e805de7afacf4aacdd1a4be6947fbecf95217e947ebc2faf5.js"></script> <script src="https://dirms4qsy6412.cloudfront.net/assets/horizontal-chart/visualize-d-06443fdef48364af6635f0d1d3535da26910671f6f6a680c531eff0e54ed595f.js"></script> <link rel="stylesheet" href="https://dirms4qsy6412.cloudfront.net/assets/sf-validation/sf-validation-805362e079494cd052f713be5f91a44eb602f545c342f794abbd4a8050c0acb3.css" /> <script src="https://assets.adobedtm.com/5d4962a43b79/c1061d2c5e7b/launch-191c2462b890.min.js" data-ot-ignore="true"></script> <script src="https://cdnjs.cloudflare.com/ajax/libs/jquery/3.6.3/jquery.min.js" integrity="sha512-STof4xm1wgkfm7heWqFJVn58Hm3EtS31XFaagaa8VMReCXAkQnJZ+jEy8PCC/iT18dFy95WcExNHFTqLyp72eQ==" crossorigin="anonymous" referrerpolicy="no-referrer"></script> <script src="https://api-prod.nvidia.com/search/nvidia-gallery-widget.js"></script> <script src="https://dirms4qsy6412.cloudfront.net/assets/devzone3/modules/nvidia_editor/nod_widgets-8c38a7d04ed3c3acd9117aa126bf76d7902d3c57c72b76dbf3c281c96ed09975.js"></script> <link rel="icon" type="image/x-icon" href="https://dirms4qsy6412.cloudfront.net/assets/favicon-81bff16cada05fcff11e5711f7e6212bdc2e0a32ee57cd640a8cf66c87a6cbe6.ico" /> </head> <body class='d-flex flex-column h-100' data-theme='devzone3_new'> <div id='header'></div> <main class="main-content dz3-main-section dz-new-theme page-nvidia-merlin-hugectr page-nvidia-merlin-hugectr" data-id="579"> <div id="join-nvd-banner" class="text-white" style="background: rgba(0, 0, 0, 0) linear-gradient(rgb(153, 153, 153) 0%, rgb(102, 102, 102) 100%) repeat scroll 0% 0%; padding: 1em ; color: white;"> <div class="container"> <div class="col-12 text-center" style="text-align: center;"> Join Netflix, Fidelity, and NVIDIA to learn best practices for building, training, and deploying modern recommender systems.    <a target="_blank" href="https://info.nvidia.com/recsys-at-work-best-practices-and-insights.html" class="cta--prim"> Register Free</a> </div> </div> </div> <section class="sct--s"> <div class="cntnr--narrow txt-cntr"> <h1 class="h--large ">NVIDIA Merlin HugeCTR</h1> <div class="cntnr--narrow"> <br> <p class="p--large "><a href="https://developer.nvidia.com/nvidia-merlin">NVIDIA Merlin™</a> accelerates the entire pipeline, from ingesting and training to deploying GPU-accelerated recommender systems. Merlin HugeCTR (Huge Click-Through-Rate) is a deep neural network (DNN) training and inference framework designed for recommender systems. It provides distributed training with model-parallel embedding tables, an embeddings cache, and data-parallel neural networks across multiple GPUs and nodes for maximum performance. HugeCTR covers common and recent architectures such as Deep Learning Recommendation Model (DLRM), Wide and Deep, Deep Cross Network (DCN), and DeepFM.</p> <br><br> <p class="txt-cntr"><strong>Download and Try It Today</strong></p> <div class="row cta-row--cntr"> <a href="https://github.com/NVIDIA/HugeCTR" class="cta--prim cta--l">GitHub Repo</a> <a href="https://ngc.nvidia.com/catalog/containers/nvidia:merlin:merlin-training" class="cta--prim cta--l">Merlin Training on NGC™</a> </div> </div> </div> <br><br> <div class="cntnr--narrow "> <h2 class="h--medium txt-cntr">Merlin HugeCTR Core Features</h2> <div class="cntnr--cw"> <div class="row"> <div class="col-md-6"> <h3 class="h--smaller txt-clr--blck">Training Embeddings at Scale</h3> <p class="content-m">Data scientists and machine learning engineers building deep learning recommenders work with large embedding tables that often exceed available memory. Merlin HugeCTR's model parallelism and embedding cache is designed for recommender workflows. This makes it easy to train an embedding table of any size and fully leverage compute memory. HugeCTR also leverages the <a href="https://developer.nvidia.com/nccl">NVIDIA Collective Communication Library (NCCL)</a> for high-speed, multi-node, and multi-GPU communications at scale.</p> <a target="_blank" class="cta--tert" href="https://developer.nvidia.com/blog/accelerating-recommender-systems-training-with-nvidia-merlin-open-beta/"> Learn more <span class="fas fa-angle-right fa-fw"></span></a> </div> <div class="col-md-6"> <center><img alt="NVIDIA Collective Communication Library (NCCL)" class="img-responsive" src="https://d29g4g2dyqv443.cloudfront.net/sites/default/files/akamai/nvidia-merlin-hugectr-training-at-scale-icon-630x500-2c50-responsive.svg" style="width:50%; padding-top:40px;"></center> </div> </div> </div> <div class="cntnr--cw"> <div class="row"> <div class="col-md-6"> <center><img alt="HugeCTRs embedding layer" class="img-responsive" src="https://d29g4g2dyqv443.cloudfront.net/sites/default/files/akamai/nvidia-merlin-hugectr-neural-icon-630x354-2c50-responsive.svg" style="width:50%; padding-top:40px;"></center> </div> <div class="col-md-6"> <h3 class="h--smaller txt-clr--blck">Inherently Asynchronous, Multi-Threaded Pipeline</h3> <p class="content-m">Effective data loading is challenging for machine learning engineers and data scientists who are continuously experimenting, training, and fine-tuning recommender models. HugeCTR's data reader is inherently asynchronous and multi-threaded. It will read batched data records that are high-dimensional, sparse, or categorical. Each record is fed directly to fully connected layers. HugeCTR's embedding layer compresses input-sparse features to dense-embedding vectors. HugeCTR's model parallelism enables embedded training in a homogeneous cluster across multiple nodes and GPUs.</p> <strong><a target="_blank" class="cta--tert" href="https://github.com/NVIDIA/HugeCTR">Explore HugeCTR on GitHub <span class="fas fa-angle-right fa-fw"></span></a></strong> </div> </div> </div> <br> <div class="cntnr--cw"> <div class="row"> <div class="col-md-6"> <h3 class="h--smaller txt-clr--blck">Inference, Hierarchical Deployment on Multiple GPUs </h3> <p class="content-m">HugeCTR provides concurrent model inference execution across multiple GPUs through the use of a parameter server and embedding cache that are shared between multiple model instances. HugeCTR also leverages <a href="https://developer.nvidia.com/nvidia-triton-inference-server">NVIDIA Triton™ Inference Server</a> to ease workflows for data scientists and machine learning engineers when deploying models to production.</p> <strong><a target="_blank" class="cta--tert" href="https://github.com/triton-inference-server/hugectr_backend/blob/main/docs/architecture.md#hugectr-inference-framework">Learn more <span class="fas fa-angle-right fa-fw"></span></a></strong> </div> <div class="col-md-6"> <center><img alt="NVIDIA Triton™ Inference Server" class="img-responsive" src="https://d29g4g2dyqv443.cloudfront.net/sites/default/files/akamai/nvidia-merlin-hugectr-multiple-gpus-icon-630x354-2c50-responsive.svg" style="width:50%; padding-top:40px;"></center> </div> </div> </div> <br> <div class="cntnr--cw"> <div class="row"> <div class="col-md-6"> <center><img alt="HugeCTR : open-source component of NVIDIA Merlin" class="img-responsive" src="https://d29g4g2dyqv443.cloudfront.net/sites/default/files/akamai/nvidia-merlin-open-source-icon-responsive.svg" style="width:23%; padding-top:40px;"></center> </div> <div class="col-md-6"> <h3 class="h--smaller txt-clr--blck">Interoperability with Open Source</h3> <p class="content-m">Machine learning engineers and data scientists use a hybrid of methods, libraries, tools, and frameworks that often include open-source components. HugeCTR is an open-source component of <a href="https://developer.nvidia.com/nvidia-merlin">NVIDIA Merlin</a> and is designed to optimize embeddings training within recommender workflows. HugeCTR is interoperable with open source and its SOK (SparseOperationsKit) is compatible with TensorFlow Distribute Strategy and Horovod. </p> <strong><a target="_blank" class="cta--tert" href="https://github.com/NVIDIA-Merlin/HugeCTR/tree/master/sparse_operation_kit">Learn more <span class="fas fa-angle-right fa-fw"></span></a></strong> </div> </div> </div> <br> <div class="cntnr--cw"> <div class="row"> <div class="col-md-6"> <h3 class="h--smaller txt-clr--blck">Embeddings Optimization</h3> <p class="content-m">Embeddings optimization enables more experimentation, fine tuning, and better prediction at scale. HugeCTR's optimized embedding implementation is up to 8X more performant than other frameworks’ embedding layers. This optimized implementation is also made available as a TensorFlow plug-in that works seamlessly with TensorFlow and acts as a convenient drop-in replacement for the TensorFlow-native embedding layers.</p> <strong><a target="_blank" class="cta--tert" href="https://developer.nvidia.com/blog/accelerating-embedding-with-the-hugectr-tensorflow-embedding-plugin/">Learn more  <span class="fas fa-angle-right fa-fw"></span></a></strong> </div> <div class="col-md-6"> <center><img alt="HugeCTR: Embeddings optimization" class="img-responsive" src="https://d29g4g2dyqv443.cloudfront.net/sites/default/files/akamai/nvidia-merlin-100px-embeddings-icon(1).svg" style="width:23%; padding-top:40px;"></center> </div> </div> </div> </div> <!--- <section class="sct--m"> <div class="cntnr--narrow "> <h2 class="hdng--m txt-cntr">Merlin HugeCTR Performance </h2> <br> <p class="content-m txt-cntr">Accelerated training at scale with model parallel embeddings across GPUs.</p> <div class="cntnr--cw"> <div class="row"> <div class="col-md-6"> <p class="content-m">MLPerf Training v0.7 included eight different workloads covering a broad range of use cases, including recommenders. NVIDIA’s full-stack approach based on the NVIDIA A100 Tensor Core GPU resulted in the fastest commercially available solution for all workloads, including recommender training. According to MLPerf results, NVIDIA Merlin HugeCTR, coupled with a single NVIDIA DGX™ A100 system, is capable of training a DLRM network on the Criteo 1TB dataset 13.5X faster than a 4×4-node, 16-CPU cluster.</p> <strong><a class="cta--tert" href="https://github.com/NVIDIA/HugeCTR/blob/master/performance.md">Explore more benchmark details <span class="fas fa-angle-right fa-fw"></span></a></strong> </div> <div class="col-md-6"> <h3 class="hdln--s txt-clr--blck">Fastest Commercially Available System on MLPerf Recommender Benchmark</h3> <center><img alt=" MLPerf Recommender Benchmark" class="img-responsive" src="https://d29g4g2dyqv443.cloudfront.net/sites/default/files/akamai/nvidia-merlin-mlperf-dlrm-performance-2c50-p%402x.jpg" style=""/></center> </div> </div> </div> </div> </section> ---> <section class="sct--xs"> <div class="cntnr--cozy txt-cntr"> <h2 class="h--medium">Get Started with Merlin HugeCTR</h2> <p class="content-m">All <a target="_blank" href="https://developer.nvidia.com/nvidia-merlin">NVIDIA Merlin</a> components are available as open-source projects on GitHub. However, a more convenient way to make use of these components is by using Merlin HugeCTR containers from the NVIDIA NGC catalog. <a target="_blank" href="https://developer.nvidia.com/ai-hpc-containers">Containers</a> package the software application, libraries, dependencies, and runtime compilers in a self-contained environment. This way, the application environment is both portable, consistent, reproducible, and agnostic to the underlying host system software configuration.</p> </div> <br> <div class="cntnr--narrow"> <div class="row"> <div class="col-md-4 col-sm-4 txt-cntr"> <h3 class="h--smaller">Merlin Training</h3> <p class="content-m">The NGC container allows users to do preprocessing, feature engineering, and training of a deep learning-based recommender system model with HugeCTR.</p> <div class="cta-row--cntr"> <a class="cta--prim" href="https://ngc.nvidia.com/catalog/containers/nvidia:merlin:merlin-training">Pull Container from NGC</a> </div> </div> <div class="col-md-4 col-sm-4 txt-cntr"> <h3 class="h--smaller">Merlin Inference</h3> <p class="content-m">HugeCTR supports Triton Inference Server to provide GPU-accelerated inference. The NGC container enables users to deploy Merlin <a href="https://developer.nvidia.com/merlin/nvtabular">NVTabular</a> workflows and HugeCTR models to Triton Inference Server for production.</p> <div class="cta-row--cntr"> <a class="cta--prim" href="https://ngc.nvidia.com/catalog/containers/nvidia:merlin:merlin-inference">Pull Container from NGC</a> </div> </div> <div class="col-md-4 col-sm-4 txt-cntr"> <h3 class="h--smaller">HugeCTR on GitHub</h3> <p class="content-m">The GitHub repo helps users get started with HugeCTR and quickly train a model using a Python interface. Available resources include documentation, tutorials, examples, and notebooks.</p> <div class="cta-row--cntr"> <a class="cta--prim" href="https://github.com/NVIDIA/HugeCTR">View GitHub Repo</a> </div> </div> </div> </div> </section> <br> <section class="sct--xs"> <div class="cntnr--cozy txt-cntr"> <h2 class="h--medium">Built on NVIDIA AI</h2> <p class="content-m">NVIDIA AI empowers millions of hands-on practitioners and thousands of companies to use the NVIDIA AI Platform to accelerate their workloads. <a target="_blank" href="https://github.com/NVIDIA-Merlin/Merlin">NVIDIA Merlin</a>, is part of the NVIDIA AI Platform. NVIDIA Merlin was built upon and leverages additional NVIDIA AI software within the platform.</p> </div> <div class="cntnr--narrow txt-cntr"> <div class="row"> <div class="col-md-4"> <h3 class="h--smaller">RAPIDS</h3> <p class="p--medium">RAPIDS is a suite of open source software libraries and APIs that enables end-to-end data science and analytics pipelines entirely on GPUs.</p> <br> <p class="p--medium"><b>Try it Today:</b></p> <a target="_blank" href="https://github.com/rapidsai">GitHub</a> </div> <div class="col-md-4"> <h3 class="h--smaller">cuDF</h3> <p class="p--medium">cuDF i is a Python GPU DataFrame library for loading, joining, aggregating, filtering, and manipulating data.</p> <br> <p class="p--medium"><b>Try it Today:</b></p> <a target="_blank" href="https://github.com/rapidsai/cudf">GitHub</a> </div> <div class="col-md-4"> <h3 class="h--smaller">NVIDIA Triton Inference Server</h3> <p class="p--medium">Take advantage of NVIDIA Triton™ Inference Server to run inference efficiently on GPUs by maximizing throughput with the right combination of latency and GPU utilization.</p> <br> <p class="p--medium"><b>Try it Today:</b></p> <a target="_blank" href="https://github.com/triton-inference-server/server">GitHub</a> </div> </div> </div> </section> <div class="cntnr--cozy txt-cntr"> <h2 class="h--medium">Merlin HugeCTR Resources</h2> <p class="content-m"><a href="https://resources.nvidia.com/l/en-us-merlin">Explore all Merlin resources.</a></p> </div> <div class="cntnr--narrow"> <div class="row"> <div class="col-md-3 col-sm-3"> <div class="card"> <div class="card-cntnt-cntnr"> <div> <h3 class="h--smallest txt-clr--blck">Tencent and Merlin HugeCTR </h3> <p class="content-m">Learn how Tencent deployed their real advertising recommendation training with Merlin and achieved more than 7X speedup over the original TensorFlow solution on the same GPU platform.</p> </div> <br> <br> <br> <br> <a style="font-size:14px;" class="" href="https://www.nvidia.com/en-us/on-demand/session/gtcspring21-s31820/">Watch the On-Demand <br> GTC Session<span class="fas fa-angle-right fa-fw"></span></a> </div> </div> </div> <div class="col-md-3 col-sm-3"> <div class="card"> <div class="card-cntnt-cntnr"> <div> <h3 class="h--smallest txt-clr--blck">GPU Accelerated Recommender Systems Training and Inference</h3> <p class="content-m">In this ACM RecSys 2022 accepted submission, learn about NVIDIA Merlin HugeCTR, a framework for click through rate estimation that is optimized for training and inference. It also enables training at scale with model-parallel embeddings and data-parallel neural networks.</p> </div> <a class="cta--tert" href="https://resources.nvidia.com/en-us-merlin/merlin-paper-huge-ctr?lx=97GH0Q">Explore now <span class="fas fa-angle-right fa-fw"></span></a> </div> </div> </div> <div class="col-md-3 col-sm-3"> <div class="card"> <div class="card-cntnt-cntnr"> <div> <h3 class="h--smallest txt-clr--blck">Best Practices from Tencent</h3> <p class="content-m">Discover insights, advice, and best practices about leading the design and development of Tencent's deep learning recommendations system.</p> </div> <a class="cta--tert" href="https://medium.com/nvidia-merlin/leading-design-and-development-of-the-advertising-recommender-system-at-tencent-an-interview-with-37f1eed898a7">Read interview <span class="fas fa-angle-right fa-fw"></span></a> </div> </div> </div> <div class="col-md-3 col-sm-3"> <div class="card"> <div class="card-cntnt-cntnr"> <div> <h3 class="h--smallest txt-clr--blck">Meituan and Merlin HugeCTR</h3> <p class="content-m">Learn how Meituan optimizes their machine learning platform by building a high-performance deep learning training framework deployed on CPU and GPU clusters.</p> </div> <a class="cta--tert" href="https://medium.com/nvidia-merlin/optimizing-meituans-machine-learning-platform-an-interview-with-jun-huang-7e046143131f">Read interview <span class="fas fa-angle-right fa-fw"></span></a> </div> </div> </div> </div> </div> </section> <section class="sct--xs sct--drk-gry4"> <div class="cntnr--cw"> <p class="content-l text-white txt-cntr">HugeCTR is available to download from the NVIDIA NGC catalog or from the <a target="_blank" href="https://github.com/NVIDIA/HugeCTR">GitHub repository</a>.</p> <center><a class="cta--prim" href="https://ngc.nvidia.com/catalog/containers/nvidia:merlin:merlin-training" role="button" target="blank">Download from NGC</a></center> <br> </div> </section> <script> document.addEventListener('DOMContentLoaded', () => { const allLinks = document.querySelectorAll('.dz3-main-section.dz-new-theme a'); allLinks.forEach((link) => { let hasIcon = link.querySelector('span.fas'); if(hasIcon) { link.classList.add('has-cta-icon'); } }); }); </script> </main> <div id='footer' class='mt-auto'></div> <script src="https://code.jquery.com/jquery-3.2.1.slim.min.js" integrity="sha384-KJ3o2DKtIkvYIK3UENzmM7KCkRr/rE9/Qpg6aAZGJwFDMVNA/GpGFF93hXpG5KkN" crossorigin="anonymous"></script> <script src="https://dirms4qsy6412.cloudfront.net/assets/devzone3/new/popper.min-a9eb3f3101919a18965114cfdcd0138652ec03b2b58cfb26806f9a256564c858.js"></script> <script src="https://dirms4qsy6412.cloudfront.net/assets/feed-aggregator/feed-aggregator-7f147443abc2d1300a239c29e4ba3ca0d0d2eb0dc66b608765e2b3be50e18e10.js"></script> <script src="https://dirms4qsy6412.cloudfront.net/assets/devzone3/new/dist/dz3-new-bundle-11f473650a558402a2733b7bb4d6133e28814892ec0527381c9144f3499b8d60.js"></script> <script src="https://dirms4qsy6412.cloudfront.net/assets/twentytwenty/js/jquery.event.move-16041d2e384b513c1b202af51fc404a0643b8c38ff823bb4326520ad5a82b761.js"></script> <script src="https://dirms4qsy6412.cloudfront.net/assets/twentytwenty/js/jquery.twentytwenty-835622257095d5bd0719a5484d68213ccc8708a321dd3deded777d1e6623b499.js"></script> <script> const template = document.createElement('template'); template.innerHTML = ` <style> @import "https://dirms4qsy6412.cloudfront.net/assets/feed-aggregator/feed-aggregator-9ace7521871242143cb35fa86d5be702c4dacb409600041fa6a5b14fa2a71dde.css"; .feed-aggregator-component .card { box-shadow: 0 4px 5px 0 rgba(0,0,0,0.14), 0 1px 10px 0 rgba(0,0,0,0.12), 0 2px 4px -1px rgba(0,0,0,0.3) !important; } .feed-aggregator-component .card:hover { box-shadow: 0 0 8px 0 rgba(0,0,0,0.13), 0 14px 32px 5px rgba(0,0,0,0.13) !important; } </style> <div class="feed-aggregator-component"></div> `; const hosts = { 'en': 'https://developer.nvidia.com/blog', 'cn': 'https://developer.nvidia.com/zh-cn/blog', } class FeedAggregatorElement extends HTMLElement { constructor() { super(); this._shadowRoot = this.attachShadow({ 'mode': 'open' }); this._shadowRoot.appendChild(template.content.cloneNode(true)); } connectedCallback() { const categories = this.getAttribute('categories'); const tags = this.getAttribute('tags'); const perPage = this.getAttribute('per-page'); const excludedTags = this.getAttribute('excluded-tags'); let locale = this.getAttribute('locale'); if (!locale) { locale = 'en'; } let targetElement = this._shadowRoot.querySelector(".feed-aggregator-component"); let feed = { id: 'blog', host: hosts[locale], type: 'json', minCount: 2, }; if (categories && categories !== 'all') { feed['category_ids'] = categories.split(','); } if (tags && tags !== 'all') { feed['tag_ids'] = tags.split(','); } if(excludedTags && excludedTags !== 'null'){ feed['excluded_tag_ids'] = excludedTags.split(','); } document.addEventListener("DOMContentLoaded", function () { new FeedAggregator({ target: targetElement, props: { count: perPage, openInNewTab: true, showExcerpts: true, feeds: [feed] } }); }) } } window.customElements.define('feed-aggregator', FeedAggregatorElement); </script> <template id='application-button-template'> <style> @import "https://dirms4qsy6412.cloudfront.net/assets/application-button/application-button-68ca7e1e3aa49ec79169d49226e34ee0c341d27a15a38b28ce975cb2467e123e.css"; </style> <a href='' class='nvidia-application-button'>Join now</a> </template> <script> async function fetchMembershipState () { const userInfo = await fetch('/api/user'); const {status} = userInfo; if (status === 401) { let error = new Error('Unauthorized'); error.statusCode = status; throw error; } // TODO: Figure out how to get DZ4 program // Fetch page info. const {pathname} = location; const pageInfo = await fetch(`${pathname}.json`); const pageData = await pageInfo.json(); // Fetch membership info return pageData; } const initApplicationButton = (element) => { const linkElement = element.querySelector('a'); fetchMembershipState() .then(data => { console.log(data); }) .catch(error => { switch (error.statusCode) { default: linkElement.innerHTML = 'Login'; linkElement.href = '/login'; } }); }; class NvidiaApplicationButton extends HTMLElement { constructor() { const template = document.getElementById('application-button-template'); super(); this._shadowRoot = this.attachShadow({ 'mode': 'open' }); this._shadowRoot.appendChild(template.content.cloneNode(true)); } connectedCallback() { const element = this._shadowRoot; document.addEventListener('DOMContentLoaded', () => { initApplicationButton(element); }); } } window.customElements.define('nv-application-button', NvidiaApplicationButton); </script> <template id='application-text-template'> <p></p> </template> <script> class NvidiaApplicationText extends HTMLElement { constructor() { const template = document.getElementById('application-text-template'); super(); this._shadowRoot = this.attachShadow({ 'mode': 'open' }); this._shadowRoot.appendChild(template.content.cloneNode(true)); } connectedCallback() { } } window.customElements.define('nv-application-text', NvidiaApplicationText); </script> <template id='nv-sf-form-validator-template'> <script src="https://dirms4qsy6412.cloudfront.net/assets/sf-validation/moment-620a5949fff0ad37198f07464b91d7b7c110ecdb6f94ca90ca7d2e1b471f1da8.js"></script> <script src="https://dirms4qsy6412.cloudfront.net/assets/sf-validation/validate.min-2160a65c1b5d4a5966544ad25af8fe99f11c636a99c516fee6c7afd3b1f21409.js"></script> <p></p> </template> <script> class NvidiaSalesforceValidator extends HTMLElement { constructor() { const template = document.getElementById('nv-sf-form-validator-template'); super(); this._shadowRoot = this.attachShadow({'mode': 'open'}); this._shadowRoot.appendChild(template.content.cloneNode(true)); } initComponent() { if (!window.sfv) { return; } validate.extend(validate.validators.datetime, { parse: function (value, options) { if (moment(value, options.format, true).isValid()) { return +moment.utc(value); } }, format: function (value, options) { var format = options.dateOnly ? "MM/DD/YYYY" : "MM/DD/YYYY hh:mm"; return moment.utc(value).format(format); } }); function showErrors(errors) { $.each(errors, function (index, element) { $('input[name="' + errors[index]['attribute'] + '"]').each(function (i, e) { var errorMessage = errors[index]['options']['message']; $('<div class="js-validation-errors">' + errorMessage + '</div>').insertAfter(e); }).focus(); }); } function isValidForm(form, constraints) { var errors = validate(form, constraints, {format: "detailed"}); if (errors) { showErrors(errors); return false; } return true; } $.each(window.sfv, function (index, element) { $(element.target).on('click', function (event) { $('.js-validation-errors').remove(); if (!isValidForm(element.form, element.constraints)) { event.preventDefault(); } }); }); } connectedCallback() { document.addEventListener('DOMContentLoaded', () => { this.initComponent(); }); } } window.customElements.define('nv-sf-form-validator', NvidiaSalesforceValidator); </script> <script src="https://dirms4qsy6412.cloudfront.net/assets/horizontal-chart/d3.v4.min-41cfecdf7c41476e805de7afacf4aacdd1a4be6947fbecf95217e947ebc2faf5.js"></script> <script src="https://dirms4qsy6412.cloudfront.net/assets/horizontal-chart/visualize-d-06443fdef48364af6635f0d1d3535da26910671f6f6a680c531eff0e54ed595f.js"></script> <template id="chart-template"> <style> @import "https://dirms4qsy6412.cloudfront.net/assets/devzone3/modules/nvidia_tokens/nvidia-charts-a459e90d273ab4f8b282e0f5fef607074b5fc7cbb5f8d0f0e378281320e6b9c8.css"; </style> <div class="horizontal-chart-component"> <div class="chart-container"> <h4 class="chart-title"></h4> <p class="chart-subtitle"></p> <div class="legend"></div> <svg data-nvidia-chart="true" data-chart-legend=""></svg> <p class="chart-footnote"></p> </div> </div> </template> <script> function chartInit(element) { const chart = element.querySelector('svg[data-nvidia-chart]'); const isRendered = chart.getAttribute("data-rendered"); if (isRendered) { return; } const svgChart = d3.select(chart); const bars = JSON.parse(chart.dataset['chartBars']); const ticks = JSON.parse(chart.dataset['chartTicks']); const xAxisLabel = chart.dataset['xAxisLabel']; const barPadding = chart.dataset['barPadding']; const milestone = null; const isGrouped = chart.dataset['isGrouped'] === 'true'; if (isGrouped) { const legend = JSON.parse(chart.dataset['chartLegend']); createGroupedHorizontalBarChart(svgChart, bars, barPadding, legend, ticks, milestone, xAxisLabel, false); } else { createHorizontalBarChart(svgChart, bars, barPadding, ticks, xAxisLabel, "", false); } chart.dataset['rendered'] = 'true'; } $('a[data-toggle="tab"]').on("click", function (event) { setTimeout(() => { // Triggering 'resize' event to redraw charts. window.dispatchEvent(new Event('resize')); const target = jQuery(event.target).parents('.nav.nav-tabs').siblings('.tab-content').find('.tab-pane.active'); if (target.length > 0) { const svg = jQuery(target).find('horizontal-chart'); if (svg.length) { svg.each((idx, el) => { setTimeout(function () { const chartContainer = el._shadowRoot.querySelector('.chart-container'); chartInit(chartContainer); }, 300); }); } } }, 50); }); async function loadFileSource(url) { try{ const response = await fetch(url); return response.json(); }catch (e) { console.warn(`Failed to load chart data. URL: ${url}`); } return {}; } class HorizontalChartElement extends HTMLElement { constructor() { const horizontalCharTemplate = document.getElementById('chart-template'); super(); this._shadowRoot = this.attachShadow({ 'mode': 'open' }); this._shadowRoot.appendChild(horizontalCharTemplate.content.cloneNode(true)); } connectedCallback() { const url = this.getAttribute('source'); const element = this._shadowRoot; document.addEventListener("DOMContentLoaded", function () { loadFileSource(url).then(data => { const { chartTitle: title, chartSubtitle: subTitle, chartFootnote: footNote, chartId: id, isGrouped: isGrouped, legendData, barPadding, xAxisLabel, bars, ticks } = data; element.querySelector('.chart-title').innerHTML = title; // Subtitle if (subTitle) { element.querySelector('.chart-subtitle').innerHTML = subTitle; } else { element.querySelector('.chart-subtitle').remove(); } // Chart const svgElement = element.querySelector('.chart-container svg'); svgElement.id = id; const dataAttributes = [ ['isGrouped', isGrouped ? 'true' : 'false', ''], ['chartLegend', JSON.stringify(legendData), ''], ['xAxisLabel', xAxisLabel, ''], ['barPadding', barPadding, 5], ['chartBars', bars, ''], ['chartTicks', ticks, ''], ]; dataAttributes.forEach(dataAttribute => { if (dataAttribute[0] === 'chartBars' && dataAttribute[1]) { dataAttribute[1] = JSON.stringify(dataAttribute[1]); } if (dataAttribute[0] === 'chartTicks' && dataAttribute[1]) { dataAttribute[1] = JSON.stringify(dataAttribute[1]); } svgElement.dataset[dataAttribute[0]] = dataAttribute[1] ? dataAttribute[1] : dataAttribute[2]; }); // Caption if (footNote) { element.querySelector('.chart-footnote').innerHTML = footNote; } else { element.querySelector('.chart-footnote').remove(); } // Init chart const chartContainer = element.querySelector('.chart-container'); setTimeout(function () { if (jQuery(chartContainer).is(':visible')) { chartInit(chartContainer); } }, 300); }); }) } } window.customElements.define('horizontal-chart', HorizontalChartElement); </script> <script src="https://dirms4qsy6412.cloudfront.net/assets/nv-developer-menu-09b6a95e79b8d8d44b0f1ac794e39d5adac82391d128f6d4d39715826a860020.js"></script> <script> let menuLocale = 'en'; if (menuLocale == 'en') { menuLocale = 'en-US'; } function mountHeader(data = false) { let options = { baseURL: window.location.origin, signedIn: false, locale: menuLocale }; if (data) { options.secondaryMenu = data; } options.showMembershipCardLink = true; new NVDeveloperHeader({ target: document.getElementById('header'), props: options }); } function mountFooter(data = false) { let options = { menu: data, locale: menuLocale }; new NVDeveloperFooter({ target: document.getElementById('footer'), props: options }); } let url = 'd29g4g2dyqv443.cloudfront.net'; let headerMenuURL = "https://d29g4g2dyqv443.cloudfront.net/menu/en-US/header-secondary.json"; fetch(headerMenuURL) .then(response => response.json()) .then(data => { mountHeader(data); }) .catch((error) => { mountHeader(); window.nv.tracing.addError('menu', error); }); fetch(`https://${url}/menu/${menuLocale}/footer.json`) .then(response => response.json()) .then(data => { mountFooter(data); }) .catch((error) => { mountFooter(); window.nv.tracing.addError('menu', error); }); </script> <script src="https://www.datadoghq-browser-agent.com/us1/v5/datadog-rum.js"></script> <script> let silentAuthHost = 'www.nvidia.com'; let crossOriginPageUrl = `https://${silentAuthHost}/auth/hints/`; function readHint() { return new Promise((resolve) => { const { origin: targetOrigin } = new URL(crossOriginPageUrl); const iframe = document.createElement('iframe'); iframe.hidden = true; iframe.src = crossOriginPageUrl; function responseHandler(event) { if (event.origin === targetOrigin) { iframe.parentNode.removeChild(iframe); return resolve(event.data); } } window.addEventListener('message', responseHandler, { once: true }); iframe.onload = () => { iframe.contentWindow.postMessage({ type: 'read' }, targetOrigin); } document.body.appendChild(iframe); }); } function writeHint(login_hint, idp_id, timestamp, sub) { const { origin: targetOrigin } = new URL(crossOriginPageUrl); const iframe = document.createElement('iframe'); iframe.hidden = true; iframe.src = crossOriginPageUrl; iframe.onload = () => { const message = { type: 'write', login_hint, idp_id, timestamp, sub }; iframe.contentWindow.postMessage(message, targetOrigin); } document.body.appendChild(iframe); } function deleteHint() { const { origin: targetOrigin } = new URL(crossOriginPageUrl); const iframe = document.createElement('iframe'); iframe.hidden = true; iframe.src = crossOriginPageUrl; iframe.onload = () => { iframe.contentWindow.postMessage({ type: 'delete' }, targetOrigin); } document.body.appendChild(iframe); } </script> <script>_satellite.pageBottom();</script> <script src="https://dirms4qsy6412.cloudfront.net/packs/js/runtime-503119e3bfeec75056bc.js" defer="defer"></script> <script src="https://dirms4qsy6412.cloudfront.net/packs/js/692-70104789368a40f2d231.js" defer="defer"></script> <script src="https://dirms4qsy6412.cloudfront.net/packs/js/341-3761d2892158034dde54.js" defer="defer"></script> <script src="https://dirms4qsy6412.cloudfront.net/packs/js/171-72c84e9bea55d778d36a.js" defer="defer"></script> <script src="https://dirms4qsy6412.cloudfront.net/packs/js/866-f9c34b19d1b60b883caf.js" defer="defer"></script> <script src="https://dirms4qsy6412.cloudfront.net/packs/js/311-033b6299b51897e65419.js" defer="defer"></script> <script src="https://dirms4qsy6412.cloudfront.net/packs/js/252-f83b27d9f72fef366bc7.js" defer="defer"></script> <script src="https://dirms4qsy6412.cloudfront.net/packs/js/582-d6d587645f7c87937f15.js" defer="defer"></script> <script src="https://dirms4qsy6412.cloudfront.net/packs/js/900-df684e5d2e49c0841d7f.js" defer="defer"></script> <script src="https://dirms4qsy6412.cloudfront.net/packs/js/application-34e06828c108fa024662.js" defer="defer"></script> <script src="https://dirms4qsy6412.cloudfront.net/packs/js/ls_track-4ba11c63b23b3f4ff0d5.js" defer="defer"></script> </body> </html>

Pages: 1 2 3 4 5 6 7 8 9 10