<!DOCTYPE html> <html lang="en"> <head> <meta charset="utf-8"> <meta http-equiv="X-UA-Compatible" content="IE=edge"> <meta name="viewport" content="width=device-width, initial-scale=1"> <title>Performance RNN: Generating Music with Expressive Timing and Dynamics</title> <meta name="description" content="We present Performance RNN, an LSTM-based recurrent neural network designed to model polyphonic music with expressive timing and dynamics. Here’s an example..."> <!-- OpenGraph data --> <meta property="og:image" content="https://magenta.tensorflow.org/assets/performance_rnn/pianoroll.png"> <meta property="og:title" content="Performance RNN: Generating Music with Expressive Timing and Dynamics"> <meta property="og:description" content="We present Performance RNN, an LSTM-based recurrent neural network designed to model polyphonic music with expressive timing and dynamics. Here’s an example..."> <meta property="og:url" content="https://magenta.tensorflow.org/performance-rnn"> <meta property="og:site_name" content="Magenta"> <!-- Twitter Card data --> <meta name="twitter:card" content="summary"> <meta name="twitter:title" content="Performance RNN: Generating Music with Expressive Timing and Dynamics"> <meta name="twitter:description" content="We present Performance RNN, an LSTM-based recurrent neural network designed to model polyphonic music with expressive timing and dynamics. 
Here’s an example..."> <meta name="twitter:image" content="https://magenta.tensorflow.org/assets/performance_rnn/pianoroll.png"> <link rel="stylesheet" href="/css/main.css"> <link rel="canonical" href="https://magenta.tensorflow.org/performance-rnn"> <link rel="alternate" type="application/rss+xml" title="Magenta" href="https://magenta.tensorflow.org/feed.xml"> <link href="https://fonts.googleapis.com/css?family=Google+Sans:+400,500,700" media="all" rel="stylesheet"> <script src="//www.google.com/js/gweb/analytics/autotrack.js"></script> <script> new gweb.analytics.AutoTrack({ profile: 'UA-80107903-1' }); </script> </head> <body> <div class="scrim" onclick="document.body.classList.toggle('drawer-opened', false)"></div> <header> <div class="top-bar background"> <div class="top-bar-content"> <div class="logo"> <a href="/"><img src="/assets/magenta-logo.png" height="70" alt="magenta logo"></a> </div> <nav> <button class="menu-button" onclick="document.body.classList.toggle('drawer-opened', true)" aria-label="open nav menu"> <svg viewBox="0 0 18 15"> <path fill="#424242" d="M18,1.484c0,0.82-0.665,1.484-1.484,1.484H1.484C0.665,2.969,0,2.304,0,1.484l0,0C0,0.665,0.665,0,1.484,0 h15.031C17.335,0,18,0.665,18,1.484L18,1.484z"/> <path fill="#424242" d="M18,7.516C18,8.335,17.335,9,16.516,9H1.484C0.665,9,0,8.335,0,7.516l0,0c0-0.82,0.665-1.484,1.484-1.484 h15.031C17.335,6.031,18,6.696,18,7.516L18,7.516z"/> <path fill="#424242" d="M18,13.516C18,14.335,17.335,15,16.516,15H1.484C0.665,15,0,14.335,0,13.516l0,0 c0-0.82,0.665-1.484,1.484-1.484h15.031C17.335,12.031,18,12.696,18,13.516L18,13.516z"/> </svg> </button> <div class="links"> <a href="/get-started">Get Started</a> <a href="/studio">Studio</a> <a href="/ddsp-vst">DDSP-VST</a> <a href="/demos">Demos</a> <a href="/blog">Blog</a> <a href="/research">Research</a> <a href="/talks">Talks</a> <a href="/community">Community</a> </div> </nav> </div> </div> </header> <div class="drawer"> <div class="drawer-content"> <a 
href="/get-started">Get Started</a> <a href="/studio">Studio</a> <a href="/ddsp-vst">DDSP-VST</a> <a href="/demos">Demos</a> <a href="/blog">Blog</a> <a href="/research">Research</a> <a href="/talks">Talks</a> <a href="/community">Community</a> </div> </div> <div class="main"> <section class="white"> <article class="content single" itemscope itemtype="http://schema.org/BlogPosting"> <h1 class="post-title" itemprop="name headline">Performance RNN: Generating Music with Expressive Timing and Dynamics</h1> <p class="post-meta"> <time datetime="2017-06-29T06:00:00-07:00" itemprop="datePublished">Jun 29, 2017</time> <br> <a class="inverted" href=https://g.co/magenta/ian_simon>Ian Simon</a> <a class="inverted" href="https://github.com/iansimon" alt="github logo"><span class="icon icon--github"><svg viewBox="0 0 16 16"><path d="M7.999,0.431c-4.285,0-7.76,3.474-7.76,7.761 c0,3.428,2.223,6.337,5.307,7.363c0.388,0.071,0.53-0.168,0.53-0.374c0-0.184-0.007-0.672-0.01-1.32 c-2.159,0.469-2.614-1.04-2.614-1.04c-0.353-0.896-0.862-1.135-0.862-1.135c-0.705-0.481,0.053-0.472,0.053-0.472 c0.779,0.055,1.189,0.8,1.189,0.8c0.692,1.186,1.816,0.843,2.258,0.645c0.071-0.502,0.271-0.843,0.493-1.037 C4.86,11.425,3.049,10.76,3.049,7.786c0-0.847,0.302-1.54,0.799-2.082C3.768,5.507,3.501,4.718,3.924,3.65 c0,0,0.652-0.209,2.134,0.796C6.677,4.273,7.34,4.187,8,4.184c0.659,0.003,1.323,0.089,1.943,0.261 c1.482-1.004,2.132-0.796,2.132-0.796c0.423,1.068,0.157,1.857,0.077,2.054c0.497,0.542,0.798,1.235,0.798,2.082 c0,2.981-1.814,3.637-3.543,3.829c0.279,0.24,0.527,0.713,0.527,1.437c0,1.037-0.01,1.874-0.01,2.129 c0,0.208,0.14,0.449,0.534,0.373c3.081-1.028,5.302-3.935,5.302-7.362C15.76,3.906,12.285,0.431,7.999,0.431z"/></svg> </span><span class="username">iansimon</span></a> <a class="inverted" href="https://twitter.com/iansimon" alt="twitter logo"><span class="icon icon--twitter"><svg viewBox="0 0 16 16"><path 
d="M15.969,3.058c-0.586,0.26-1.217,0.436-1.878,0.515c0.675-0.405,1.194-1.045,1.438-1.809c-0.632,0.375-1.332,0.647-2.076,0.793c-0.596-0.636-1.446-1.033-2.387-1.033c-1.806,0-3.27,1.464-3.27,3.27 c0,0.256,0.029,0.506,0.085,0.745C5.163,5.404,2.753,4.102,1.14,2.124C0.859,2.607,0.698,3.168,0.698,3.767 c0,1.134,0.577,2.135,1.455,2.722C1.616,6.472,1.112,6.325,0.671,6.08c0,0.014,0,0.027,0,0.041c0,1.584,1.127,2.906,2.623,3.206 C3.02,9.402,2.731,9.442,2.433,9.442c-0.211,0-0.416-0.021-0.615-0.059c0.416,1.299,1.624,2.245,3.055,2.271 c-1.119,0.877-2.529,1.4-4.061,1.4c-0.264,0-0.524-0.015-0.78-0.046c1.447,0.928,3.166,1.469,5.013,1.469 c6.015,0,9.304-4.983,9.304-9.304c0-0.142-0.003-0.283-0.009-0.423C14.976,4.29,15.531,3.714,15.969,3.058z"/></svg> </span><span class="username">iansimon</span></a> <br/> Sageev Oore <a class="inverted" href="https://github.com/osageev" alt="github logo"><span class="icon icon--github"><svg viewBox="0 0 16 16"><path d="M7.999,0.431c-4.285,0-7.76,3.474-7.76,7.761 c0,3.428,2.223,6.337,5.307,7.363c0.388,0.071,0.53-0.168,0.53-0.374c0-0.184-0.007-0.672-0.01-1.32 c-2.159,0.469-2.614-1.04-2.614-1.04c-0.353-0.896-0.862-1.135-0.862-1.135c-0.705-0.481,0.053-0.472,0.053-0.472 c0.779,0.055,1.189,0.8,1.189,0.8c0.692,1.186,1.816,0.843,2.258,0.645c0.071-0.502,0.271-0.843,0.493-1.037 C4.86,11.425,3.049,10.76,3.049,7.786c0-0.847,0.302-1.54,0.799-2.082C3.768,5.507,3.501,4.718,3.924,3.65 c0,0,0.652-0.209,2.134,0.796C6.677,4.273,7.34,4.187,8,4.184c0.659,0.003,1.323,0.089,1.943,0.261 c1.482-1.004,2.132-0.796,2.132-0.796c0.423,1.068,0.157,1.857,0.077,2.054c0.497,0.542,0.798,1.235,0.798,2.082 c0,2.981-1.814,3.637-3.543,3.829c0.279,0.24,0.527,0.713,0.527,1.437c0,1.037-0.01,1.874-0.01,2.129 c0,0.208,0.14,0.449,0.534,0.373c3.081-1.028,5.302-3.935,5.302-7.362C15.76,3.906,12.285,0.431,7.999,0.431z"/></svg> </span><span class="username">osageev</span></a> <a class="inverted" href="https://twitter.com/osageev" alt="twitter logo"><span class="icon icon--twitter"><svg viewBox="0 0 
16 16"><path d="M15.969,3.058c-0.586,0.26-1.217,0.436-1.878,0.515c0.675-0.405,1.194-1.045,1.438-1.809c-0.632,0.375-1.332,0.647-2.076,0.793c-0.596-0.636-1.446-1.033-2.387-1.033c-1.806,0-3.27,1.464-3.27,3.27 c0,0.256,0.029,0.506,0.085,0.745C5.163,5.404,2.753,4.102,1.14,2.124C0.859,2.607,0.698,3.168,0.698,3.767 c0,1.134,0.577,2.135,1.455,2.722C1.616,6.472,1.112,6.325,0.671,6.08c0,0.014,0,0.027,0,0.041c0,1.584,1.127,2.906,2.623,3.206 C3.02,9.402,2.731,9.442,2.433,9.442c-0.211,0-0.416-0.021-0.615-0.059c0.416,1.299,1.624,2.245,3.055,2.271 c-1.119,0.877-2.529,1.4-4.061,1.4c-0.264,0-0.524-0.015-0.78-0.046c1.447,0.928,3.166,1.469,5.013,1.469 c6.015,0,9.304-4.983,9.304-9.304c0-0.142-0.003-0.283-0.009-0.423C14.976,4.29,15.531,3.714,15.969,3.058z"/></svg> </span><span class="username">osageev</span></a> <br/> </p> <div class="article-body" itemprop="articleBody"> <p>We present Performance RNN, an LSTM-based recurrent neural network designed to model polyphonic music with expressive timing and dynamics. Here’s an example generated by the model:</p> <center><div><audio src="/assets/performance_rnn/good.mp3" controls=""> </audio></div></center> <!--more--> <p>Note that this isn’t a performance of an existing piece; the model is also choosing the notes to play, “composing” a performance directly. The performances generated by the model lack the overall coherence that one might expect from a piano composition; in musical jargon, it might sound like the model is “noodling”— playing without a long-term structure. However, to our ears, the <em>local</em> characteristics of the performance (i.e. 
the phrasing within a one or two second time window) are quite expressive.</p> <p>In the remainder of this post, we describe some of the ingredients that make the model work; we believe it is the <em>training dataset</em> and <em>musical representation</em> that are most interesting, rather than the neural network architecture.</p> <h1 id="overview">Overview</h1> <p>Expressive timing and dynamics are an essential part of music. Listen to the following two clips of the same Chopin piece, the first of which has been stripped of these qualities:</p> <div class="image-audios" style="margin-bottom:30px; margin-top:30px; font-size:small"> <div style="padding-bottom:60px"> <div style="float:left"> <div style="padding-left:10px">Chopin (quantized)</div> <div><audio src="/assets/performance_rnn/chopin-quantized.mp3" controls=""> </audio></div> </div> <div style="float:right"> <div style="padding-left:10px">Chopin (performed by Sageev Oore)</div> <div><audio src="/assets/performance_rnn/chopin-unquantized.mp3" controls=""> </audio></div> </div> </div> </div> <p>The first clip is just a direct rendering of the score, but with all notes at the same volume and quantized to 16th notes. The second clip is a MIDI-recorded human performance with phrasing. Notice how the same notes lead to an entirely different musical experience. That difference motivates this work.</p> <p>Performance RNN generates expressive timing and dynamics via a stream of MIDI events. At a basic level, MIDI consists of precisely-timed <em>note-on</em> and <em>note-off</em> events, each of which specifies the pitch of the note. Note-on events also include <em>velocity</em>, or how hard to strike the note.</p> <p>These events are then imported into a standard synthesizer to create the “sound” of the piano. In other words, the model only determines which notes to play, when to play them, and how hard to strike each note. 
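</p> <p>For intuition, such an event stream can be sketched as plain Python tuples. The format and helper below are toy constructions for this post, not the model’s actual representation:</p>

```python
# Toy MIDI-style event stream (illustrative format only):
# (event_type, pitch, time_in_seconds, velocity_or_None)
events = [
    ("note_on",  60, 0.00, 80),    # middle C, struck moderately hard
    ("note_on",  64, 0.02, 60),    # E above, slightly later and softer
    ("note_off", 60, 0.48, None),  # release the C
    ("note_off", 64, 0.51, None),  # release the E
]

def note_durations(events):
    """Pair each note-on with its note-off; return {pitch: duration}."""
    starts, durations = {}, {}
    for kind, pitch, time, _velocity in events:
        if kind == "note_on":
            starts[pitch] = time
        else:
            durations[pitch] = round(time - starts.pop(pitch), 2)
    return durations
```

<p>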
It doesn’t create the audio directly.</p> <h1 id="dataset">Dataset</h1> <p>The model is trained on the <a href="http://www.piano-e-competition.com/">Yamaha e-Piano Competition dataset</a>, which contains MIDI captures of ~1400 performances by skilled pianists. <a href="http://imanmalik.com/cs/2017/06/05/neural-style.html">A prior blog post by Iman Malik</a> also found this dataset useful for learning dynamics (velocities) conditioned on notes, while in our case we model entire musical sequences with notes and dynamics.</p> <p>The Yamaha dataset possesses several characteristics which we believe make it effective in this context:</p> <ol> <li>Note timings are based on human performance rather than a score.</li> <li>Note velocities are based on human performance, i.e. how much force the performer used to strike each note.</li> <li>All of the pieces were composed for and performed on a single instrument: the piano.</li> <li>All of the pieces were repertoire selections from a classical piano competition. This implies certain statistical constraints and coherence in the dataset.</li> </ol> <p>We have also trained on a less carefully curated dataset having the first three of the above characteristics, with some success. Thus far, however, samples generated by models trained on the Yamaha dataset have been superior.</p> <h1 id="representation">Representation</h1> <div style="text-align:center;margin-bottom:30px"> <img src="/assets/performance_rnn/pianoroll.png" width="800" /> </div> <p>Our performance representation is a MIDI-like stream of musical events. Specifically, we use the following set of events:</p> <ul> <li>128 <strong>note-on</strong> events, one for each of the 128 MIDI pitches. These events start a new note.</li> <li>128 <strong>note-off</strong> events, one for each of the 128 MIDI pitches. These events release a note.</li> <li>100 <strong>time-shift</strong> events in increments of 10 ms up to 1 second.
These events move forward in time to the next note event.</li> <li>32 <strong>velocity</strong> events, corresponding to MIDI velocities quantized into 32 bins. These events change the velocity applied to subsequent notes.</li> </ul> <p>The neural network operates on a one-hot encoding over these 388 different events. A typical 30-second clip might contain ~1200 such one-hot vectors.</p> <p>It’s worth going into some more detail on the timing representation. <a href="https://github.com/tensorflow/magenta/tree/main/magenta/models/melody_rnn">Previous</a> <a href="https://github.com/tensorflow/magenta/tree/main/magenta/models/polyphony_rnn">Magenta</a> <a href="https://github.com/tensorflow/magenta/tree/main/magenta/models/pianoroll_rnn_nade">models</a> used a fixed metrical grid where a) output was generated for every time step, and b) the step size was tied to a fixed meter e.g. a 16th note at a particular tempo. Here, we discard both of those conventions: a time “step” is now a fixed absolute size (10 ms), and the model can skip forward in time to the next note event. This fine quantization is able to capture more expressiveness in note timings. And the sequence representation uses many more events in sections with high note density, which matches our intuition.</p> <p><strong>One way to think about this performance representation is as a compressed version of a fixed step size representation</strong>, where we skip over all steps that consist of “just hold whatever notes you were already playing and don’t play any new ones”. 
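</p> <p>Concretely, the 388-event vocabulary can be laid out as a flat index space, as in the sketch below; the ordering of the four event ranges here is an illustrative assumption, not necessarily the one used in the released code:</p>

```python
# Sketch of the 388-event vocabulary as a flat index space. The
# ordering of the four ranges below is an assumption for illustration.
def event_to_index(kind, value):
    if kind == "note_on":        # 128 pitches -> indices 0..127
        return value
    if kind == "note_off":       # 128 pitches -> indices 128..255
        return 128 + value
    if kind == "time_shift":     # 1..100 steps of 10 ms -> indices 256..355
        return 256 + (value - 1)
    if kind == "velocity":       # 32 quantized bins -> indices 356..387
        return 356 + value
    raise ValueError("unknown event kind: %s" % kind)

def one_hot(index, size=388):
    """The one-hot vector the network actually consumes."""
    vec = [0.0] * size
    vec[index] = 1.0
    return vec
```

<p>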
<a href="https://highnoongmt.wordpress.com/2017/06/14/even-more-endless-music-sessions/">As observed by Bob Sturm</a>, this frees the model from having to learn to repeat those steps the desired number of times.</p> <h1 id="preprocessing">Preprocessing</h1> <p>To create additional training examples, we apply time-stretching (making each performance up to 5% faster or slower) and transposition (raising or lowering the pitch of each performance by up to a major third).</p> <p>We also split each performance into 30-second segments to keep each example of manageable size. We find that the model is still capable of generating longer performances without falling over, though of course these performances have little-to-no long-term structure. Here’s a 5-minute performance:</p> <center><div><audio src="/assets/performance_rnn/long.mp3" controls=""> </audio></div></center> <h1 id="more-examples">More Examples</h1> <p>The performance at the top of the page is one of the better ones from Performance RNN, but almost all samples from the model tend to have interesting moments. 
Here are some additional performances generated by the model:</p> <div class="image-audios" style="margin-bottom:30px; margin-top:30px; font-size:small"> <div style="padding-bottom:60px"> <div style="float:left"> <div><audio src="/assets/performance_rnn/1.mp3" controls=""> </audio></div> </div> <div style="float:right"> <div><audio src="/assets/performance_rnn/2.mp3" controls=""> </audio></div> </div> </div> <div style="padding-bottom:60px"> <div style="float:left"> <div><audio src="/assets/performance_rnn/3.mp3" controls=""> </audio></div> </div> <div style="float:right"> <div><audio src="/assets/performance_rnn/4.mp3" controls=""> </audio></div> </div> </div> <div style="padding-bottom:60px"> <div style="float:left"> <div><audio src="/assets/performance_rnn/5.mp3" controls=""> </audio></div> </div> <div style="float:right"> <div><audio src="/assets/performance_rnn/6.mp3" controls=""> </audio></div> </div> </div> <div style="padding-bottom:60px"> <div style="float:left"> <div><audio src="/assets/performance_rnn/7.mp3" controls=""> </audio></div> </div> <div style="float:right"> <div><audio src="/assets/performance_rnn/8.mp3" controls=""> </audio></div> </div> </div> </div> <h1 id="temperature">Temperature</h1> <p>Can we control the output of the model at all? Generally, this is an open research question; however, one typical knob available in such models is a parameter referred to as <em>temperature</em> that affects the randomness of the samples. A temperature of 1.0 uses the model’s predicted event distribution as is. This is the setting used for all previous examples in this post.</p> <p>Decreasing temperature reduces the randomness of the event distribution, which can make performances sound repetitive. For small decreases this may be an improvement, as some repetition is natural in music. 
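</p> <p>Temperature is applied to the model’s output logits before sampling. The following is a generic sketch of the standard technique, not Magenta’s exact implementation:</p>

```python
import math
import random

def sample_event(logits, temperature=1.0):
    """Sample an event index after scaling logits by 1/temperature.

    temperature=1.0 leaves the predicted distribution unchanged; lower
    values sharpen it, higher values flatten it toward uniform.
    """
    scaled = [l / temperature for l in logits]
    peak = max(scaled)                     # subtract max for stability
    exps = [math.exp(l - peak) for l in scaled]
    r = random.random() * sum(exps)
    cumulative = 0.0
    for index, e in enumerate(exps):
        cumulative += e
        if r < cumulative:
            return index
    return len(exps) - 1
```

<p>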
At 0.8, however, the performance seems to overly fixate on a few patterns:</p> <div class="image-audios" style="margin-bottom:30px; margin-top:30px; font-size:small"> <div style="padding-bottom:60px"> <div style="float:left"> <div style="padding-left:10px">Temperature = 0.9</div> <div><audio src="/assets/performance_rnn/temp_09.mp3" controls=""> </audio></div> </div> <div style="float:right"> <div style="padding-left:10px">Temperature = 0.8</div> <div><audio src="/assets/performance_rnn/temp_08.mp3" controls=""> </audio></div> </div> </div> </div> <p>Increasing the temperature increases the randomness of the event distribution:</p> <div class="image-audios" style="margin-bottom:30px; margin-top:30px; font-size:small"> <div style="padding-bottom:60px"> <div style="float:left"> <div style="padding-left:10px">Temperature = 1.1</div> <div><audio src="/assets/performance_rnn/temp_11.mp3" controls=""> </audio></div> </div> <div style="float:right"> <div style="padding-left:10px">Temperature = 1.2</div> <div><audio src="/assets/performance_rnn/temp_12.mp3" controls=""> </audio></div> </div> </div> </div> <p>Here’s what happens if we increase the temperature even further:</p> <div class="image-audios" style="margin-bottom:30px; margin-top:30px; font-size:small"> <div style="padding-bottom:60px"> <div style="float:left"> <div style="padding-left:10px">Temperature = 1.5</div> <div><audio src="/assets/performance_rnn/temp_15.mp3" controls=""> </audio></div> </div> <div style="float:right"> <div style="padding-left:10px">Temperature = 2.0</div> <div><audio src="/assets/performance_rnn/temp_20.mp3" controls=""> </audio></div> </div> </div> </div> <h1 id="try-it-out">Try It Out!</h1> <p>We have released the code for Performance RNN in our <a href="https://github.com/tensorflow/magenta/tree/main/magenta/models/performance_rnn">open-source Magenta repository</a>, along with two pretrained models: <a 
href="http://download.magenta.tensorflow.org/models/performance_with_dynamics.mag">one with dynamics</a>, and <a href="http://download.magenta.tensorflow.org/models/performance.mag">one without</a>. Magenta installation instructions are <a href="https://github.com/tensorflow/magenta#installation">here</a>.</p> <p>Let us know what you think in our <a href="http://groups.google.com/a/tensorflow.org/forum/#!forum/magenta-discuss">discussion group</a>, especially if you create any interesting samples of your own.</p> <p>An arXiv paper with more details is forthcoming. In the meantime, if you’d like to cite this work, please cite this blog post as</p> <div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>Ian Simon and Sageev Oore. "Performance RNN: Generating Music with Expressive Timing and Dynamics." Magenta Blog, 2017. https://magenta.tensorflow.org/performance-rnn </code></pre></div></div> <p>to differentiate it from the other music generation models released by Magenta. You can also use the following BibTeX entry:</p> <div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>@misc{performance-rnn-2017, author = {Ian Simon and Sageev Oore}, title = { Performance RNN: Generating Music with Expressive Timing and Dynamics }, journal = {Magenta Blog}, type = {Blog}, year = {2017}, howpublished = {\url{https://magenta.tensorflow.org/performance-rnn}} } </code></pre></div></div> <h1 id="acknowledgements">Acknowledgements</h1> <p>We thank Sander Dieleman for discussion and pointing us to the Yamaha dataset, and Kory Mathewson, David Ha, and Doug Eck for many helpful suggestions when writing this post. 
We thank Adam Roberts for discussion and much technical assistance.</p> </div> </article> </section> </div> <footer> <div class="footer-content"> <div class="logo"> <a href="https://ai.google/" target="_blank" rel="noopener" title="Google AI"> <svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 157.2 40.88"><defs><style>.cls-1{fill:none;}.cls-2{fill:#80868b;}.cls-3{fill:#80868b;}.cls-4{fill:#80868b;}.cls-5{fill:#80868b;}.cls-6{fill:#80868b;}</style></defs><g id="Running_copy" data-name="Running copy"><path class="cls-1" d="M82.91,16.35A4.8,4.8,0,0,0,79.29,18a5.66,5.66,0,0,0-1.49,4,5.53,5.53,0,0,0,1.49,3.94,4.78,4.78,0,0,0,3.62,1.58,4.47,4.47,0,0,0,3.49-1.58A5.7,5.7,0,0,0,87.81,22a5.84,5.84,0,0,0-1.41-4A4.48,4.48,0,0,0,82.91,16.35Z"></path><path class="cls-1" d="M42.8,16.35a4.92,4.92,0,0,0-3.66,1.57,5.49,5.49,0,0,0-1.51,4,5.52,5.52,0,0,0,1.52,4,5,5,0,0,0,7.3,0,5.48,5.48,0,0,0,1.53-4,5.49,5.49,0,0,0-1.51-4A4.93,4.93,0,0,0,42.8,16.35Z"></path><path class="cls-1" d="M62.89,16.35a4.93,4.93,0,0,0-3.67,1.57,5.53,5.53,0,0,0-1.51,4,5.48,5.48,0,0,0,1.53,4,5,5,0,0,0,7.3,0,5.48,5.48,0,0,0,1.53-4,5.49,5.49,0,0,0-1.51-4A4.93,4.93,0,0,0,62.89,16.35Z"></path><path class="cls-1" d="M111,16.82a4.15,4.15,0,0,0-2.12-.54,4.79,4.79,0,0,0-3.32,1.46,4.9,4.9,0,0,0-1.47,3.9l8.2-3.41A2.82,2.82,0,0,0,111,16.82Z"></path><rect class="cls-2" x="94.13" y="3.56" width="4.03" height="26.97"></rect><path class="cls-3" d="M42.8,12.74a9,9,0,0,0-6.53,2.62,8.83,8.83,0,0,0-2.68,6.55,8.84,8.84,0,0,0,2.68,6.56,9.46,9.46,0,0,0,13.07,0A8.83,8.83,0,0,0,52,21.91a8.82,8.82,0,0,0-2.67-6.55A9,9,0,0,0,42.8,12.74Zm3.65,13.15a5,5,0,0,1-7.3,0,5.52,5.52,0,0,1-1.52-4,5.49,5.49,0,0,1,1.51-4,5.06,5.06,0,0,1,7.33,0,5.49,5.49,0,0,1,1.51,4A5.48,5.48,0,0,1,46.45,25.89Z"></path><path class="cls-4" 
d="M18.89,15.55v3.9h9.32a8.27,8.27,0,0,1-2.12,4.9,9.76,9.76,0,0,1-7.2,2.85,9.75,9.75,0,0,1-7.24-3,10.07,10.07,0,0,1-3-7.33,10.07,10.07,0,0,1,3-7.33,9.75,9.75,0,0,1,7.24-3,9.89,9.89,0,0,1,7,2.78l2.75-2.74a13.63,13.63,0,0,0-9.77-3.93A14.07,14.07,0,0,0,8.71,6.78,13.58,13.58,0,0,0,4.44,16.84,13.56,13.56,0,0,0,8.71,26.9a14.07,14.07,0,0,0,10.18,4.19,13.12,13.12,0,0,0,9.94-4q3.38-3.36,3.37-9.1a12.59,12.59,0,0,0-.2-2.44Z"></path><path class="cls-4" d="M87.53,14.79h-.14a5.64,5.64,0,0,0-2-1.46,6.66,6.66,0,0,0-2.83-.59,8.37,8.37,0,0,0-6.15,2.69A9,9,0,0,0,73.77,22a8.86,8.86,0,0,0,2.64,6.46,8.36,8.36,0,0,0,6.15,2.68A5.87,5.87,0,0,0,87.39,29h.14v1.32a5.63,5.63,0,0,1-1.3,4,4.69,4.69,0,0,1-3.6,1.39,4.34,4.34,0,0,1-2.88-1A5.94,5.94,0,0,1,78,32.44L74.5,33.9a9.43,9.43,0,0,0,3,3.79,8.07,8.07,0,0,0,5.14,1.64,8.61,8.61,0,0,0,6.27-2.39c1.64-1.58,2.45-4,2.45-7.17V13.3H87.53ZM86.4,25.89a4.47,4.47,0,0,1-3.49,1.58,4.78,4.78,0,0,1-3.62-1.58A5.53,5.53,0,0,1,77.8,22a5.66,5.66,0,0,1,1.49-4,4.8,4.8,0,0,1,3.62-1.6A4.48,4.48,0,0,1,86.4,18a5.84,5.84,0,0,1,1.41,4A5.7,5.7,0,0,1,86.4,25.89Z"></path><path class="cls-5" d="M62.89,12.74a9,9,0,0,0-6.53,2.62,8.79,8.79,0,0,0-2.68,6.55,8.8,8.8,0,0,0,2.68,6.56,9.45,9.45,0,0,0,13.06,0,8.8,8.8,0,0,0,2.68-6.56,8.79,8.79,0,0,0-2.68-6.55A9,9,0,0,0,62.89,12.74Zm3.65,13.15a5,5,0,0,1-7.3,0,5.48,5.48,0,0,1-1.53-4,5.53,5.53,0,0,1,1.51-4,5.07,5.07,0,0,1,7.34,0,5.49,5.49,0,0,1,1.51,4A5.48,5.48,0,0,1,66.54,25.89Z"></path><path class="cls-3" d="M109.22,27.47a4.68,4.68,0,0,1-4.45-2.78L117,19.62l-.42-1a11,11,0,0,0-.91-1.81,10.64,10.64,0,0,0-1.49-1.86,7.14,7.14,0,0,0-2.36-1.56,7.73,7.73,0,0,0-3.1-.61,8.27,8.27,0,0,0-6.13,2.57,9.05,9.05,0,0,0-2.52,6.6,8.93,8.93,0,0,0,2.61,6.54,8.74,8.74,0,0,0,6.5,2.64,8.43,8.43,0,0,0,4.69-1.25,10.13,10.13,0,0,0,3-2.82l-3.13-2.08A5.26,5.26,0,0,1,109.22,27.47Zm-3.64-9.73a4.79,4.79,0,0,1,3.32-1.46,4.15,4.15,0,0,1,2.12.54,2.82,2.82,0,0,1,1.29,1.41l-8.2,3.41A4.9,4.9,0,0,1,105.58,17.74Z"></path><path class="cls-6" 
d="M127.47,30.54h-3.55l9.39-24.9h3.62l9.39,24.9h-3.55l-2.4-6.75H129.9Zm7.58-21L131,20.8h8.28L135.19,9.57Z"></path><path class="cls-6" d="M152.44,30.54h-3.2V5.64h3.2Z"></path></g></svg> </a> </div> <ul> <li> <a href="https://twitter.com/search?q=%23madewithmagenta" target="_blank" rel="noopener"> Twitter </a> </li> <li> <a href="/blog" target="_blank" rel="noopener"> Blog </a> </li> <li> <a href="https://github.com/tensorflow/magenta" target="_blank" rel="noopener"> GitHub </a> </li> <li> <a href="https://www.google.com/policies/privacy/" target="_blank" rel="noopener"> Privacy </a> </li> <li> <a href="https://www.google.com/policies/terms/" target="_blank" rel="noopener"> Terms </a> </li> </ul> </div> </footer> </body> <script src="/js/main.js"></script> </html>