# How does Batch Normalization Help Optimization?

*Shibani Santurkar, Dimitris Tsipras, Andrew Ilyas, Aleksander Mądry • Nov 26, 2018*

Supervised deep learning is, by now, relatively stable from an engineering point of view. Training an image classifier on any dataset can be done with ease, and requires little of the architecture, hyperparameter, and infrastructure tinkering that was needed just a few years ago. Nevertheless, getting a precise understanding of how the different elements of the framework play their part in making deep learning stable remains a challenge.

Today, we explore this challenge in the context of batch normalization (BatchNorm), one of the most widely used tools in modern deep learning. Broadly speaking, BatchNorm is a technique that aims to whiten activation distributions by controlling the mean and standard deviation of layer outputs (across a batch of examples). Specifically, for an activation \(y_j\) of layer \(y\), we have that:

\begin{equation}
BN(y_j)^{(b)} = \gamma \cdot \left(\frac{y_j^{(b)} - \mu(y_j)}{\sigma(y_j)}\right) + \beta,
\end{equation}

where \(y_j^{(b)}\) denotes the value of the output \(y_j\) on the \(b\)-th input of a batch, \(\mu(y_j)\) and \(\sigma(y_j)\) are the mean and standard deviation of \(y_j\) over the batch of \(m\) examples, and \(\beta\) and \(\gamma\) are learned parameters controlling the mean and variance of the output.

BatchNorm is also simple to implement, and can be used as a drop-in addition to a standard deep neural net architecture:

![Standard and batch normalized network](/images/batchnorm/dropin.jpg)
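To make the formula concrete, here is a minimal sketch of the per-batch transformation in PyTorch. The function name and the small `eps` stabilizer are ours, and, unlike the real `torch.nn.BatchNorm1d`, the sketch omits the running statistics that are tracked for use at inference time.

```python
import torch

def batch_norm(y, gamma, beta, eps=1e-5):
    """Apply the BatchNorm transformation to a batch of activations y of shape (m, d)."""
    mu = y.mean(dim=0)                    # per-activation mean over the batch
    sigma = y.std(dim=0, unbiased=False)  # per-activation standard deviation over the batch
    y_hat = (y - mu) / (sigma + eps)      # whitened activations
    return gamma * y_hat + beta           # learned rescaling and shift
```

In practice one rarely writes this by hand: the drop-in modules `torch.nn.BatchNorm1d` and `torch.nn.BatchNorm2d` are simply inserted between a layer and its nonlinearity, as in the figure above.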
Now, it turns out that neural networks with BatchNorm tend to train faster, and are less sensitive to the choice of hyperparameters. Indeed, below we can see that, compared to its standard (i.e., unnormalized) variant, a VGG network with BatchNorm (on CIFAR10): (a) converges faster (even if we tune the learning rate in the unnormalized case), and (b) successfully trains even for learning rates for which the standard variant diverges.

![Performance at different learning rates](/images/batchnorm/vgg_bn_good_train.jpg)

In light of the above, it should not come as a surprise that BatchNorm has gained enormous popularity. The [BatchNorm paper](https://arxiv.org/abs/1502.03167) has over seven thousand citations (and counting), and is included by default in almost all "prepackaged" deep learning model libraries.

Despite this pervasiveness, however, we still lack a good grasp of why exactly BatchNorm works. Specifically, we don't have a concrete understanding of the mechanism that drives its effectiveness in neural network training.

## The story so far

[The original BatchNorm paper](https://arxiv.org/abs/1502.03167) motivates batch normalization by tying it to a phenomenon known as *internal covariate shift*. To understand this phenomenon, recall that training a deep network can be viewed as solving a collection of separate optimization problems, each one corresponding to training a different layer:

![An optimization problem at each layer](/images/batchnorm/layerbased.jpg)

Now, during training, each step involves updating each of the layers simultaneously. As a result, updates to earlier layers cause changes in the input distributions of later layers. This implies that the optimization problems solved by subsequent layers change at each step.

These changes are what is referred to as *internal covariate shift*.

The hypothesis put forth in the BatchNorm paper is that such constant changes in layers' input distributions force the corresponding optimization processes to continually adapt, thereby hampering convergence. Batch normalization was proposed exactly to alleviate this effect, i.e., to reduce internal covariate shift (by controlling the mean and variance of input distributions), thus allowing for faster convergence.

### A closer look at internal covariate shift

At first glance, the above motivation seems intuitive and appealing. But is it actually the main factor behind BatchNorm's effectiveness? In our [recent paper](https://arxiv.org/abs/1805.11604), we investigate this question.

Our starting point is to take a closer look at training a deep neural network: how does training with and without BatchNorm differ? In particular, can we capture the resulting reduction in internal covariate shift?

To this end, let us examine both the standard and batch normalized variants of the VGG network, and plot a histogram of activations at various layers. (Note that each distribution constitutes the input of the optimization problem corresponding to the subsequent layer.)
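One way to collect such histograms (a sketch rather than our exact script; the choice of layers to record is arbitrary) is to register forward hooks on a few modules and stash their outputs at fixed training iterations:

```python
import torch

def record_activations(model, layer_names):
    """Register forward hooks that stash the outputs of the named sub-modules."""
    store, hooks = {}, []
    for name, module in model.named_modules():
        if name in layer_names:
            def hook(mod, inputs, output, name=name):
                store.setdefault(name, []).append(output.detach().flatten().cpu())
            hooks.append(module.register_forward_hook(hook))
    return store, hooks

# Usage (hypothetical layer names): call during training at the iterations of interest,
# then plot a histogram of store["features.10"][-1] for each network variant.
```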
![Activations of BatchNorm vs unnormalized network](/images/batchnorm/vgg_bn_good.jpg)

We can see that the activations of the batch normalized network seem controlled, and relatively stable between iterations. This is as expected. What is less expected, however, is that when we look at the network without BatchNorm, we don't see too much instability either. Even without explicitly controlling the mean and variance, the activations seem fairly consistent.

So, is reduction of internal covariate shift really the key phenomenon at play here?

To tackle this question, let us first go back and take another look at our core premise: is the notion of internal covariate shift we discussed above actually detrimental to optimization?

## Internal covariate shift & optimization: a simple experiment

Thus far, our focus was on reducing internal covariate shift (or removing it altogether): what if instead, we actually *increased* it? In particular, in a *batch normalized* network, what if we add non-stationary Gaussian noise (with a randomly sampled mean and variance at each iteration) to the *outputs* of the BatchNorm layer? (Note that by doing this, we explicitly remove the control that BatchNorm typically has over the mean and variance of layer inputs.)
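Concretely, the perturbation amounts to wrapping the BatchNorm layer in a module along the following lines (a sketch; the class name and the noise magnitudes are ours, not the exact values used in the paper):

```python
import torch
import torch.nn as nn

class NoisyBatchNorm2d(nn.Module):
    """BatchNorm whose output is shifted and rescaled by fresh random noise at every step."""
    def __init__(self, num_channels, noise_std=1.0):
        super().__init__()
        self.bn = nn.BatchNorm2d(num_channels)
        self.noise_std = noise_std

    def forward(self, x):
        out = self.bn(x)
        if self.training:
            # Non-stationary noise: a new per-channel mean and scale at each iteration,
            # which removes the distributional stability BatchNorm would otherwise provide.
            shape = (1, out.size(1), 1, 1)
            rand_mean = torch.randn(shape, device=out.device) * self.noise_std
            rand_scale = 1.0 + torch.randn(shape, device=out.device) * self.noise_std
            out = out * rand_scale + rand_mean
        return out
```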
Below, we visualize the activations of such "noisy BatchNorm" (using the same methodology we used earlier):

![Activations of BatchNorm vs unnormalized vs noisy BatchNorm](/images/batchnorm/noisy_bn.jpg)

We can see that the activations of "noisy" BatchNorm are noticeably unstable, even more so than those of an unnormalized network. (Plotting the mean and variance of these distributions makes this even more apparent.)

Remarkably, however, optimization performance seems to be unaffected. Indeed, in the graph below, we see that a network with noisy BatchNorm converges significantly faster than the standard network (and around the same speed as a network with conventional batch normalization):

![Performance of BatchNorm vs unnormalized vs noisy BatchNorm](/images/batchnorm/vgg_noise_grid_perf.jpg)

The above experiment may bring into question the connection between internal covariate shift and optimization performance. So, maybe there is a better way to cast this connection?

## An optimization-based view on internal covariate shift

The original interpretation of internal covariate shift focuses on the distributional stability of layer inputs. However, since our interest revolves around optimization, perhaps there is a different notion of stability, one more closely connected to optimization, that BatchNorm enforces?

Recall that the intuition suggesting a connection between internal covariate shift and optimization was that constant changes to preceding layers interfere with a layer's convergence. Thus, a natural way to quantify these changes would be to measure the extent to which they affect the corresponding optimization problem. More specifically, given that our networks are optimized using (stochastic) first-order methods, the object of interest would be the gradient. This leads to an alternative, "optimization-oriented" notion of internal covariate shift. This notion defines internal covariate shift as the change in the gradient of a layer caused by updates to preceding layers:

**Definition.** If \(w_{1:n}\) and \(w'_{1:n}\) are the parameters of an \(n\)-layer network before and after a single gradient update (respectively), then we measure the (optimization-based) internal covariate shift at layer \(k\) as
\begin{equation}
||\nabla_{w_k}\mathcal{L}(w_{1:n}) - \nabla_{w_k} \mathcal{L}(w'_{1:k-1}, w_{k:n})||,
\end{equation}
where \(\mathcal{L}\) is the loss of the network.

Note that this definition quantifies exactly the discrepancy that was hypothesized to negatively impact optimization. Consequently, if BatchNorm is indeed reducing the impact of preceding layer updates on layer optimization, one would expect the above, optimization-based notion of internal covariate shift to be noticeably smaller in batch-normalized networks.
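One way to estimate this quantity for a single layer is sketched below; `model`, `layer_params`, `preceding_params`, `loss_fn`, `batch`, and `lr` are placeholders, and the actual experiments of course sweep over layers and training iterations.

```python
import torch

def ics_at_layer(model, layer_params, preceding_params, loss_fn, batch, lr):
    """Estimate ||grad_k(w_{1:n}) - grad_k(w'_{1:k-1}, w_{k:n})|| for one layer."""
    x, y = batch

    # Gradient of the layer's parameters before any update.
    loss = loss_fn(model(x), y)
    g_before = torch.autograd.grad(loss, layer_params, retain_graph=True)
    g_before = torch.cat([g.flatten() for g in g_before])

    # Take a single gradient step on the *preceding* layers only.
    g_prec = torch.autograd.grad(loss, preceding_params)
    with torch.no_grad():
        for p, g in zip(preceding_params, g_prec):
            p -= lr * g

    # Gradient of the same layer's parameters after the preceding layers have moved.
    loss = loss_fn(model(x), y)
    g_after = torch.cat([g.flatten() for g in torch.autograd.grad(loss, layer_params)])

    # Undo the temporary update so that training continues unchanged.
    with torch.no_grad():
        for p, g in zip(preceding_params, g_prec):
            p += lr * g

    return (g_before - g_after).norm()
```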
Surprisingly, when we measure the corresponding change in gradients (both in \(\ell_2\) norm and cosine distance), we find that this is not the case:

![Measuring internal covariate shift](/images/batchnorm/vgg_no_cs.jpg)

In fact, the change in gradients resulting from preceding layer updates appears to be virtually identical between standard and batch-normalized networks. (This persists, and is sometimes even more pronounced, for other learning rates and architectures.) Nevertheless, in all these experiments the batch-normalized network consistently achieves significantly faster convergence, as usual.

## The impact of batch normalization

The above considerations might have undermined our confidence in batch normalization as a reliable technique. But BatchNorm *is* (reliably) effective. So, can we uncover the roots of this effectiveness?

Since we were unable to substantiate the connection between reduction of internal covariate shift and the effectiveness of BatchNorm, let us take a step back and approach the question from first principles.

After all, our overarching goal is to understand how batch normalization affects training performance. It would thus be natural to directly examine the effect that BatchNorm has on the corresponding optimization *landscape*. To this end, recall that our training is performed using gradient descent, a method that draws on the first-order optimization paradigm. In this paradigm, we use the local linear approximation of the loss around the current solution to identify the best update step to take. Consequently, the performance of such algorithms is largely determined by how well this local approximation predicts the nearby loss landscape.

Let us thus take a closer look at this question and, in particular, analyze the effect of batch normalization on that predictiveness. More precisely, for a given point (solution) on the training trajectory, we explore the landscape along the direction of the current gradient (which is exactly the direction followed by the optimization process). Concretely, we want to measure the following two quantities (a sketch of how to probe them appears after the list):

1. Variation of the value of the loss:
   \begin{equation}
   \mathcal{L}(x + \eta\nabla \mathcal{L}(x)), \qquad \eta \in [0.05, 0.4].
   \end{equation}
2. Gradient predictiveness, i.e., the change of the loss gradient:
   \begin{equation}
   ||\nabla \mathcal{L}(x) - \nabla \mathcal{L}(x+\eta\nabla \mathcal{L}(x))||, \qquad \eta \in [0.05, 0.4].
   \end{equation}

(Note that the typical learning rate used to train the network is \(0.1\).)
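Both quantities can be probed with a few extra forward and backward passes at a given training step, along the following lines (a sketch: `params` is a list of the model's parameters, `loss_fn` is a closure that recomputes the loss on the current batch, and the step sizes are illustrative):

```python
import torch

def landscape_stats(params, loss_fn, etas=(0.05, 0.1, 0.2, 0.4)):
    """Probe loss variation and gradient predictiveness along the current gradient."""
    grads = torch.autograd.grad(loss_fn(), params)

    losses, grad_dists = [], []
    for eta in etas:
        # Move the parameters by eta along the gradient, as in the definitions above.
        with torch.no_grad():
            for p, g in zip(params, grads):
                p += eta * g
        new_loss = loss_fn()
        new_grads = torch.autograd.grad(new_loss, params)
        losses.append(new_loss.item())
        grad_dists.append(torch.cat(
            [(g - ng).flatten() for g, ng in zip(grads, new_grads)]).norm().item())
        # Restore the original parameters before trying the next step size.
        with torch.no_grad():
            for p, g in zip(params, grads):
                p -= eta * g
    return losses, grad_dists
```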
Below we plot the range of these quantities over the corresponding intervals at different points of the training trajectory:

![BatchNorm Loss Landscape](/images/batchnorm/landscapes.jpg)

As we can see, adding a BatchNorm layer has a *profound* impact across both our metrics.

This impact might finally hint at the possible roots of BatchNorm's effectiveness. After all, a small variability of the loss indicates that the steps taken during training are unlikely to drive the loss uncontrollably high. (This variability also reflects, in a way, the [Lipschitzness](http://www.cs.cornell.edu/courses/cs6820/2016fa/handouts/dscnt.jpeg) of the loss.) Similarly, good gradient predictiveness implies that the gradient evaluated at a given point stays relevant over longer distances, hence allowing for larger step sizes. (This predictiveness can be seen as related to the [\(\beta\)-smoothness](http://www.cs.cornell.edu/courses/cs6820/2016fa/handouts/dscnt.jpeg) of the loss.)

So, the "smoothing" effect of BatchNorm makes the optimization landscape much easier to navigate, which could explain the faster convergence and robustness to hyperparameters observed in practice.

## BatchNorm reparametrization

The above smoothing effect of BatchNorm was evident in all the experiments we performed. Can we understand, though, what is the fundamental phenomenon underlying it?

To this end, we formally analyzed a scenario where we add a single BatchNorm layer after a single layer of a deep network, and compared it to the case without BatchNorm:

![Theory setup schematic](/images/batchnorm/bn_schematic.jpg)

Note that this setup is quite general, since the input \(x\) could be the output of (an arbitrary number of) previous layers and the loss \(\mathcal{L}\) might incorporate an arbitrary number of subsequent layers.

We prove that BatchNorm effectively reparametrizes the training problem, making it more amenable to first-order methods. Specifically, batch normalization makes the optimization wrt the activations \(y\) easier. This, in turn, translates into improved (worst-case) bounds for the actual optimization problem (which is wrt the weights \(W\) and not the activations \(y\)).

More precisely, by unraveling the exact backwards pass induced by the BatchNorm layer, we show that:

**Theorem 1.** Let \(g = \nabla_y \mathcal{L}\) be the gradient of the loss \(\mathcal{L}\) wrt a batch of activations \(y\), and let \(\widehat{g} = \nabla_y \widehat{\mathcal{L}}\) be analogously defined for the network with (a single) BatchNorm layer. We have that
\begin{equation}
||\widehat{g}||^2 \leq \frac{\gamma^2}{\sigma_j^2}\left(||g||^2 - \mu(g)^2 - \frac{1}{\sqrt{m}}\langle g, \widehat{y}\rangle^2\right).
\end{equation}

So, indeed, inserting the BatchNorm layer reduces the Lipschitz constant of the loss wrt \(y\). The above bound can also be translated into a bound wrt the weights \(W\) (which is what the optimization process actually operates on) in the setting where the inputs are chosen in a worst-case manner, i.e., so as to maximize the resulting Lipschitz constant. (We consider this setting to rule out certain pathological cases, which can, in principle, arise since we make no assumptions about the input distribution. Alternatively, we could just assume the inputs are Gaussian.)

We can also corroborate our observation that batch normalization makes gradients more predictive. To this end, recall the [Taylor series](https://en.wikipedia.org/wiki/Hessian_matrix#Use_in_optimization) expansion of the loss around a point \(y\):
\begin{equation}
\mathcal{L}(y+\Delta y) = \mathcal{L}(y) + \nabla \mathcal{L}(y)^\top \Delta y + \frac{1}{2}(\Delta y)^\top H (\Delta y) + o(||\Delta y||^2),
\end{equation}
where \(H\) is the Hessian matrix of the loss, and in our case \(\Delta y\) is a step in the gradient direction. We can prove that, under some mild assumptions, adding a BatchNorm layer makes the second-order term in that expansion smaller, thus increasing the radius in which the first-order term (the gradient) is predictive:

**Theorem 2.** Let \(H\) be the Hessian matrix of the loss wrt a batch of activations \(y\) in the standard network and, again, let \(\widehat{H}\) be defined analogously for the network with (a single) BatchNorm layer inserted. We have that
\begin{equation}
\widehat{g}^\top \widehat{H} \widehat{g} \leq \frac{\gamma^2}{\sigma^2}\left( g^\top H g - \frac{1}{m\gamma}\langle g, \widehat{y} \rangle ||\widehat{g}||^2 \right).
\end{equation}

Again, this result can be translated into an analogous bound wrt \(W\) in the setting where the inputs are chosen in a worst-case manner.

## Looking forward

Our investigation so far has shed some light on the possible roots of BatchNorm's effectiveness, but there is still much to be understood. In particular, while we identified the increased smoothness of the optimization landscape as a direct result of employing BatchNorm layers, we still lack a full grasp of how this smoothness impacts the actual training process.

Moreover, in our considerations we completely ignored the positive impact that batch normalization has on generalization. Empirically, BatchNorm often improves test accuracy by a few percentage points. Can we uncover the precise mechanism behind this phenomenon?

More broadly, we hope that our work motivates us all to take a closer look at other elements of our deep learning toolkit. After all, only once we attain a meaningful and more fine-grained understanding of that toolkit will we know the ultimate power and fundamental limitations of the techniques we use.

*P.S. We will be at NeurIPS'18! Check out our [3-minute video](https://www.youtube.com/watch?v=ZOabsYbmBRM&feature=youtu.be) and our [talk](https://nips.cc/Conferences/2018/Schedule?showEvent=12601) on Tuesday.*