<!DOCTYPE html> <html lang="en"> <head> <meta charset="utf-8"> <meta name="viewport" content="width=device-width, initial-scale=1"> <meta name="description" content="Keras documentation"> <meta name="author" content="Keras Team"> <link rel="shortcut icon" href="https://keras.io/img/favicon.ico"> <link rel="canonical" href="https://keras.io/examples/vision/masked_image_modeling/" /> <!-- Social --> <meta property="og:title" content="Keras documentation: Masked image modeling with Autoencoders"> <meta property="og:image" content="https://keras.io/img/logo-k-keras-wb.png"> <meta name="twitter:title" content="Keras documentation: Masked image modeling with Autoencoders"> <meta name="twitter:image" content="https://keras.io/img/k-keras-social.png"> <meta name="twitter:card" content="summary"> <title>Masked image modeling with Autoencoders</title> <!-- Bootstrap core CSS --> <link href="/css/bootstrap.min.css" rel="stylesheet"> <!-- Custom fonts for this template --> <link href="https://fonts.googleapis.com/css2?family=Open+Sans:wght@400;600;700;800&display=swap" rel="stylesheet"> <!-- Custom styles for this template --> <link href="/css/docs.css" rel="stylesheet"> <link href="/css/monokai.css" rel="stylesheet"> <!-- Google Tag Manager --> <script>(function(w,d,s,l,i){w[l]=w[l]||[];w[l].push({'gtm.start': new Date().getTime(),event:'gtm.js'});var f=d.getElementsByTagName(s)[0], j=d.createElement(s),dl=l!='dataLayer'?'&l='+l:'';j.async=true;j.src= 'https://www.googletagmanager.com/gtm.js?id='+i+dl;f.parentNode.insertBefore(j,f); })(window,document,'script','dataLayer','GTM-5DNGF4N'); </script> <script> (function(i,s,o,g,r,a,m){i['GoogleAnalyticsObject']=r;i[r]=i[r]||function(){ (i[r].q=i[r].q||[]).push(arguments)},i[r].l=1*new Date();a=s.createElement(o), m=s.getElementsByTagName(o)[0];a.async=1;a.src=g;m.parentNode.insertBefore(a,m) })(window,document,'script','https://www.google-analytics.com/analytics.js','ga'); ga('create', 'UA-175165319-128', 'auto'); ga('send', 
'pageview'); </script> <!-- End Google Tag Manager --> <script async defer src="https://buttons.github.io/buttons.js"></script> </head> <body> <!-- Google Tag Manager (noscript) --> <noscript><iframe src="https://www.googletagmanager.com/ns.html?id=GTM-5DNGF4N" height="0" width="0" style="display:none;visibility:hidden"></iframe></noscript> <!-- End Google Tag Manager (noscript) --> <div class='k-page'> <div class="k-nav" id="nav-menu"> <a href='/'><img src='/img/logo-small.png' class='logo-small' /></a> <div class="nav flex-column nav-pills" role="tablist" aria-orientation="vertical"> <a class="nav-link" href="/about/" role="tab" aria-selected="">About Keras</a> <a class="nav-link" href="/getting_started/" role="tab" aria-selected="">Getting started</a> <a class="nav-link" href="/guides/" role="tab" aria-selected="">Developer guides</a> <a class="nav-link active" href="/examples/" role="tab" aria-selected="">Code examples</a> <a class="nav-sublink active" href="/examples/vision/">Computer Vision</a> <a class="nav-sublink2" href="/examples/vision/image_classification_from_scratch/">Image classification from scratch</a> <a class="nav-sublink2" href="/examples/vision/mnist_convnet/">Simple MNIST convnet</a> <a class="nav-sublink2" href="/examples/vision/image_classification_efficientnet_fine_tuning/">Image classification via fine-tuning with EfficientNet</a> <a class="nav-sublink2" href="/examples/vision/image_classification_with_vision_transformer/">Image classification with Vision Transformer</a> <a class="nav-sublink2" href="/examples/vision/attention_mil_classification/">Classification using Attention-based Deep Multiple Instance Learning</a> <a class="nav-sublink2" href="/examples/vision/mlp_image_classification/">Image classification with modern MLP models</a> <a class="nav-sublink2" href="/examples/vision/mobilevit/">A mobile-friendly Transformer-based model for image classification</a> <a class="nav-sublink2" 
href="/examples/vision/xray_classification_with_tpus/">Pneumonia Classification on TPU</a> <a class="nav-sublink2" href="/examples/vision/cct/">Compact Convolutional Transformers</a> <a class="nav-sublink2" href="/examples/vision/convmixer/">Image classification with ConvMixer</a> <a class="nav-sublink2" href="/examples/vision/eanet/">Image classification with EANet (External Attention Transformer)</a> <a class="nav-sublink2" href="/examples/vision/involution/">Involutional neural networks</a> <a class="nav-sublink2" href="/examples/vision/perceiver_image_classification/">Image classification with Perceiver</a> <a class="nav-sublink2" href="/examples/vision/reptile/">Few-Shot learning with Reptile</a> <a class="nav-sublink2" href="/examples/vision/semisupervised_simclr/">Semi-supervised image classification using contrastive pretraining with SimCLR</a> <a class="nav-sublink2" href="/examples/vision/swin_transformers/">Image classification with Swin Transformers</a> <a class="nav-sublink2" href="/examples/vision/vit_small_ds/">Train a Vision Transformer on small datasets</a> <a class="nav-sublink2" href="/examples/vision/shiftvit/">A Vision Transformer without Attention</a> <a class="nav-sublink2" href="/examples/vision/image_classification_using_global_context_vision_transformer/">Image Classification using Global Context Vision Transformer</a> <a class="nav-sublink2" href="/examples/vision/temporal_latent_bottleneck/">When Recurrence meets Transformers</a> <a class="nav-sublink2" href="/examples/vision/oxford_pets_image_segmentation/">Image segmentation with a U-Net-like architecture</a> <a class="nav-sublink2" href="/examples/vision/deeplabv3_plus/">Multiclass semantic segmentation using DeepLabV3+</a> <a class="nav-sublink2" href="/examples/vision/basnet_segmentation/">Highly accurate boundaries segmentation using BASNet</a> <a class="nav-sublink2" href="/examples/vision/fully_convolutional_network/">Image Segmentation using Composable Fully-Convolutional 
Networks</a> <a class="nav-sublink2" href="/examples/vision/retinanet/">Object Detection with RetinaNet</a> <a class="nav-sublink2" href="/examples/vision/keypoint_detection/">Keypoint Detection with Transfer Learning</a> <a class="nav-sublink2" href="/examples/vision/object_detection_using_vision_transformer/">Object detection with Vision Transformers</a> <a class="nav-sublink2" href="/examples/vision/3D_image_classification/">3D image classification from CT scans</a> <a class="nav-sublink2" href="/examples/vision/depth_estimation/">Monocular depth estimation</a> <a class="nav-sublink2" href="/examples/vision/nerf/">3D volumetric rendering with NeRF</a> <a class="nav-sublink2" href="/examples/vision/pointnet_segmentation/">Point cloud segmentation with PointNet</a> <a class="nav-sublink2" href="/examples/vision/pointnet/">Point cloud classification</a> <a class="nav-sublink2" href="/examples/vision/captcha_ocr/">OCR model for reading Captchas</a> <a class="nav-sublink2" href="/examples/vision/handwriting_recognition/">Handwriting recognition</a> <a class="nav-sublink2" href="/examples/vision/autoencoder/">Convolutional autoencoder for image denoising</a> <a class="nav-sublink2" href="/examples/vision/mirnet/">Low-light image enhancement using MIRNet</a> <a class="nav-sublink2" href="/examples/vision/super_resolution_sub_pixel/">Image Super-Resolution using an Efficient Sub-Pixel CNN</a> <a class="nav-sublink2" href="/examples/vision/edsr/">Enhanced Deep Residual Networks for single-image super-resolution</a> <a class="nav-sublink2" href="/examples/vision/zero_dce/">Zero-DCE for low-light image enhancement</a> <a class="nav-sublink2" href="/examples/vision/cutmix/">CutMix data augmentation for image classification</a> <a class="nav-sublink2" href="/examples/vision/mixup/">MixUp augmentation for image classification</a> <a class="nav-sublink2" href="/examples/vision/randaugment/">RandAugment for Image Classification for Improved Robustness</a> <a 
class="nav-sublink2" href="/examples/vision/image_captioning/">Image captioning</a> <a class="nav-sublink2" href="/examples/vision/nl_image_search/">Natural language image search with a Dual Encoder</a> <a class="nav-sublink2" href="/examples/vision/visualizing_what_convnets_learn/">Visualizing what convnets learn</a> <a class="nav-sublink2" href="/examples/vision/integrated_gradients/">Model interpretability with Integrated Gradients</a> <a class="nav-sublink2" href="/examples/vision/probing_vits/">Investigating Vision Transformer representations</a> <a class="nav-sublink2" href="/examples/vision/grad_cam/">Grad-CAM class activation visualization</a> <a class="nav-sublink2" href="/examples/vision/near_dup_search/">Near-duplicate image search</a> <a class="nav-sublink2" href="/examples/vision/semantic_image_clustering/">Semantic Image Clustering</a> <a class="nav-sublink2" href="/examples/vision/siamese_contrastive/">Image similarity estimation using a Siamese Network with a contrastive loss</a> <a class="nav-sublink2" href="/examples/vision/siamese_network/">Image similarity estimation using a Siamese Network with a triplet loss</a> <a class="nav-sublink2" href="/examples/vision/metric_learning/">Metric learning for image similarity search</a> <a class="nav-sublink2" href="/examples/vision/metric_learning_tf_similarity/">Metric learning for image similarity search using TensorFlow Similarity</a> <a class="nav-sublink2" href="/examples/vision/nnclr/">Self-supervised contrastive learning with NNCLR</a> <a class="nav-sublink2" href="/examples/vision/video_classification/">Video Classification with a CNN-RNN Architecture</a> <a class="nav-sublink2" href="/examples/vision/conv_lstm/">Next-Frame Video Prediction with Convolutional LSTMs</a> <a class="nav-sublink2" href="/examples/vision/video_transformers/">Video Classification with Transformers</a> <a class="nav-sublink2" href="/examples/vision/vivit/">Video Vision Transformer</a> <a class="nav-sublink2" 
href="/examples/vision/bit/">Image Classification using BigTransfer (BiT)</a> <a class="nav-sublink2" href="/examples/vision/gradient_centralization/">Gradient Centralization for Better Training Performance</a> <a class="nav-sublink2" href="/examples/vision/token_learner/">Learning to tokenize in Vision Transformers</a> <a class="nav-sublink2" href="/examples/vision/knowledge_distillation/">Knowledge Distillation</a> <a class="nav-sublink2" href="/examples/vision/fixres/">FixRes: Fixing train-test resolution discrepancy</a> <a class="nav-sublink2" href="/examples/vision/cait/">Class Attention Image Transformers with LayerScale</a> <a class="nav-sublink2" href="/examples/vision/patch_convnet/">Augmenting convnets with aggregated attention</a> <a class="nav-sublink2" href="/examples/vision/learnable_resizer/">Learning to Resize</a> <a class="nav-sublink2" href="/examples/vision/adamatch/">Semi-supervision and domain adaptation with AdaMatch</a> <a class="nav-sublink2" href="/examples/vision/barlow_twins/">Barlow Twins for Contrastive SSL</a> <a class="nav-sublink2" href="/examples/vision/consistency_training/">Consistency training with supervision</a> <a class="nav-sublink2" href="/examples/vision/deit/">Distilling Vision Transformers</a> <a class="nav-sublink2" href="/examples/vision/focal_modulation_network/">Focal Modulation: A replacement for Self-Attention</a> <a class="nav-sublink2" href="/examples/vision/forwardforward/">Using the Forward-Forward Algorithm for Image Classification</a> <a class="nav-sublink2 active" href="/examples/vision/masked_image_modeling/">Masked image modeling with Autoencoders</a> <a class="nav-sublink2" href="/examples/vision/sam/">Segment Anything Model with 🤗Transformers</a> <a class="nav-sublink2" href="/examples/vision/segformer/">Semantic segmentation with SegFormer and Hugging Face Transformers</a> <a class="nav-sublink2" href="/examples/vision/simsiam/">Self-supervised contrastive learning with SimSiam</a> <a 
class="nav-sublink2" href="/examples/vision/supervised-contrastive-learning/">Supervised Contrastive Learning</a> <a class="nav-sublink2" href="/examples/vision/yolov8/">Efficient Object Detection with YOLOV8 and KerasCV</a> <a class="nav-sublink" href="/examples/nlp/">Natural Language Processing</a> <a class="nav-sublink" href="/examples/structured_data/">Structured Data</a> <a class="nav-sublink" href="/examples/timeseries/">Timeseries</a> <a class="nav-sublink" href="/examples/generative/">Generative Deep Learning</a> <a class="nav-sublink" href="/examples/audio/">Audio Data</a> <a class="nav-sublink" href="/examples/rl/">Reinforcement Learning</a> <a class="nav-sublink" href="/examples/graph/">Graph Data</a> <a class="nav-sublink" href="/examples/keras_recipes/">Quick Keras Recipes</a> <a class="nav-link" href="/api/" role="tab" aria-selected="">Keras 3 API documentation</a> <a class="nav-link" href="/2.18/api/" role="tab" aria-selected="">Keras 2 API documentation</a> <a class="nav-link" href="/keras_tuner/" role="tab" aria-selected="">KerasTuner: Hyperparam Tuning</a> <a class="nav-link" href="/keras_hub/" role="tab" aria-selected="">KerasHub: Pretrained Models</a> </div> </div> <div class='k-main'> <div class='k-main-top'> <script> function displayDropdownMenu() { e = document.getElementById("nav-menu"); if (e.style.display == "block") { e.style.display = "none"; } else { e.style.display = "block"; document.getElementById("dropdown-nav").style.display = "block"; } } function resetMobileUI() { if (window.innerWidth <= 840) { document.getElementById("nav-menu").style.display = "none"; document.getElementById("dropdown-nav").style.display = "block"; } else { document.getElementById("nav-menu").style.display = "block"; document.getElementById("dropdown-nav").style.display = "none"; } var navmenu = document.getElementById("nav-menu"); var menuheight = navmenu.clientHeight; var kmain = document.getElementById("k-main-id"); kmain.style.minHeight = (menuheight + 
100) + 'px'; } window.onresize = resetMobileUI; window.addEventListener("load", (event) => { resetMobileUI() }); </script> <div id='dropdown-nav' onclick="displayDropdownMenu();"> <svg viewBox="-20 -20 120 120" width="60" height="60"> <rect width="100" height="20"></rect> <rect y="30" width="100" height="20"></rect> <rect y="60" width="100" height="20"></rect> </svg> </div> <form class="bd-search d-flex align-items-center k-search-form" id="search-form"> <input type="search" class="k-search-input" id="search-input" placeholder="Search Keras documentation..." aria-label="Search Keras documentation..." autocomplete="off"> <button class="k-search-btn"> <svg width="13" height="13" viewBox="0 0 13 13"><title>search</title><path d="m4.8495 7.8226c0.82666 0 1.5262-0.29146 2.0985-0.87438 0.57232-0.58292 0.86378-1.2877 0.87438-2.1144 0.010599-0.82666-0.28086-1.5262-0.87438-2.0985-0.59352-0.57232-1.293-0.86378-2.0985-0.87438-0.8055-0.010599-1.5103 0.28086-2.1144 0.87438-0.60414 0.59352-0.8956 1.293-0.87438 2.0985 0.021197 0.8055 0.31266 1.5103 0.87438 2.1144 0.56172 0.60414 1.2665 0.8956 2.1144 0.87438zm4.4695 0.2115 3.681 3.6819-1.259 1.284-3.6817-3.7 0.0019784-0.69479-0.090043-0.098846c-0.87973 0.76087-1.92 1.1413-3.1207 1.1413-1.3553 0-2.5025-0.46363-3.4417-1.3909s-1.4088-2.0686-1.4088-3.4239c0-1.3553 0.4696-2.4966 1.4088-3.4239 0.9392-0.92727 2.0864-1.3969 3.4417-1.4088 1.3553-0.011889 2.4906 0.45771 3.406 1.4088 0.9154 0.95107 1.379 2.0924 1.3909 3.4239 0 1.2126-0.38043 2.2588-1.1413 3.1385l0.098834 0.090049z"></path></svg> </button> </form> <script> var form = document.getElementById('search-form'); form.onsubmit = function(e) { e.preventDefault(); var query = document.getElementById('search-input').value; window.location.href = '/search.html?query=' + query; return False } </script> </div> <div class='k-main-inner' id='k-main-id'> <div class='k-location-slug'> <span class="k-location-slug-pointer">►</span> <a href='/examples/'>Code examples</a> / <a 
href='/examples/vision/'>Computer Vision</a> / Masked image modeling with Autoencoders </div> <div class='k-content'> <h1 id="masked-image-modeling-with-autoencoders">Masked image modeling with Autoencoders</h1> <p><strong>Author:</strong> <a href="https://twitter.com/arig23498">Aritra Roy Gosthipaty</a>, <a href="https://twitter.com/RisingSayak">Sayak Paul</a><br> <strong>Date created:</strong> 2021/12/20<br> <strong>Last modified:</strong> 2021/12/21<br> <strong>Description:</strong> Implementing Masked Autoencoders for self-supervised pretraining.</p> <div class='example_version_banner keras_2'>ⓘ This example uses Keras 2</div> <p><img class="k-inline-icon" src="https://colab.research.google.com/img/colab_favicon.ico"/> <a href="https://colab.research.google.com/github/keras-team/keras-io/blob/master/examples/vision/ipynb/masked_image_modeling.ipynb"><strong>View in Colab</strong></a> <span class="k-dot">•</span><img class="k-inline-icon" src="https://github.com/favicon.ico"/> <a href="https://github.com/keras-team/keras-io/blob/master/examples/vision/masked_image_modeling.py"><strong>GitHub source</strong></a></p> <hr /> <h2 id="introduction">Introduction</h2> <p>In deep learning, models with growing <strong>capacity</strong> and <strong>capability</strong> can easily overfit on large datasets such as ImageNet-1K. In the field of natural language processing, this appetite for data has been <strong>successfully addressed</strong> by self-supervised pretraining.</p> <p>In the paper <a href="https://arxiv.org/abs/2111.06377">Masked Autoencoders Are Scalable Vision Learners</a> by He et al., the authors propose a simple yet effective method to pretrain large vision models (here, <a href="https://arxiv.org/abs/2010.11929">ViT Huge</a>). Inspired by the pretraining algorithm of BERT (<a href="https://arxiv.org/abs/1810.04805">Devlin et al.</a>), they mask patches of an image and, through an autoencoder, predict the masked patches. 
In the spirit of "masked language modeling", this pretraining task could be referred to as "masked image modeling".</p> <p>In this example, we implement <a href="https://arxiv.org/abs/2111.06377">Masked Autoencoders Are Scalable Vision Learners</a> with the <a href="https://www.cs.toronto.edu/~kriz/cifar.html">CIFAR-10</a> dataset. After pretraining a scaled down version of ViT, we also implement the linear evaluation pipeline on CIFAR-10.</p> <p>This implementation covers (MAE refers to Masked Autoencoder):</p> <ul> <li>The masking algorithm</li> <li>MAE encoder</li> <li>MAE decoder</li> <li>Evaluation with linear probing</li> </ul> <p>As a reference, we reuse some of the code presented in <a href="https://keras.io/examples/vision/image_classification_with_vision_transformer/">this example</a>.</p> <hr /> <h2 id="imports">Imports</h2> <div class="codehilite"><pre><span></span><code><span class="kn">import</span><span class="w"> </span><span class="nn">os</span> <span class="n">os</span><span class="o">.</span><span class="n">environ</span><span class="p">[</span><span class="s2">&quot;KERAS_BACKEND&quot;</span><span class="p">]</span> <span class="o">=</span> <span class="s2">&quot;tensorflow&quot;</span> <span class="kn">import</span><span class="w"> </span><span class="nn">tensorflow</span><span class="w"> </span><span class="k">as</span><span class="w"> </span><span class="nn">tf</span> <span class="kn">import</span><span class="w"> </span><span class="nn">keras</span> <span class="kn">from</span><span class="w"> </span><span class="nn">keras</span><span class="w"> </span><span class="kn">import</span> <span class="n">layers</span> <span class="kn">import</span><span class="w"> </span><span class="nn">matplotlib.pyplot</span><span class="w"> </span><span class="k">as</span><span class="w"> </span><span class="nn">plt</span> <span class="kn">import</span><span class="w"> </span><span class="nn">numpy</span><span class="w"> </span><span class="k">as</span><span 
class="w"> </span><span class="nn">np</span> <span class="kn">import</span><span class="w"> </span><span class="nn">random</span> <span class="c1"># Setting seeds for reproducibility.</span> <span class="n">SEED</span> <span class="o">=</span> <span class="mi">42</span> <span class="n">keras</span><span class="o">.</span><span class="n">utils</span><span class="o">.</span><span class="n">set_random_seed</span><span class="p">(</span><span class="n">SEED</span><span class="p">)</span> </code></pre></div> <hr /> <h2 id="hyperparameters-for-pretraining">Hyperparameters for pretraining</h2> <p>Please feel free to change the hyperparameters and check your results. The best way to get an intuition about the architecture is to experiment with it. Our hyperparameters are heavily inspired by the design guidelines laid out by the authors in <a href="https://arxiv.org/abs/2111.06377">the original paper</a>.</p> <div class="codehilite"><pre><span></span><code><span class="c1"># DATA</span> <span class="n">BUFFER_SIZE</span> <span class="o">=</span> <span class="mi">1024</span> <span class="n">BATCH_SIZE</span> <span class="o">=</span> <span class="mi">256</span> <span class="n">AUTO</span> <span class="o">=</span> <span class="n">tf</span><span class="o">.</span><span class="n">data</span><span class="o">.</span><span class="n">AUTOTUNE</span> <span class="n">INPUT_SHAPE</span> <span class="o">=</span> <span class="p">(</span><span class="mi">32</span><span class="p">,</span> <span class="mi">32</span><span class="p">,</span> <span class="mi">3</span><span class="p">)</span> <span class="n">NUM_CLASSES</span> <span class="o">=</span> <span class="mi">10</span> <span class="c1"># OPTIMIZER</span> <span class="n">LEARNING_RATE</span> <span class="o">=</span> <span class="mf">5e-3</span> <span class="n">WEIGHT_DECAY</span> <span class="o">=</span> <span class="mf">1e-4</span> <span class="c1"># PRETRAINING</span> <span class="n">EPOCHS</span> <span class="o">=</span> <span 
class="mi">100</span> <span class="c1"># AUGMENTATION</span> <span class="n">IMAGE_SIZE</span> <span class="o">=</span> <span class="mi">48</span> <span class="c1"># We will resize input images to this size.</span> <span class="n">PATCH_SIZE</span> <span class="o">=</span> <span class="mi">6</span> <span class="c1"># Size of the patches to be extracted from the input images.</span> <span class="n">NUM_PATCHES</span> <span class="o">=</span> <span class="p">(</span><span class="n">IMAGE_SIZE</span> <span class="o">//</span> <span class="n">PATCH_SIZE</span><span class="p">)</span> <span class="o">**</span> <span class="mi">2</span> <span class="n">MASK_PROPORTION</span> <span class="o">=</span> <span class="mf">0.75</span> <span class="c1"># We have found 75% masking to give us the best results.</span> <span class="c1"># ENCODER and DECODER</span> <span class="n">LAYER_NORM_EPS</span> <span class="o">=</span> <span class="mf">1e-6</span> <span class="n">ENC_PROJECTION_DIM</span> <span class="o">=</span> <span class="mi">128</span> <span class="n">DEC_PROJECTION_DIM</span> <span class="o">=</span> <span class="mi">64</span> <span class="n">ENC_NUM_HEADS</span> <span class="o">=</span> <span class="mi">4</span> <span class="n">ENC_LAYERS</span> <span class="o">=</span> <span class="mi">6</span> <span class="n">DEC_NUM_HEADS</span> <span class="o">=</span> <span class="mi">4</span> <span class="n">DEC_LAYERS</span> <span class="o">=</span> <span class="p">(</span> <span class="mi">2</span> <span class="c1"># The decoder is lightweight but should be reasonably deep for reconstruction.</span> <span class="p">)</span> <span class="n">ENC_TRANSFORMER_UNITS</span> <span class="o">=</span> <span class="p">[</span> <span class="n">ENC_PROJECTION_DIM</span> <span class="o">*</span> <span class="mi">2</span><span class="p">,</span> <span class="n">ENC_PROJECTION_DIM</span><span class="p">,</span> <span class="p">]</span> <span class="c1"># Size of the transformer layers.</span> 
<span class="n">DEC_TRANSFORMER_UNITS</span> <span class="o">=</span> <span class="p">[</span> <span class="n">DEC_PROJECTION_DIM</span> <span class="o">*</span> <span class="mi">2</span><span class="p">,</span> <span class="n">DEC_PROJECTION_DIM</span><span class="p">,</span> <span class="p">]</span> </code></pre></div> <hr /> <h2 id="load-and-prepare-the-cifar10-dataset">Load and prepare the CIFAR-10 dataset</h2> <div class="codehilite"><pre><span></span><code><span class="p">(</span><span class="n">x_train</span><span class="p">,</span> <span class="n">y_train</span><span class="p">),</span> <span class="p">(</span><span class="n">x_test</span><span class="p">,</span> <span class="n">y_test</span><span class="p">)</span> <span class="o">=</span> <span class="n">keras</span><span class="o">.</span><span class="n">datasets</span><span class="o">.</span><span class="n">cifar10</span><span class="o">.</span><span class="n">load_data</span><span class="p">()</span> <span class="p">(</span><span class="n">x_train</span><span class="p">,</span> <span class="n">y_train</span><span class="p">),</span> <span class="p">(</span><span class="n">x_val</span><span class="p">,</span> <span class="n">y_val</span><span class="p">)</span> <span class="o">=</span> <span class="p">(</span> <span class="p">(</span><span class="n">x_train</span><span class="p">[:</span><span class="mi">40000</span><span class="p">],</span> <span class="n">y_train</span><span class="p">[:</span><span class="mi">40000</span><span class="p">]),</span> <span class="p">(</span><span class="n">x_train</span><span class="p">[</span><span class="mi">40000</span><span class="p">:],</span> <span class="n">y_train</span><span class="p">[</span><span class="mi">40000</span><span class="p">:]),</span> <span class="p">)</span> <span class="nb">print</span><span class="p">(</span><span class="sa">f</span><span class="s2">&quot;Training samples: </span><span class="si">{</span><span class="nb">len</span><span 
class="p">(</span><span class="n">x_train</span><span class="p">)</span><span class="si">}</span><span class="s2">&quot;</span><span class="p">)</span> <span class="nb">print</span><span class="p">(</span><span class="sa">f</span><span class="s2">&quot;Validation samples: </span><span class="si">{</span><span class="nb">len</span><span class="p">(</span><span class="n">x_val</span><span class="p">)</span><span class="si">}</span><span class="s2">&quot;</span><span class="p">)</span> <span class="nb">print</span><span class="p">(</span><span class="sa">f</span><span class="s2">&quot;Testing samples: </span><span class="si">{</span><span class="nb">len</span><span class="p">(</span><span class="n">x_test</span><span class="p">)</span><span class="si">}</span><span class="s2">&quot;</span><span class="p">)</span> <span class="n">train_ds</span> <span class="o">=</span> <span class="n">tf</span><span class="o">.</span><span class="n">data</span><span class="o">.</span><span class="n">Dataset</span><span class="o">.</span><span class="n">from_tensor_slices</span><span class="p">(</span><span class="n">x_train</span><span class="p">)</span> <span class="n">train_ds</span> <span class="o">=</span> <span class="n">train_ds</span><span class="o">.</span><span class="n">shuffle</span><span class="p">(</span><span class="n">BUFFER_SIZE</span><span class="p">)</span><span class="o">.</span><span class="n">batch</span><span class="p">(</span><span class="n">BATCH_SIZE</span><span class="p">)</span><span class="o">.</span><span class="n">prefetch</span><span class="p">(</span><span class="n">AUTO</span><span class="p">)</span> <span class="n">val_ds</span> <span class="o">=</span> <span class="n">tf</span><span class="o">.</span><span class="n">data</span><span class="o">.</span><span class="n">Dataset</span><span class="o">.</span><span class="n">from_tensor_slices</span><span class="p">(</span><span class="n">x_val</span><span class="p">)</span> <span class="n">val_ds</span> 
<span class="o">=</span> <span class="n">val_ds</span><span class="o">.</span><span class="n">batch</span><span class="p">(</span><span class="n">BATCH_SIZE</span><span class="p">)</span><span class="o">.</span><span class="n">prefetch</span><span class="p">(</span><span class="n">AUTO</span><span class="p">)</span> <span class="n">test_ds</span> <span class="o">=</span> <span class="n">tf</span><span class="o">.</span><span class="n">data</span><span class="o">.</span><span class="n">Dataset</span><span class="o">.</span><span class="n">from_tensor_slices</span><span class="p">(</span><span class="n">x_test</span><span class="p">)</span> <span class="n">test_ds</span> <span class="o">=</span> <span class="n">test_ds</span><span class="o">.</span><span class="n">batch</span><span class="p">(</span><span class="n">BATCH_SIZE</span><span class="p">)</span><span class="o">.</span><span class="n">prefetch</span><span class="p">(</span><span class="n">AUTO</span><span class="p">)</span> </code></pre></div> <div class="k-default-codeblock"> <div class="codehilite"><pre><span></span><code>Training samples: 40000 Validation samples: 10000 Testing samples: 10000 </code></pre></div> </div> <hr /> <h2 id="data-augmentation">Data augmentation</h2> <p>In previous self-supervised pretraining methodologies (such as <a href="https://arxiv.org/abs/2002.05709">SimCLR</a>), we have noticed that the data augmentation pipeline plays an important role. On the other hand, the authors of this paper point out that Masked Autoencoders <strong>do not</strong> rely on augmentations. 
They propose a simple augmentation pipeline of:</p> <ul> <li>Resizing</li> <li>Random cropping (fixed-size or random-size)</li> <li>Random horizontal flipping</li> </ul> <div class="codehilite"><pre><span></span><code><span class="k">def</span><span class="w"> </span><span class="nf">get_train_augmentation_model</span><span class="p">():</span> <span class="n">model</span> <span class="o">=</span> <span class="n">keras</span><span class="o">.</span><span class="n">Sequential</span><span class="p">(</span> <span class="p">[</span> <span class="n">layers</span><span class="o">.</span><span class="n">Rescaling</span><span class="p">(</span><span class="mi">1</span> <span class="o">/</span> <span class="mf">255.0</span><span class="p">),</span> <span class="n">layers</span><span class="o">.</span><span class="n">Resizing</span><span class="p">(</span><span class="n">INPUT_SHAPE</span><span class="p">[</span><span class="mi">0</span><span class="p">]</span> <span class="o">+</span> <span class="mi">20</span><span class="p">,</span> <span class="n">INPUT_SHAPE</span><span class="p">[</span><span class="mi">0</span><span class="p">]</span> <span class="o">+</span> <span class="mi">20</span><span class="p">),</span> <span class="n">layers</span><span class="o">.</span><span class="n">RandomCrop</span><span class="p">(</span><span class="n">IMAGE_SIZE</span><span class="p">,</span> <span class="n">IMAGE_SIZE</span><span class="p">),</span> <span class="n">layers</span><span class="o">.</span><span class="n">RandomFlip</span><span class="p">(</span><span class="s2">&quot;horizontal&quot;</span><span class="p">),</span> <span class="p">],</span> <span class="n">name</span><span class="o">=</span><span class="s2">&quot;train_data_augmentation&quot;</span><span class="p">,</span> <span class="p">)</span> <span class="k">return</span> <span class="n">model</span> <span class="k">def</span><span class="w"> </span><span class="nf">get_test_augmentation_model</span><span 
class="p">():</span> <span class="n">model</span> <span class="o">=</span> <span class="n">keras</span><span class="o">.</span><span class="n">Sequential</span><span class="p">(</span> <span class="p">[</span> <span class="n">layers</span><span class="o">.</span><span class="n">Rescaling</span><span class="p">(</span><span class="mi">1</span> <span class="o">/</span> <span class="mf">255.0</span><span class="p">),</span> <span class="n">layers</span><span class="o">.</span><span class="n">Resizing</span><span class="p">(</span><span class="n">IMAGE_SIZE</span><span class="p">,</span> <span class="n">IMAGE_SIZE</span><span class="p">),</span> <span class="p">],</span> <span class="n">name</span><span class="o">=</span><span class="s2">&quot;test_data_augmentation&quot;</span><span class="p">,</span> <span class="p">)</span> <span class="k">return</span> <span class="n">model</span> </code></pre></div> <hr /> <h2 id="a-layer-for-extracting-patches-from-images">A layer for extracting patches from images</h2> <p>This layer takes images as input and divides them into patches. 
The layer also includes two utility methods:</p> <ul> <li><code>show_patched_image</code> &ndash; Takes a batch of images and its corresponding patches and plots a random image alongside its patches.</li> <li><code>reconstruct_from_patch</code> &ndash; Takes a single instance of patches and stitches them together into the original image.</li> </ul> <div class="codehilite"><pre><span></span><code><span class="k">class</span><span class="w"> </span><span class="nc">Patches</span><span class="p">(</span><span class="n">layers</span><span class="o">.</span><span class="n">Layer</span><span class="p">):</span> <span class="k">def</span><span class="w"> </span><span class="fm">__init__</span><span class="p">(</span><span class="bp">self</span><span class="p">,</span> <span class="n">patch_size</span><span class="o">=</span><span class="n">PATCH_SIZE</span><span class="p">,</span> <span class="o">**</span><span class="n">kwargs</span><span class="p">):</span> <span class="nb">super</span><span class="p">()</span><span class="o">.</span><span class="fm">__init__</span><span class="p">(</span><span class="o">**</span><span class="n">kwargs</span><span class="p">)</span> <span class="bp">self</span><span class="o">.</span><span class="n">patch_size</span> <span class="o">=</span> <span class="n">patch_size</span> <span class="c1"># Assuming the image has three channels, each patch would be</span> <span class="c1"># of size (patch_size, patch_size, 3).</span> <span class="bp">self</span><span class="o">.</span><span class="n">resize</span> <span class="o">=</span> <span class="n">layers</span><span class="o">.</span><span class="n">Reshape</span><span class="p">((</span><span class="o">-</span><span class="mi">1</span><span class="p">,</span> <span class="n">patch_size</span> <span class="o">*</span> <span class="n">patch_size</span> <span class="o">*</span> <span class="mi">3</span><span class="p">))</span> <span class="k">def</span><span class="w"> </span><span
class="nf">call</span><span class="p">(</span><span class="bp">self</span><span class="p">,</span> <span class="n">images</span><span class="p">):</span> <span class="c1"># Create patches from the input images</span> <span class="n">patches</span> <span class="o">=</span> <span class="n">tf</span><span class="o">.</span><span class="n">image</span><span class="o">.</span><span class="n">extract_patches</span><span class="p">(</span> <span class="n">images</span><span class="o">=</span><span class="n">images</span><span class="p">,</span> <span class="n">sizes</span><span class="o">=</span><span class="p">[</span><span class="mi">1</span><span class="p">,</span> <span class="bp">self</span><span class="o">.</span><span class="n">patch_size</span><span class="p">,</span> <span class="bp">self</span><span class="o">.</span><span class="n">patch_size</span><span class="p">,</span> <span class="mi">1</span><span class="p">],</span> <span class="n">strides</span><span class="o">=</span><span class="p">[</span><span class="mi">1</span><span class="p">,</span> <span class="bp">self</span><span class="o">.</span><span class="n">patch_size</span><span class="p">,</span> <span class="bp">self</span><span class="o">.</span><span class="n">patch_size</span><span class="p">,</span> <span class="mi">1</span><span class="p">],</span> <span class="n">rates</span><span class="o">=</span><span class="p">[</span><span class="mi">1</span><span class="p">,</span> <span class="mi">1</span><span class="p">,</span> <span class="mi">1</span><span class="p">,</span> <span class="mi">1</span><span class="p">],</span> <span class="n">padding</span><span class="o">=</span><span class="s2">&quot;VALID&quot;</span><span class="p">,</span> <span class="p">)</span> <span class="c1"># Reshape the patches to (batch, num_patches, patch_area) and return it.</span> <span class="n">patches</span> <span class="o">=</span> <span class="bp">self</span><span class="o">.</span><span 
class="n">resize</span><span class="p">(</span><span class="n">patches</span><span class="p">)</span> <span class="k">return</span> <span class="n">patches</span> <span class="k">def</span><span class="w"> </span><span class="nf">show_patched_image</span><span class="p">(</span><span class="bp">self</span><span class="p">,</span> <span class="n">images</span><span class="p">,</span> <span class="n">patches</span><span class="p">):</span> <span class="c1"># This is a utility function which accepts a batch of images and its</span> <span class="c1"># corresponding patches and helps visualize one image and its patches</span> <span class="c1"># side by side.</span> <span class="n">idx</span> <span class="o">=</span> <span class="n">np</span><span class="o">.</span><span class="n">random</span><span class="o">.</span><span class="n">choice</span><span class="p">(</span><span class="n">patches</span><span class="o">.</span><span class="n">shape</span><span class="p">[</span><span class="mi">0</span><span class="p">])</span> <span class="nb">print</span><span class="p">(</span><span class="sa">f</span><span class="s2">&quot;Index selected: </span><span class="si">{</span><span class="n">idx</span><span class="si">}</span><span class="s2">.&quot;</span><span class="p">)</span> <span class="n">plt</span><span class="o">.</span><span class="n">figure</span><span class="p">(</span><span class="n">figsize</span><span class="o">=</span><span class="p">(</span><span class="mi">4</span><span class="p">,</span> <span class="mi">4</span><span class="p">))</span> <span class="n">plt</span><span class="o">.</span><span class="n">imshow</span><span class="p">(</span><span class="n">keras</span><span class="o">.</span><span class="n">utils</span><span class="o">.</span><span class="n">array_to_img</span><span class="p">(</span><span class="n">images</span><span class="p">[</span><span class="n">idx</span><span class="p">]))</span> <span class="n">plt</span><span class="o">.</span><span
class="n">axis</span><span class="p">(</span><span class="s2">&quot;off&quot;</span><span class="p">)</span> <span class="n">plt</span><span class="o">.</span><span class="n">show</span><span class="p">()</span> <span class="n">n</span> <span class="o">=</span> <span class="nb">int</span><span class="p">(</span><span class="n">np</span><span class="o">.</span><span class="n">sqrt</span><span class="p">(</span><span class="n">patches</span><span class="o">.</span><span class="n">shape</span><span class="p">[</span><span class="mi">1</span><span class="p">]))</span> <span class="n">plt</span><span class="o">.</span><span class="n">figure</span><span class="p">(</span><span class="n">figsize</span><span class="o">=</span><span class="p">(</span><span class="mi">4</span><span class="p">,</span> <span class="mi">4</span><span class="p">))</span> <span class="k">for</span> <span class="n">i</span><span class="p">,</span> <span class="n">patch</span> <span class="ow">in</span> <span class="nb">enumerate</span><span class="p">(</span><span class="n">patches</span><span class="p">[</span><span class="n">idx</span><span class="p">]):</span> <span class="n">ax</span> <span class="o">=</span> <span class="n">plt</span><span class="o">.</span><span class="n">subplot</span><span class="p">(</span><span class="n">n</span><span class="p">,</span> <span class="n">n</span><span class="p">,</span> <span class="n">i</span> <span class="o">+</span> <span class="mi">1</span><span class="p">)</span> <span class="n">patch_img</span> <span class="o">=</span> <span class="n">tf</span><span class="o">.</span><span class="n">reshape</span><span class="p">(</span><span class="n">patch</span><span class="p">,</span> <span class="p">(</span><span class="bp">self</span><span class="o">.</span><span class="n">patch_size</span><span class="p">,</span> <span class="bp">self</span><span class="o">.</span><span class="n">patch_size</span><span class="p">,</span> <span class="mi">3</span><span 
class="p">))</span> <span class="n">plt</span><span class="o">.</span><span class="n">imshow</span><span class="p">(</span><span class="n">keras</span><span class="o">.</span><span class="n">utils</span><span class="o">.</span><span class="n">img_to_array</span><span class="p">(</span><span class="n">patch_img</span><span class="p">))</span> <span class="n">plt</span><span class="o">.</span><span class="n">axis</span><span class="p">(</span><span class="s2">&quot;off&quot;</span><span class="p">)</span> <span class="n">plt</span><span class="o">.</span><span class="n">show</span><span class="p">()</span> <span class="c1"># Return the index chosen to validate it outside the method.</span> <span class="k">return</span> <span class="n">idx</span> <span class="c1"># taken from https://stackoverflow.com/a/58082878/10319735</span> <span class="k">def</span><span class="w"> </span><span class="nf">reconstruct_from_patch</span><span class="p">(</span><span class="bp">self</span><span class="p">,</span> <span class="n">patch</span><span class="p">):</span> <span class="c1"># This utility function takes patches from a *single* image and</span> <span class="c1"># reconstructs it back into the image. 
This is useful for the train</span> <span class="c1"># monitor callback.</span> <span class="n">num_patches</span> <span class="o">=</span> <span class="n">patch</span><span class="o">.</span><span class="n">shape</span><span class="p">[</span><span class="mi">0</span><span class="p">]</span> <span class="n">n</span> <span class="o">=</span> <span class="nb">int</span><span class="p">(</span><span class="n">np</span><span class="o">.</span><span class="n">sqrt</span><span class="p">(</span><span class="n">num_patches</span><span class="p">))</span> <span class="n">patch</span> <span class="o">=</span> <span class="n">tf</span><span class="o">.</span><span class="n">reshape</span><span class="p">(</span><span class="n">patch</span><span class="p">,</span> <span class="p">(</span><span class="n">num_patches</span><span class="p">,</span> <span class="bp">self</span><span class="o">.</span><span class="n">patch_size</span><span class="p">,</span> <span class="bp">self</span><span class="o">.</span><span class="n">patch_size</span><span class="p">,</span> <span class="mi">3</span><span class="p">))</span> <span class="n">rows</span> <span class="o">=</span> <span class="n">tf</span><span class="o">.</span><span class="n">split</span><span class="p">(</span><span class="n">patch</span><span class="p">,</span> <span class="n">n</span><span class="p">,</span> <span class="n">axis</span><span class="o">=</span><span class="mi">0</span><span class="p">)</span> <span class="n">rows</span> <span class="o">=</span> <span class="p">[</span><span class="n">tf</span><span class="o">.</span><span class="n">concat</span><span class="p">(</span><span class="n">tf</span><span class="o">.</span><span class="n">unstack</span><span class="p">(</span><span class="n">x</span><span class="p">),</span> <span class="n">axis</span><span class="o">=</span><span class="mi">1</span><span class="p">)</span> <span class="k">for</span> <span class="n">x</span> <span class="ow">in</span> <span 
class="n">rows</span><span class="p">]</span> <span class="n">reconstructed</span> <span class="o">=</span> <span class="n">tf</span><span class="o">.</span><span class="n">concat</span><span class="p">(</span><span class="n">rows</span><span class="p">,</span> <span class="n">axis</span><span class="o">=</span><span class="mi">0</span><span class="p">)</span> <span class="k">return</span> <span class="n">reconstructed</span> </code></pre></div> <p>Let's visualize the image patches.</p> <div class="codehilite"><pre><span></span><code><span class="c1"># Get a batch of images.</span> <span class="n">image_batch</span> <span class="o">=</span> <span class="nb">next</span><span class="p">(</span><span class="nb">iter</span><span class="p">(</span><span class="n">train_ds</span><span class="p">))</span> <span class="c1"># Augment the images.</span> <span class="n">augmentation_model</span> <span class="o">=</span> <span class="n">get_train_augmentation_model</span><span class="p">()</span> <span class="n">augmented_images</span> <span class="o">=</span> <span class="n">augmentation_model</span><span class="p">(</span><span class="n">image_batch</span><span class="p">)</span> <span class="c1"># Define the patch layer.</span> <span class="n">patch_layer</span> <span class="o">=</span> <span class="n">Patches</span><span class="p">()</span> <span class="c1"># Get the patches from the batched images.</span> <span class="n">patches</span> <span class="o">=</span> <span class="n">patch_layer</span><span class="p">(</span><span class="n">images</span><span class="o">=</span><span class="n">augmented_images</span><span class="p">)</span> <span class="c1"># Now pass the images and the corresponding patches</span> <span class="c1"># to the `show_patched_image` method.</span> <span class="n">random_index</span> <span class="o">=</span> <span class="n">patch_layer</span><span class="o">.</span><span class="n">show_patched_image</span><span class="p">(</span><span 
class="n">images</span><span class="o">=</span><span class="n">augmented_images</span><span class="p">,</span> <span class="n">patches</span><span class="o">=</span><span class="n">patches</span><span class="p">)</span> <span class="c1"># Choose the same image and try reconstructing the patches</span> <span class="c1"># into the original image.</span> <span class="n">image</span> <span class="o">=</span> <span class="n">patch_layer</span><span class="o">.</span><span class="n">reconstruct_from_patch</span><span class="p">(</span><span class="n">patches</span><span class="p">[</span><span class="n">random_index</span><span class="p">])</span> <span class="n">plt</span><span class="o">.</span><span class="n">imshow</span><span class="p">(</span><span class="n">image</span><span class="p">)</span> <span class="n">plt</span><span class="o">.</span><span class="n">axis</span><span class="p">(</span><span class="s2">&quot;off&quot;</span><span class="p">)</span> <span class="n">plt</span><span class="o">.</span><span class="n">show</span><span class="p">()</span> </code></pre></div> <div class="k-default-codeblock"> <div class="codehilite"><pre><span></span><code>Index selected: 102. </code></pre></div> </div> <p><img alt="png" src="/img/examples/vision/masked_image_modeling/masked_image_modeling_13_1.png" /></p> <p><img alt="png" src="/img/examples/vision/masked_image_modeling/masked_image_modeling_13_2.png" /></p> <p><img alt="png" src="/img/examples/vision/masked_image_modeling/masked_image_modeling_13_3.png" /></p> <hr /> <h2 id="patch-encoding-with-masking">Patch encoding with masking</h2> <p>Quoting the paper:</p> <blockquote> <p>Following ViT, we divide an image into regular non-overlapping patches. Then we sample a subset of patches and mask (i.e., remove) the remaining ones. Our sampling strategy is straightforward: we sample random patches without replacement, following a uniform distribution. 
We simply refer to this as “random sampling”.</p> </blockquote> <p>This layer handles both the masking and the encoding of the patches.</p> <p>The utility methods of the layer are:</p> <ul> <li><code>get_random_indices</code> &ndash; Provides the mask and unmask indices.</li> <li><code>generate_masked_image</code> &ndash; Takes patches and unmask indices and produces a randomly masked image. This is an essential utility method for our training monitor callback (defined later).</li> </ul> <div class="codehilite"><pre><span></span><code><span class="k">class</span><span class="w"> </span><span class="nc">PatchEncoder</span><span class="p">(</span><span class="n">layers</span><span class="o">.</span><span class="n">Layer</span><span class="p">):</span> <span class="k">def</span><span class="w"> </span><span class="fm">__init__</span><span class="p">(</span> <span class="bp">self</span><span class="p">,</span> <span class="n">patch_size</span><span class="o">=</span><span class="n">PATCH_SIZE</span><span class="p">,</span> <span class="n">projection_dim</span><span class="o">=</span><span class="n">ENC_PROJECTION_DIM</span><span class="p">,</span> <span class="n">mask_proportion</span><span class="o">=</span><span class="n">MASK_PROPORTION</span><span class="p">,</span> <span class="n">downstream</span><span class="o">=</span><span class="kc">False</span><span class="p">,</span> <span class="o">**</span><span class="n">kwargs</span><span class="p">,</span> <span class="p">):</span> <span class="nb">super</span><span class="p">()</span><span class="o">.</span><span class="fm">__init__</span><span class="p">(</span><span class="o">**</span><span class="n">kwargs</span><span class="p">)</span> <span class="bp">self</span><span class="o">.</span><span class="n">patch_size</span> <span class="o">=</span> <span class="n">patch_size</span> <span class="bp">self</span><span class="o">.</span><span class="n">projection_dim</span> <span class="o">=</span> <span class="n">projection_dim</span> 
<span class="bp">self</span><span class="o">.</span><span class="n">mask_proportion</span> <span class="o">=</span> <span class="n">mask_proportion</span> <span class="bp">self</span><span class="o">.</span><span class="n">downstream</span> <span class="o">=</span> <span class="n">downstream</span> <span class="c1"># This is a trainable mask token initialized randomly from a normal</span> <span class="c1"># distribution.</span> <span class="bp">self</span><span class="o">.</span><span class="n">mask_token</span> <span class="o">=</span> <span class="n">tf</span><span class="o">.</span><span class="n">Variable</span><span class="p">(</span> <span class="n">tf</span><span class="o">.</span><span class="n">random</span><span class="o">.</span><span class="n">normal</span><span class="p">([</span><span class="mi">1</span><span class="p">,</span> <span class="n">patch_size</span> <span class="o">*</span> <span class="n">patch_size</span> <span class="o">*</span> <span class="mi">3</span><span class="p">]),</span> <span class="n">trainable</span><span class="o">=</span><span class="kc">True</span> <span class="p">)</span> <span class="k">def</span><span class="w"> </span><span class="nf">build</span><span class="p">(</span><span class="bp">self</span><span class="p">,</span> <span class="n">input_shape</span><span class="p">):</span> <span class="p">(</span><span class="n">_</span><span class="p">,</span> <span class="bp">self</span><span class="o">.</span><span class="n">num_patches</span><span class="p">,</span> <span class="bp">self</span><span class="o">.</span><span class="n">patch_area</span><span class="p">)</span> <span class="o">=</span> <span class="n">input_shape</span> <span class="c1"># Create the projection layer for the patches.</span> <span class="bp">self</span><span class="o">.</span><span class="n">projection</span> <span class="o">=</span> <span class="n">layers</span><span class="o">.</span><span class="n">Dense</span><span class="p">(</span><span 
class="n">units</span><span class="o">=</span><span class="bp">self</span><span class="o">.</span><span class="n">projection_dim</span><span class="p">)</span> <span class="c1"># Create the positional embedding layer.</span> <span class="bp">self</span><span class="o">.</span><span class="n">position_embedding</span> <span class="o">=</span> <span class="n">layers</span><span class="o">.</span><span class="n">Embedding</span><span class="p">(</span> <span class="n">input_dim</span><span class="o">=</span><span class="bp">self</span><span class="o">.</span><span class="n">num_patches</span><span class="p">,</span> <span class="n">output_dim</span><span class="o">=</span><span class="bp">self</span><span class="o">.</span><span class="n">projection_dim</span> <span class="p">)</span> <span class="c1"># Number of patches that will be masked.</span> <span class="bp">self</span><span class="o">.</span><span class="n">num_mask</span> <span class="o">=</span> <span class="nb">int</span><span class="p">(</span><span class="bp">self</span><span class="o">.</span><span class="n">mask_proportion</span> <span class="o">*</span> <span class="bp">self</span><span class="o">.</span><span class="n">num_patches</span><span class="p">)</span> <span class="k">def</span><span class="w"> </span><span class="nf">call</span><span class="p">(</span><span class="bp">self</span><span class="p">,</span> <span class="n">patches</span><span class="p">):</span> <span class="c1"># Get the positional embeddings.</span> <span class="n">batch_size</span> <span class="o">=</span> <span class="n">tf</span><span class="o">.</span><span class="n">shape</span><span class="p">(</span><span class="n">patches</span><span class="p">)[</span><span class="mi">0</span><span class="p">]</span> <span class="n">positions</span> <span class="o">=</span> <span class="n">tf</span><span class="o">.</span><span class="n">range</span><span class="p">(</span><span class="n">start</span><span class="o">=</span><span 
class="mi">0</span><span class="p">,</span> <span class="n">limit</span><span class="o">=</span><span class="bp">self</span><span class="o">.</span><span class="n">num_patches</span><span class="p">,</span> <span class="n">delta</span><span class="o">=</span><span class="mi">1</span><span class="p">)</span> <span class="n">pos_embeddings</span> <span class="o">=</span> <span class="bp">self</span><span class="o">.</span><span class="n">position_embedding</span><span class="p">(</span><span class="n">positions</span><span class="p">[</span><span class="n">tf</span><span class="o">.</span><span class="n">newaxis</span><span class="p">,</span> <span class="o">...</span><span class="p">])</span> <span class="n">pos_embeddings</span> <span class="o">=</span> <span class="n">tf</span><span class="o">.</span><span class="n">tile</span><span class="p">(</span> <span class="n">pos_embeddings</span><span class="p">,</span> <span class="p">[</span><span class="n">batch_size</span><span class="p">,</span> <span class="mi">1</span><span class="p">,</span> <span class="mi">1</span><span class="p">]</span> <span class="p">)</span> <span class="c1"># (B, num_patches, projection_dim)</span> <span class="c1"># Embed the patches.</span> <span class="n">patch_embeddings</span> <span class="o">=</span> <span class="p">(</span> <span class="bp">self</span><span class="o">.</span><span class="n">projection</span><span class="p">(</span><span class="n">patches</span><span class="p">)</span> <span class="o">+</span> <span class="n">pos_embeddings</span> <span class="p">)</span> <span class="c1"># (B, num_patches, projection_dim)</span> <span class="k">if</span> <span class="bp">self</span><span class="o">.</span><span class="n">downstream</span><span class="p">:</span> <span class="k">return</span> <span class="n">patch_embeddings</span> <span class="k">else</span><span class="p">:</span> <span class="n">mask_indices</span><span class="p">,</span> <span class="n">unmask_indices</span> 
<span class="o">=</span> <span class="bp">self</span><span class="o">.</span><span class="n">get_random_indices</span><span class="p">(</span><span class="n">batch_size</span><span class="p">)</span> <span class="c1"># The encoder input is the unmasked patch embeddings. Here we gather</span> <span class="c1"># all the patches that should be unmasked.</span> <span class="n">unmasked_embeddings</span> <span class="o">=</span> <span class="n">tf</span><span class="o">.</span><span class="n">gather</span><span class="p">(</span> <span class="n">patch_embeddings</span><span class="p">,</span> <span class="n">unmask_indices</span><span class="p">,</span> <span class="n">axis</span><span class="o">=</span><span class="mi">1</span><span class="p">,</span> <span class="n">batch_dims</span><span class="o">=</span><span class="mi">1</span> <span class="p">)</span> <span class="c1"># (B, unmask_numbers, projection_dim)</span> <span class="c1"># Get the unmasked and masked position embeddings. We will need them</span> <span class="c1"># for the decoder.</span> <span class="n">unmasked_positions</span> <span class="o">=</span> <span class="n">tf</span><span class="o">.</span><span class="n">gather</span><span class="p">(</span> <span class="n">pos_embeddings</span><span class="p">,</span> <span class="n">unmask_indices</span><span class="p">,</span> <span class="n">axis</span><span class="o">=</span><span class="mi">1</span><span class="p">,</span> <span class="n">batch_dims</span><span class="o">=</span><span class="mi">1</span> <span class="p">)</span> <span class="c1"># (B, unmask_numbers, projection_dim)</span> <span class="n">masked_positions</span> <span class="o">=</span> <span class="n">tf</span><span class="o">.</span><span class="n">gather</span><span class="p">(</span> <span class="n">pos_embeddings</span><span class="p">,</span> <span class="n">mask_indices</span><span class="p">,</span> <span class="n">axis</span><span class="o">=</span><span 
class="mi">1</span><span class="p">,</span> <span class="n">batch_dims</span><span class="o">=</span><span class="mi">1</span> <span class="p">)</span> <span class="c1"># (B, mask_numbers, projection_dim)</span> <span class="c1"># Repeat the mask token num_mask times.</span> <span class="c1"># Mask tokens replace the masked patches of the image.</span> <span class="n">mask_tokens</span> <span class="o">=</span> <span class="n">tf</span><span class="o">.</span><span class="n">repeat</span><span class="p">(</span><span class="bp">self</span><span class="o">.</span><span class="n">mask_token</span><span class="p">,</span> <span class="n">repeats</span><span class="o">=</span><span class="bp">self</span><span class="o">.</span><span class="n">num_mask</span><span class="p">,</span> <span class="n">axis</span><span class="o">=</span><span class="mi">0</span><span class="p">)</span> <span class="n">mask_tokens</span> <span class="o">=</span> <span class="n">tf</span><span class="o">.</span><span class="n">repeat</span><span class="p">(</span> <span class="n">mask_tokens</span><span class="p">[</span><span class="n">tf</span><span class="o">.</span><span class="n">newaxis</span><span class="p">,</span> <span class="o">...</span><span class="p">],</span> <span class="n">repeats</span><span class="o">=</span><span class="n">batch_size</span><span class="p">,</span> <span class="n">axis</span><span class="o">=</span><span class="mi">0</span> <span class="p">)</span> <span class="c1"># Get the masked embeddings for the tokens.</span> <span class="n">masked_embeddings</span> <span class="o">=</span> <span class="bp">self</span><span class="o">.</span><span class="n">projection</span><span class="p">(</span><span class="n">mask_tokens</span><span class="p">)</span> <span class="o">+</span> <span class="n">masked_positions</span> <span class="k">return</span> <span class="p">(</span> <span class="n">unmasked_embeddings</span><span class="p">,</span> <span class="c1"># Input to the 
encoder.</span> <span class="n">masked_embeddings</span><span class="p">,</span> <span class="c1"># First part of input to the decoder.</span> <span class="n">unmasked_positions</span><span class="p">,</span> <span class="c1"># Added to the encoder outputs.</span> <span class="n">mask_indices</span><span class="p">,</span> <span class="c1"># The indices that were masked.</span> <span class="n">unmask_indices</span><span class="p">,</span> <span class="c1"># The indices that were unmasked.</span> <span class="p">)</span> <span class="k">def</span><span class="w"> </span><span class="nf">get_random_indices</span><span class="p">(</span><span class="bp">self</span><span class="p">,</span> <span class="n">batch_size</span><span class="p">):</span> <span class="c1"># Create random indices from a uniform distribution and then split</span> <span class="c1"># them into mask and unmask indices.</span> <span class="n">rand_indices</span> <span class="o">=</span> <span class="n">tf</span><span class="o">.</span><span class="n">argsort</span><span class="p">(</span> <span class="n">tf</span><span class="o">.</span><span class="n">random</span><span class="o">.</span><span class="n">uniform</span><span class="p">(</span><span class="n">shape</span><span class="o">=</span><span class="p">(</span><span class="n">batch_size</span><span class="p">,</span> <span class="bp">self</span><span class="o">.</span><span class="n">num_patches</span><span class="p">)),</span> <span class="n">axis</span><span class="o">=-</span><span class="mi">1</span> <span class="p">)</span> <span class="n">mask_indices</span> <span class="o">=</span> <span class="n">rand_indices</span><span class="p">[:,</span> <span class="p">:</span> <span class="bp">self</span><span class="o">.</span><span class="n">num_mask</span><span class="p">]</span> <span class="n">unmask_indices</span> <span class="o">=</span> <span class="n">rand_indices</span><span class="p">[:,</span> <span class="bp">self</span><span
class="o">.</span><span class="n">num_mask</span> <span class="p">:]</span> <span class="k">return</span> <span class="n">mask_indices</span><span class="p">,</span> <span class="n">unmask_indices</span> <span class="k">def</span><span class="w"> </span><span class="nf">generate_masked_image</span><span class="p">(</span><span class="bp">self</span><span class="p">,</span> <span class="n">patches</span><span class="p">,</span> <span class="n">unmask_indices</span><span class="p">):</span> <span class="c1"># Choose a random patch and its corresponding unmask index.</span> <span class="n">idx</span> <span class="o">=</span> <span class="n">np</span><span class="o">.</span><span class="n">random</span><span class="o">.</span><span class="n">choice</span><span class="p">(</span><span class="n">patches</span><span class="o">.</span><span class="n">shape</span><span class="p">[</span><span class="mi">0</span><span class="p">])</span> <span class="n">patch</span> <span class="o">=</span> <span class="n">patches</span><span class="p">[</span><span class="n">idx</span><span class="p">]</span> <span class="n">unmask_index</span> <span class="o">=</span> <span class="n">unmask_indices</span><span class="p">[</span><span class="n">idx</span><span class="p">]</span> <span class="c1"># Build a numpy array of the same shape as patch.</span> <span class="n">new_patch</span> <span class="o">=</span> <span class="n">np</span><span class="o">.</span><span class="n">zeros_like</span><span class="p">(</span><span class="n">patch</span><span class="p">)</span> <span class="c1"># Iterate over new_patch and plug in the unmasked patches.</span> <span class="n">count</span> <span class="o">=</span> <span class="mi">0</span> <span class="k">for</span> <span class="n">i</span> <span class="ow">in</span> <span class="nb">range</span><span class="p">(</span><span class="n">unmask_index</span><span class="o">.</span><span class="n">shape</span><span class="p">[</span><span class="mi">0</span><span
class="p">]):</span> <span class="n">new_patch</span><span class="p">[</span><span class="n">unmask_index</span><span class="p">[</span><span class="n">i</span><span class="p">]]</span> <span class="o">=</span> <span class="n">patch</span><span class="p">[</span><span class="n">unmask_index</span><span class="p">[</span><span class="n">i</span><span class="p">]]</span> <span class="k">return</span> <span class="n">new_patch</span><span class="p">,</span> <span class="n">idx</span> </code></pre></div> <p>Let's see the masking process in action on a sample image.</p> <div class="codehilite"><pre><span></span><code><span class="c1"># Create the patch encoder layer.</span> <span class="n">patch_encoder</span> <span class="o">=</span> <span class="n">PatchEncoder</span><span class="p">()</span> <span class="c1"># Get the embeddings and positions.</span> <span class="p">(</span> <span class="n">unmasked_embeddings</span><span class="p">,</span> <span class="n">masked_embeddings</span><span class="p">,</span> <span class="n">unmasked_positions</span><span class="p">,</span> <span class="n">mask_indices</span><span class="p">,</span> <span class="n">unmask_indices</span><span class="p">,</span> <span class="p">)</span> <span class="o">=</span> <span class="n">patch_encoder</span><span class="p">(</span><span class="n">patches</span><span class="o">=</span><span class="n">patches</span><span class="p">)</span> <span class="c1"># Show a masked patch image.</span> <span class="n">new_patch</span><span class="p">,</span> <span class="n">random_index</span> <span class="o">=</span> <span class="n">patch_encoder</span><span class="o">.</span><span class="n">generate_masked_image</span><span class="p">(</span><span class="n">patches</span><span class="p">,</span> <span class="n">unmask_indices</span><span class="p">)</span> <span class="n">plt</span><span class="o">.</span><span class="n">figure</span><span class="p">(</span><span class="n">figsize</span><span
class="o">=</span><span class="p">(</span><span class="mi">10</span><span class="p">,</span> <span class="mi">10</span><span class="p">))</span> <span class="n">plt</span><span class="o">.</span><span class="n">subplot</span><span class="p">(</span><span class="mi">1</span><span class="p">,</span> <span class="mi">2</span><span class="p">,</span> <span class="mi">1</span><span class="p">)</span> <span class="n">img</span> <span class="o">=</span> <span class="n">patch_layer</span><span class="o">.</span><span class="n">reconstruct_from_patch</span><span class="p">(</span><span class="n">new_patch</span><span class="p">)</span> <span class="n">plt</span><span class="o">.</span><span class="n">imshow</span><span class="p">(</span><span class="n">keras</span><span class="o">.</span><span class="n">utils</span><span class="o">.</span><span class="n">array_to_img</span><span class="p">(</span><span class="n">img</span><span class="p">))</span> <span class="n">plt</span><span class="o">.</span><span class="n">axis</span><span class="p">(</span><span class="s2">&quot;off&quot;</span><span class="p">)</span> <span class="n">plt</span><span class="o">.</span><span class="n">title</span><span class="p">(</span><span class="s2">&quot;Masked&quot;</span><span class="p">)</span> <span class="n">plt</span><span class="o">.</span><span class="n">subplot</span><span class="p">(</span><span class="mi">1</span><span class="p">,</span> <span class="mi">2</span><span class="p">,</span> <span class="mi">2</span><span class="p">)</span> <span class="n">img</span> <span class="o">=</span> <span class="n">augmented_images</span><span class="p">[</span><span class="n">random_index</span><span class="p">]</span> <span class="n">plt</span><span class="o">.</span><span class="n">imshow</span><span class="p">(</span><span class="n">keras</span><span class="o">.</span><span class="n">utils</span><span class="o">.</span><span class="n">array_to_img</span><span class="p">(</span><span 
class="n">img</span><span class="p">))</span> <span class="n">plt</span><span class="o">.</span><span class="n">axis</span><span class="p">(</span><span class="s2">&quot;off&quot;</span><span class="p">)</span> <span class="n">plt</span><span class="o">.</span><span class="n">title</span><span class="p">(</span><span class="s2">&quot;Original&quot;</span><span class="p">)</span> <span class="n">plt</span><span class="o">.</span><span class="n">show</span><span class="p">()</span> </code></pre></div> <p><img alt="png" src="/img/examples/vision/masked_image_modeling/masked_image_modeling_17_0.png" /></p> <hr /> <h2 id="mlp">MLP</h2> <p>This serves as the fully connected feed forward network of the transformer architecture.</p> <div class="codehilite"><pre><span></span><code><span class="k">def</span><span class="w"> </span><span class="nf">mlp</span><span class="p">(</span><span class="n">x</span><span class="p">,</span> <span class="n">dropout_rate</span><span class="p">,</span> <span class="n">hidden_units</span><span class="p">):</span> <span class="k">for</span> <span class="n">units</span> <span class="ow">in</span> <span class="n">hidden_units</span><span class="p">:</span> <span class="n">x</span> <span class="o">=</span> <span class="n">layers</span><span class="o">.</span><span class="n">Dense</span><span class="p">(</span><span class="n">units</span><span class="p">,</span> <span class="n">activation</span><span class="o">=</span><span class="n">tf</span><span class="o">.</span><span class="n">nn</span><span class="o">.</span><span class="n">gelu</span><span class="p">)(</span><span class="n">x</span><span class="p">)</span> <span class="n">x</span> <span class="o">=</span> <span class="n">layers</span><span class="o">.</span><span class="n">Dropout</span><span class="p">(</span><span class="n">dropout_rate</span><span class="p">)(</span><span class="n">x</span><span class="p">)</span> <span class="k">return</span> <span class="n">x</span> 
</code></pre></div> <hr /> <h2 id="mae-encoder">MAE encoder</h2> <p>The MAE encoder is ViT. The only point to note here is that the encoder outputs a layer normalized output.</p> <div class="codehilite"><pre><span></span><code><span class="k">def</span><span class="w"> </span><span class="nf">create_encoder</span><span class="p">(</span><span class="n">num_heads</span><span class="o">=</span><span class="n">ENC_NUM_HEADS</span><span class="p">,</span> <span class="n">num_layers</span><span class="o">=</span><span class="n">ENC_LAYERS</span><span class="p">):</span> <span class="n">inputs</span> <span class="o">=</span> <span class="n">layers</span><span class="o">.</span><span class="n">Input</span><span class="p">((</span><span class="kc">None</span><span class="p">,</span> <span class="n">ENC_PROJECTION_DIM</span><span class="p">))</span> <span class="n">x</span> <span class="o">=</span> <span class="n">inputs</span> <span class="k">for</span> <span class="n">_</span> <span class="ow">in</span> <span class="nb">range</span><span class="p">(</span><span class="n">num_layers</span><span class="p">):</span> <span class="c1"># Layer normalization 1.</span> <span class="n">x1</span> <span class="o">=</span> <span class="n">layers</span><span class="o">.</span><span class="n">LayerNormalization</span><span class="p">(</span><span class="n">epsilon</span><span class="o">=</span><span class="n">LAYER_NORM_EPS</span><span class="p">)(</span><span class="n">x</span><span class="p">)</span> <span class="c1"># Create a multi-head attention layer.</span> <span class="n">attention_output</span> <span class="o">=</span> <span class="n">layers</span><span class="o">.</span><span class="n">MultiHeadAttention</span><span class="p">(</span> <span class="n">num_heads</span><span class="o">=</span><span class="n">num_heads</span><span class="p">,</span> <span class="n">key_dim</span><span class="o">=</span><span class="n">ENC_PROJECTION_DIM</span><span class="p">,</span> <span 
class="n">dropout</span><span class="o">=</span><span class="mf">0.1</span> <span class="p">)(</span><span class="n">x1</span><span class="p">,</span> <span class="n">x1</span><span class="p">)</span> <span class="c1"># Skip connection 1.</span> <span class="n">x2</span> <span class="o">=</span> <span class="n">layers</span><span class="o">.</span><span class="n">Add</span><span class="p">()([</span><span class="n">attention_output</span><span class="p">,</span> <span class="n">x</span><span class="p">])</span> <span class="c1"># Layer normalization 2.</span> <span class="n">x3</span> <span class="o">=</span> <span class="n">layers</span><span class="o">.</span><span class="n">LayerNormalization</span><span class="p">(</span><span class="n">epsilon</span><span class="o">=</span><span class="n">LAYER_NORM_EPS</span><span class="p">)(</span><span class="n">x2</span><span class="p">)</span> <span class="c1"># MLP.</span> <span class="n">x3</span> <span class="o">=</span> <span class="n">mlp</span><span class="p">(</span><span class="n">x3</span><span class="p">,</span> <span class="n">hidden_units</span><span class="o">=</span><span class="n">ENC_TRANSFORMER_UNITS</span><span class="p">,</span> <span class="n">dropout_rate</span><span class="o">=</span><span class="mf">0.1</span><span class="p">)</span> <span class="c1"># Skip connection 2.</span> <span class="n">x</span> <span class="o">=</span> <span class="n">layers</span><span class="o">.</span><span class="n">Add</span><span class="p">()([</span><span class="n">x3</span><span class="p">,</span> <span class="n">x2</span><span class="p">])</span> <span class="n">outputs</span> <span class="o">=</span> <span class="n">layers</span><span class="o">.</span><span class="n">LayerNormalization</span><span class="p">(</span><span class="n">epsilon</span><span class="o">=</span><span class="n">LAYER_NORM_EPS</span><span class="p">)(</span><span class="n">x</span><span class="p">)</span> <span class="k">return</span> <span 
class="n">keras</span><span class="o">.</span><span class="n">Model</span><span class="p">(</span><span class="n">inputs</span><span class="p">,</span> <span class="n">outputs</span><span class="p">,</span> <span class="n">name</span><span class="o">=</span><span class="s2">&quot;mae_encoder&quot;</span><span class="p">)</span> </code></pre></div> <hr /> <h2 id="mae-decoder">MAE decoder</h2> <p>The authors point out that they use an <strong>asymmetric</strong> autoencoder model. They use a lightweight decoder that takes "&lt;10% computation per token vs. the encoder". We are not specific with the "&lt;10% computation" in our implementation but have used a smaller decoder (both in terms of depth and projection dimensions).</p> <div class="codehilite"><pre><span></span><code><span class="k">def</span><span class="w"> </span><span class="nf">create_decoder</span><span class="p">(</span> <span class="n">num_layers</span><span class="o">=</span><span class="n">DEC_LAYERS</span><span class="p">,</span> <span class="n">num_heads</span><span class="o">=</span><span class="n">DEC_NUM_HEADS</span><span class="p">,</span> <span class="n">image_size</span><span class="o">=</span><span class="n">IMAGE_SIZE</span> <span class="p">):</span> <span class="n">inputs</span> <span class="o">=</span> <span class="n">layers</span><span class="o">.</span><span class="n">Input</span><span class="p">((</span><span class="n">NUM_PATCHES</span><span class="p">,</span> <span class="n">ENC_PROJECTION_DIM</span><span class="p">))</span> <span class="n">x</span> <span class="o">=</span> <span class="n">layers</span><span class="o">.</span><span class="n">Dense</span><span class="p">(</span><span class="n">DEC_PROJECTION_DIM</span><span class="p">)(</span><span class="n">inputs</span><span class="p">)</span> <span class="k">for</span> <span class="n">_</span> <span class="ow">in</span> <span class="nb">range</span><span class="p">(</span><span class="n">num_layers</span><span class="p">):</span> 
<span class="c1"># Layer normalization 1.</span> <span class="n">x1</span> <span class="o">=</span> <span class="n">layers</span><span class="o">.</span><span class="n">LayerNormalization</span><span class="p">(</span><span class="n">epsilon</span><span class="o">=</span><span class="n">LAYER_NORM_EPS</span><span class="p">)(</span><span class="n">x</span><span class="p">)</span> <span class="c1"># Create a multi-head attention layer.</span> <span class="n">attention_output</span> <span class="o">=</span> <span class="n">layers</span><span class="o">.</span><span class="n">MultiHeadAttention</span><span class="p">(</span> <span class="n">num_heads</span><span class="o">=</span><span class="n">num_heads</span><span class="p">,</span> <span class="n">key_dim</span><span class="o">=</span><span class="n">DEC_PROJECTION_DIM</span><span class="p">,</span> <span class="n">dropout</span><span class="o">=</span><span class="mf">0.1</span> <span class="p">)(</span><span class="n">x1</span><span class="p">,</span> <span class="n">x1</span><span class="p">)</span> <span class="c1"># Skip connection 1.</span> <span class="n">x2</span> <span class="o">=</span> <span class="n">layers</span><span class="o">.</span><span class="n">Add</span><span class="p">()([</span><span class="n">attention_output</span><span class="p">,</span> <span class="n">x</span><span class="p">])</span> <span class="c1"># Layer normalization 2.</span> <span class="n">x3</span> <span class="o">=</span> <span class="n">layers</span><span class="o">.</span><span class="n">LayerNormalization</span><span class="p">(</span><span class="n">epsilon</span><span class="o">=</span><span class="n">LAYER_NORM_EPS</span><span class="p">)(</span><span class="n">x2</span><span class="p">)</span> <span class="c1"># MLP.</span> <span class="n">x3</span> <span class="o">=</span> <span class="n">mlp</span><span class="p">(</span><span class="n">x3</span><span class="p">,</span> <span class="n">hidden_units</span><span 
class="o">=</span><span class="n">DEC_TRANSFORMER_UNITS</span><span class="p">,</span> <span class="n">dropout_rate</span><span class="o">=</span><span class="mf">0.1</span><span class="p">)</span> <span class="c1"># Skip connection 2.</span> <span class="n">x</span> <span class="o">=</span> <span class="n">layers</span><span class="o">.</span><span class="n">Add</span><span class="p">()([</span><span class="n">x3</span><span class="p">,</span> <span class="n">x2</span><span class="p">])</span> <span class="n">x</span> <span class="o">=</span> <span class="n">layers</span><span class="o">.</span><span class="n">LayerNormalization</span><span class="p">(</span><span class="n">epsilon</span><span class="o">=</span><span class="n">LAYER_NORM_EPS</span><span class="p">)(</span><span class="n">x</span><span class="p">)</span> <span class="n">x</span> <span class="o">=</span> <span class="n">layers</span><span class="o">.</span><span class="n">Flatten</span><span class="p">()(</span><span class="n">x</span><span class="p">)</span> <span class="n">pre_final</span> <span class="o">=</span> <span class="n">layers</span><span class="o">.</span><span class="n">Dense</span><span class="p">(</span><span class="n">units</span><span class="o">=</span><span class="n">image_size</span> <span class="o">*</span> <span class="n">image_size</span> <span class="o">*</span> <span class="mi">3</span><span class="p">,</span> <span class="n">activation</span><span class="o">=</span><span class="s2">&quot;sigmoid&quot;</span><span class="p">)(</span><span class="n">x</span><span class="p">)</span> <span class="n">outputs</span> <span class="o">=</span> <span class="n">layers</span><span class="o">.</span><span class="n">Reshape</span><span class="p">((</span><span class="n">image_size</span><span class="p">,</span> <span class="n">image_size</span><span class="p">,</span> <span class="mi">3</span><span class="p">))(</span><span class="n">pre_final</span><span class="p">)</span> <span 
class="k">return</span> <span class="n">keras</span><span class="o">.</span><span class="n">Model</span><span class="p">(</span><span class="n">inputs</span><span class="p">,</span> <span class="n">outputs</span><span class="p">,</span> <span class="n">name</span><span class="o">=</span><span class="s2">&quot;mae_decoder&quot;</span><span class="p">)</span> </code></pre></div> <hr /> <h2 id="mae-trainer">MAE trainer</h2> <p>This is the trainer module. We wrap the encoder and decoder inside of a <a href="https://www.tensorflow.org/api_docs/python/tf/keras/Model"><code>tf.keras.Model</code></a> subclass. This allows us to customize what happens in the <code>model.fit()</code> loop.</p> <div class="codehilite"><pre><span></span><code><span class="k">class</span><span class="w"> </span><span class="nc">MaskedAutoencoder</span><span class="p">(</span><span class="n">keras</span><span class="o">.</span><span class="n">Model</span><span class="p">):</span> <span class="k">def</span><span class="w"> </span><span class="fm">__init__</span><span class="p">(</span> <span class="bp">self</span><span class="p">,</span> <span class="n">train_augmentation_model</span><span class="p">,</span> <span class="n">test_augmentation_model</span><span class="p">,</span> <span class="n">patch_layer</span><span class="p">,</span> <span class="n">patch_encoder</span><span class="p">,</span> <span class="n">encoder</span><span class="p">,</span> <span class="n">decoder</span><span class="p">,</span> <span class="o">**</span><span class="n">kwargs</span><span class="p">,</span> <span class="p">):</span> <span class="nb">super</span><span class="p">()</span><span class="o">.</span><span class="fm">__init__</span><span class="p">(</span><span class="o">**</span><span class="n">kwargs</span><span class="p">)</span> <span class="bp">self</span><span class="o">.</span><span class="n">train_augmentation_model</span> <span class="o">=</span> <span class="n">train_augmentation_model</span> <span 
class="bp">self</span><span class="o">.</span><span class="n">test_augmentation_model</span> <span class="o">=</span> <span class="n">test_augmentation_model</span> <span class="bp">self</span><span class="o">.</span><span class="n">patch_layer</span> <span class="o">=</span> <span class="n">patch_layer</span> <span class="bp">self</span><span class="o">.</span><span class="n">patch_encoder</span> <span class="o">=</span> <span class="n">patch_encoder</span> <span class="bp">self</span><span class="o">.</span><span class="n">encoder</span> <span class="o">=</span> <span class="n">encoder</span> <span class="bp">self</span><span class="o">.</span><span class="n">decoder</span> <span class="o">=</span> <span class="n">decoder</span> <span class="k">def</span><span class="w"> </span><span class="nf">calculate_loss</span><span class="p">(</span><span class="bp">self</span><span class="p">,</span> <span class="n">images</span><span class="p">,</span> <span class="n">test</span><span class="o">=</span><span class="kc">False</span><span class="p">):</span> <span class="c1"># Augment the input images.</span> <span class="k">if</span> <span class="n">test</span><span class="p">:</span> <span class="n">augmented_images</span> <span class="o">=</span> <span class="bp">self</span><span class="o">.</span><span class="n">test_augmentation_model</span><span class="p">(</span><span class="n">images</span><span class="p">)</span> <span class="k">else</span><span class="p">:</span> <span class="n">augmented_images</span> <span class="o">=</span> <span class="bp">self</span><span class="o">.</span><span class="n">train_augmentation_model</span><span class="p">(</span><span class="n">images</span><span class="p">)</span> <span class="c1"># Patch the augmented images.</span> <span class="n">patches</span> <span class="o">=</span> <span class="bp">self</span><span class="o">.</span><span class="n">patch_layer</span><span class="p">(</span><span class="n">augmented_images</span><span 
class="p">)</span> <span class="c1"># Encode the patches.</span> <span class="p">(</span> <span class="n">unmasked_embeddings</span><span class="p">,</span> <span class="n">masked_embeddings</span><span class="p">,</span> <span class="n">unmasked_positions</span><span class="p">,</span> <span class="n">mask_indices</span><span class="p">,</span> <span class="n">unmask_indices</span><span class="p">,</span> <span class="p">)</span> <span class="o">=</span> <span class="bp">self</span><span class="o">.</span><span class="n">patch_encoder</span><span class="p">(</span><span class="n">patches</span><span class="p">)</span> <span class="c1"># Pass the unmaksed patche to the encoder.</span> <span class="n">encoder_outputs</span> <span class="o">=</span> <span class="bp">self</span><span class="o">.</span><span class="n">encoder</span><span class="p">(</span><span class="n">unmasked_embeddings</span><span class="p">)</span> <span class="c1"># Create the decoder inputs.</span> <span class="n">encoder_outputs</span> <span class="o">=</span> <span class="n">encoder_outputs</span> <span class="o">+</span> <span class="n">unmasked_positions</span> <span class="n">decoder_inputs</span> <span class="o">=</span> <span class="n">tf</span><span class="o">.</span><span class="n">concat</span><span class="p">([</span><span class="n">encoder_outputs</span><span class="p">,</span> <span class="n">masked_embeddings</span><span class="p">],</span> <span class="n">axis</span><span class="o">=</span><span class="mi">1</span><span class="p">)</span> <span class="c1"># Decode the inputs.</span> <span class="n">decoder_outputs</span> <span class="o">=</span> <span class="bp">self</span><span class="o">.</span><span class="n">decoder</span><span class="p">(</span><span class="n">decoder_inputs</span><span class="p">)</span> <span class="n">decoder_patches</span> <span class="o">=</span> <span class="bp">self</span><span class="o">.</span><span class="n">patch_layer</span><span 
class="p">(</span><span class="n">decoder_outputs</span><span class="p">)</span> <span class="n">loss_patch</span> <span class="o">=</span> <span class="n">tf</span><span class="o">.</span><span class="n">gather</span><span class="p">(</span><span class="n">patches</span><span class="p">,</span> <span class="n">mask_indices</span><span class="p">,</span> <span class="n">axis</span><span class="o">=</span><span class="mi">1</span><span class="p">,</span> <span class="n">batch_dims</span><span class="o">=</span><span class="mi">1</span><span class="p">)</span> <span class="n">loss_output</span> <span class="o">=</span> <span class="n">tf</span><span class="o">.</span><span class="n">gather</span><span class="p">(</span><span class="n">decoder_patches</span><span class="p">,</span> <span class="n">mask_indices</span><span class="p">,</span> <span class="n">axis</span><span class="o">=</span><span class="mi">1</span><span class="p">,</span> <span class="n">batch_dims</span><span class="o">=</span><span class="mi">1</span><span class="p">)</span> <span class="c1"># Compute the total loss.</span> <span class="n">total_loss</span> <span class="o">=</span> <span class="bp">self</span><span class="o">.</span><span class="n">compute_loss</span><span class="p">(</span><span class="n">y</span><span class="o">=</span><span class="n">loss_patch</span><span class="p">,</span> <span class="n">y_pred</span><span class="o">=</span><span class="n">loss_output</span><span class="p">)</span> <span class="k">return</span> <span class="n">total_loss</span><span class="p">,</span> <span class="n">loss_patch</span><span class="p">,</span> <span class="n">loss_output</span> <span class="k">def</span><span class="w"> </span><span class="nf">train_step</span><span class="p">(</span><span class="bp">self</span><span class="p">,</span> <span class="n">images</span><span class="p">):</span> <span class="k">with</span> <span class="n">tf</span><span class="o">.</span><span 
class="n">GradientTape</span><span class="p">()</span> <span class="k">as</span> <span class="n">tape</span><span class="p">:</span> <span class="n">total_loss</span><span class="p">,</span> <span class="n">loss_patch</span><span class="p">,</span> <span class="n">loss_output</span> <span class="o">=</span> <span class="bp">self</span><span class="o">.</span><span class="n">calculate_loss</span><span class="p">(</span><span class="n">images</span><span class="p">)</span> <span class="c1"># Apply gradients.</span> <span class="n">train_vars</span> <span class="o">=</span> <span class="p">[</span> <span class="bp">self</span><span class="o">.</span><span class="n">train_augmentation_model</span><span class="o">.</span><span class="n">trainable_variables</span><span class="p">,</span> <span class="bp">self</span><span class="o">.</span><span class="n">patch_layer</span><span class="o">.</span><span class="n">trainable_variables</span><span class="p">,</span> <span class="bp">self</span><span class="o">.</span><span class="n">patch_encoder</span><span class="o">.</span><span class="n">trainable_variables</span><span class="p">,</span> <span class="bp">self</span><span class="o">.</span><span class="n">encoder</span><span class="o">.</span><span class="n">trainable_variables</span><span class="p">,</span> <span class="bp">self</span><span class="o">.</span><span class="n">decoder</span><span class="o">.</span><span class="n">trainable_variables</span><span class="p">,</span> <span class="p">]</span> <span class="n">grads</span> <span class="o">=</span> <span class="n">tape</span><span class="o">.</span><span class="n">gradient</span><span class="p">(</span><span class="n">total_loss</span><span class="p">,</span> <span class="n">train_vars</span><span class="p">)</span> <span class="n">tv_list</span> <span class="o">=</span> <span class="p">[]</span> <span class="k">for</span> <span class="n">grad</span><span class="p">,</span> <span class="n">var</span> <span 
class="ow">in</span> <span class="nb">zip</span><span class="p">(</span><span class="n">grads</span><span class="p">,</span> <span class="n">train_vars</span><span class="p">):</span> <span class="k">for</span> <span class="n">g</span><span class="p">,</span> <span class="n">v</span> <span class="ow">in</span> <span class="nb">zip</span><span class="p">(</span><span class="n">grad</span><span class="p">,</span> <span class="n">var</span><span class="p">):</span> <span class="n">tv_list</span><span class="o">.</span><span class="n">append</span><span class="p">((</span><span class="n">g</span><span class="p">,</span> <span class="n">v</span><span class="p">))</span> <span class="bp">self</span><span class="o">.</span><span class="n">optimizer</span><span class="o">.</span><span class="n">apply_gradients</span><span class="p">(</span><span class="n">tv_list</span><span class="p">)</span> <span class="c1"># Report progress.</span> <span class="n">results</span> <span class="o">=</span> <span class="p">{}</span> <span class="k">for</span> <span class="n">metric</span> <span class="ow">in</span> <span class="bp">self</span><span class="o">.</span><span class="n">metrics</span><span class="p">:</span> <span class="n">metric</span><span class="o">.</span><span class="n">update_state</span><span class="p">(</span><span class="n">loss_patch</span><span class="p">,</span> <span class="n">loss_output</span><span class="p">)</span> <span class="n">results</span><span class="p">[</span><span class="n">metric</span><span class="o">.</span><span class="n">name</span><span class="p">]</span> <span class="o">=</span> <span class="n">metric</span><span class="o">.</span><span class="n">result</span><span class="p">()</span> <span class="k">return</span> <span class="n">results</span> <span class="k">def</span><span class="w"> </span><span class="nf">test_step</span><span class="p">(</span><span class="bp">self</span><span class="p">,</span> <span class="n">images</span><span 
class="p">):</span> <span class="n">total_loss</span><span class="p">,</span> <span class="n">loss_patch</span><span class="p">,</span> <span class="n">loss_output</span> <span class="o">=</span> <span class="bp">self</span><span class="o">.</span><span class="n">calculate_loss</span><span class="p">(</span><span class="n">images</span><span class="p">,</span> <span class="n">test</span><span class="o">=</span><span class="kc">True</span><span class="p">)</span> <span class="c1"># Update the trackers.</span> <span class="n">results</span> <span class="o">=</span> <span class="p">{}</span> <span class="k">for</span> <span class="n">metric</span> <span class="ow">in</span> <span class="bp">self</span><span class="o">.</span><span class="n">metrics</span><span class="p">:</span> <span class="n">metric</span><span class="o">.</span><span class="n">update_state</span><span class="p">(</span><span class="n">loss_patch</span><span class="p">,</span> <span class="n">loss_output</span><span class="p">)</span> <span class="n">results</span><span class="p">[</span><span class="n">metric</span><span class="o">.</span><span class="n">name</span><span class="p">]</span> <span class="o">=</span> <span class="n">metric</span><span class="o">.</span><span class="n">result</span><span class="p">()</span> <span class="k">return</span> <span class="n">results</span> </code></pre></div> <hr /> <h2 id="model-initialization">Model initialization</h2> <div class="codehilite"><pre><span></span><code><span class="n">train_augmentation_model</span> <span class="o">=</span> <span class="n">get_train_augmentation_model</span><span class="p">()</span> <span class="n">test_augmentation_model</span> <span class="o">=</span> <span class="n">get_test_augmentation_model</span><span class="p">()</span> <span class="n">patch_layer</span> <span class="o">=</span> <span class="n">Patches</span><span class="p">()</span> <span class="n">patch_encoder</span> <span class="o">=</span> <span 
class="n">PatchEncoder</span><span class="p">()</span> <span class="n">encoder</span> <span class="o">=</span> <span class="n">create_encoder</span><span class="p">()</span> <span class="n">decoder</span> <span class="o">=</span> <span class="n">create_decoder</span><span class="p">()</span> <span class="n">mae_model</span> <span class="o">=</span> <span class="n">MaskedAutoencoder</span><span class="p">(</span> <span class="n">train_augmentation_model</span><span class="o">=</span><span class="n">train_augmentation_model</span><span class="p">,</span> <span class="n">test_augmentation_model</span><span class="o">=</span><span class="n">test_augmentation_model</span><span class="p">,</span> <span class="n">patch_layer</span><span class="o">=</span><span class="n">patch_layer</span><span class="p">,</span> <span class="n">patch_encoder</span><span class="o">=</span><span class="n">patch_encoder</span><span class="p">,</span> <span class="n">encoder</span><span class="o">=</span><span class="n">encoder</span><span class="p">,</span> <span class="n">decoder</span><span class="o">=</span><span class="n">decoder</span><span class="p">,</span> <span class="p">)</span> </code></pre></div> <hr /> <h2 id="training-callbacks">Training callbacks</h2> <h3 id="visualization-callback">Visualization callback</h3> <div class="codehilite"><pre><span></span><code><span class="c1"># Taking a batch of test inputs to measure model&#39;s progress.</span> <span class="n">test_images</span> <span class="o">=</span> <span class="nb">next</span><span class="p">(</span><span class="nb">iter</span><span class="p">(</span><span class="n">test_ds</span><span class="p">))</span> <span class="k">class</span><span class="w"> </span><span class="nc">TrainMonitor</span><span class="p">(</span><span class="n">keras</span><span class="o">.</span><span class="n">callbacks</span><span class="o">.</span><span class="n">Callback</span><span class="p">):</span> <span class="k">def</span><span class="w"> 
</span><span class="fm">__init__</span><span class="p">(</span><span class="bp">self</span><span class="p">,</span> <span class="n">epoch_interval</span><span class="o">=</span><span class="kc">None</span><span class="p">):</span> <span class="bp">self</span><span class="o">.</span><span class="n">epoch_interval</span> <span class="o">=</span> <span class="n">epoch_interval</span> <span class="k">def</span><span class="w"> </span><span class="nf">on_epoch_end</span><span class="p">(</span><span class="bp">self</span><span class="p">,</span> <span class="n">epoch</span><span class="p">,</span> <span class="n">logs</span><span class="o">=</span><span class="kc">None</span><span class="p">):</span> <span class="k">if</span> <span class="bp">self</span><span class="o">.</span><span class="n">epoch_interval</span> <span class="ow">and</span> <span class="n">epoch</span> <span class="o">%</span> <span class="bp">self</span><span class="o">.</span><span class="n">epoch_interval</span> <span class="o">==</span> <span class="mi">0</span><span class="p">:</span> <span class="n">test_augmented_images</span> <span class="o">=</span> <span class="bp">self</span><span class="o">.</span><span class="n">model</span><span class="o">.</span><span class="n">test_augmentation_model</span><span class="p">(</span><span class="n">test_images</span><span class="p">)</span> <span class="n">test_patches</span> <span class="o">=</span> <span class="bp">self</span><span class="o">.</span><span class="n">model</span><span class="o">.</span><span class="n">patch_layer</span><span class="p">(</span><span class="n">test_augmented_images</span><span class="p">)</span> <span class="p">(</span> <span class="n">test_unmasked_embeddings</span><span class="p">,</span> <span class="n">test_masked_embeddings</span><span class="p">,</span> <span class="n">test_unmasked_positions</span><span class="p">,</span> <span class="n">test_mask_indices</span><span class="p">,</span> <span 
class="n">test_unmask_indices</span><span class="p">,</span> <span class="p">)</span> <span class="o">=</span> <span class="bp">self</span><span class="o">.</span><span class="n">model</span><span class="o">.</span><span class="n">patch_encoder</span><span class="p">(</span><span class="n">test_patches</span><span class="p">)</span> <span class="n">test_encoder_outputs</span> <span class="o">=</span> <span class="bp">self</span><span class="o">.</span><span class="n">model</span><span class="o">.</span><span class="n">encoder</span><span class="p">(</span><span class="n">test_unmasked_embeddings</span><span class="p">)</span> <span class="n">test_encoder_outputs</span> <span class="o">=</span> <span class="n">test_encoder_outputs</span> <span class="o">+</span> <span class="n">test_unmasked_positions</span> <span class="n">test_decoder_inputs</span> <span class="o">=</span> <span class="n">tf</span><span class="o">.</span><span class="n">concat</span><span class="p">(</span> <span class="p">[</span><span class="n">test_encoder_outputs</span><span class="p">,</span> <span class="n">test_masked_embeddings</span><span class="p">],</span> <span class="n">axis</span><span class="o">=</span><span class="mi">1</span> <span class="p">)</span> <span class="n">test_decoder_outputs</span> <span class="o">=</span> <span class="bp">self</span><span class="o">.</span><span class="n">model</span><span class="o">.</span><span class="n">decoder</span><span class="p">(</span><span class="n">test_decoder_inputs</span><span class="p">)</span> <span class="c1"># Show a maksed patch image.</span> <span class="n">test_masked_patch</span><span class="p">,</span> <span class="n">idx</span> <span class="o">=</span> <span class="bp">self</span><span class="o">.</span><span class="n">model</span><span class="o">.</span><span class="n">patch_encoder</span><span class="o">.</span><span class="n">generate_masked_image</span><span class="p">(</span> <span class="n">test_patches</span><span 
class="p">,</span> <span class="n">test_unmask_indices</span> <span class="p">)</span> <span class="nb">print</span><span class="p">(</span><span class="sa">f</span><span class="s2">&quot;</span><span class="se">\n</span><span class="s2">Idx chosen: </span><span class="si">{</span><span class="n">idx</span><span class="si">}</span><span class="s2">&quot;</span><span class="p">)</span> <span class="n">original_image</span> <span class="o">=</span> <span class="n">test_augmented_images</span><span class="p">[</span><span class="n">idx</span><span class="p">]</span> <span class="n">masked_image</span> <span class="o">=</span> <span class="bp">self</span><span class="o">.</span><span class="n">model</span><span class="o">.</span><span class="n">patch_layer</span><span class="o">.</span><span class="n">reconstruct_from_patch</span><span class="p">(</span> <span class="n">test_masked_patch</span> <span class="p">)</span> <span class="n">reconstructed_image</span> <span class="o">=</span> <span class="n">test_decoder_outputs</span><span class="p">[</span><span class="n">idx</span><span class="p">]</span> <span class="n">fig</span><span class="p">,</span> <span class="n">ax</span> <span class="o">=</span> <span class="n">plt</span><span class="o">.</span><span class="n">subplots</span><span class="p">(</span><span class="n">nrows</span><span class="o">=</span><span class="mi">1</span><span class="p">,</span> <span class="n">ncols</span><span class="o">=</span><span class="mi">3</span><span class="p">,</span> <span class="n">figsize</span><span class="o">=</span><span class="p">(</span><span class="mi">15</span><span class="p">,</span> <span class="mi">5</span><span class="p">))</span> <span class="n">ax</span><span class="p">[</span><span class="mi">0</span><span class="p">]</span><span class="o">.</span><span class="n">imshow</span><span class="p">(</span><span class="n">original_image</span><span class="p">)</span> <span class="n">ax</span><span class="p">[</span><span 
class="mi">0</span><span class="p">]</span><span class="o">.</span><span class="n">set_title</span><span class="p">(</span><span class="sa">f</span><span class="s2">&quot;Original: </span><span class="si">{</span><span class="n">epoch</span><span class="si">:</span><span class="s2">03d</span><span class="si">}</span><span class="s2">&quot;</span><span class="p">)</span> <span class="n">ax</span><span class="p">[</span><span class="mi">1</span><span class="p">]</span><span class="o">.</span><span class="n">imshow</span><span class="p">(</span><span class="n">masked_image</span><span class="p">)</span> <span class="n">ax</span><span class="p">[</span><span class="mi">1</span><span class="p">]</span><span class="o">.</span><span class="n">set_title</span><span class="p">(</span><span class="sa">f</span><span class="s2">&quot;Masked: </span><span class="si">{</span><span class="n">epoch</span><span class="si">:</span><span class="s2">03d</span><span class="si">}</span><span class="s2">&quot;</span><span class="p">)</span> <span class="n">ax</span><span class="p">[</span><span class="mi">2</span><span class="p">]</span><span class="o">.</span><span class="n">imshow</span><span class="p">(</span><span class="n">reconstructed_image</span><span class="p">)</span> <span class="n">ax</span><span class="p">[</span><span class="mi">2</span><span class="p">]</span><span class="o">.</span><span class="n">set_title</span><span class="p">(</span><span class="sa">f</span><span class="s2">&quot;Reconstructed: </span><span class="si">{</span><span class="n">epoch</span><span class="si">:</span><span class="s2">03d</span><span class="si">}</span><span class="s2">&quot;</span><span class="p">)</span> <span class="n">plt</span><span class="o">.</span><span class="n">show</span><span class="p">()</span> <span class="n">plt</span><span class="o">.</span><span class="n">close</span><span class="p">()</span> </code></pre></div> <h3 id="learning-rate-scheduler">Learning rate scheduler</h3>
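<p>The schedule implemented next has two phases: the rate ramps linearly from <code>warmup_learning_rate</code> up to <code>learning_rate_base</code> over the warmup steps, then follows a half-cosine from that peak down to zero at <code>total_steps</code>. As a plain-NumPy sketch of the same formula (the function name <code>warmup_cosine</code> is ours, for illustration only):</p>

```python
import numpy as np

def warmup_cosine(step, base_lr, total_steps, warmup_lr, warmup_steps):
    """Piecewise schedule: linear warmup, then cosine decay to zero."""
    if step < warmup_steps:
        # Linear ramp from warmup_lr at step 0 up to base_lr at warmup_steps.
        slope = (base_lr - warmup_lr) / warmup_steps
        return slope * step + warmup_lr
    if step > total_steps:
        # Past the end of training, the rate is pinned to zero.
        return 0.0
    # Half-cosine decay from base_lr down to 0 over the remaining steps.
    progress = (step - warmup_steps) / (total_steps - warmup_steps)
    return 0.5 * base_lr * (1 + np.cos(np.pi * progress))
```

<p>The peak rate is reached exactly at the end of warmup; the Keras class below vectorizes the same arithmetic with TensorFlow ops so it can run inside the training loop.</p>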
<div class="codehilite"><pre><span></span><code><span class="c1"># Some code is taken from:</span> <span class="c1"># https://www.kaggle.com/ashusma/training-rfcx-tensorflow-tpu-effnet-b2.</span> <span class="k">class</span><span class="w"> </span><span class="nc">WarmUpCosine</span><span class="p">(</span><span class="n">keras</span><span class="o">.</span><span class="n">optimizers</span><span class="o">.</span><span class="n">schedules</span><span class="o">.</span><span class="n">LearningRateSchedule</span><span class="p">):</span> <span class="k">def</span><span class="w"> </span><span class="fm">__init__</span><span class="p">(</span> <span class="bp">self</span><span class="p">,</span> <span class="n">learning_rate_base</span><span class="p">,</span> <span class="n">total_steps</span><span class="p">,</span> <span class="n">warmup_learning_rate</span><span class="p">,</span> <span class="n">warmup_steps</span> <span class="p">):</span> <span class="nb">super</span><span class="p">()</span><span class="o">.</span><span class="fm">__init__</span><span class="p">()</span> <span class="bp">self</span><span class="o">.</span><span class="n">learning_rate_base</span> <span class="o">=</span> <span class="n">learning_rate_base</span> <span class="bp">self</span><span class="o">.</span><span class="n">total_steps</span> <span class="o">=</span> <span class="n">total_steps</span> <span class="bp">self</span><span class="o">.</span><span class="n">warmup_learning_rate</span> <span class="o">=</span> <span class="n">warmup_learning_rate</span> <span class="bp">self</span><span class="o">.</span><span class="n">warmup_steps</span> <span class="o">=</span> <span class="n">warmup_steps</span> <span class="bp">self</span><span class="o">.</span><span class="n">pi</span> <span class="o">=</span> <span class="n">tf</span><span class="o">.</span><span class="n">constant</span><span class="p">(</span><span class="n">np</span><span class="o">.</span><span 
class="n">pi</span><span class="p">)</span> <span class="k">def</span><span class="w"> </span><span class="fm">__call__</span><span class="p">(</span><span class="bp">self</span><span class="p">,</span> <span class="n">step</span><span class="p">):</span> <span class="k">if</span> <span class="bp">self</span><span class="o">.</span><span class="n">total_steps</span> <span class="o">&lt;</span> <span class="bp">self</span><span class="o">.</span><span class="n">warmup_steps</span><span class="p">:</span> <span class="k">raise</span> <span class="ne">ValueError</span><span class="p">(</span><span class="s2">&quot;total_steps must be larger than or equal to warmup_steps.&quot;</span><span class="p">)</span> <span class="n">cos_annealed_lr</span> <span class="o">=</span> <span class="n">tf</span><span class="o">.</span><span class="n">cos</span><span class="p">(</span> <span class="bp">self</span><span class="o">.</span><span class="n">pi</span> <span class="o">*</span> <span class="p">(</span><span class="n">tf</span><span class="o">.</span><span class="n">cast</span><span class="p">(</span><span class="n">step</span><span class="p">,</span> <span class="n">tf</span><span class="o">.</span><span class="n">float32</span><span class="p">)</span> <span class="o">-</span> <span class="bp">self</span><span class="o">.</span><span class="n">warmup_steps</span><span class="p">)</span> <span class="o">/</span> <span class="nb">float</span><span class="p">(</span><span class="bp">self</span><span class="o">.</span><span class="n">total_steps</span> <span class="o">-</span> <span class="bp">self</span><span class="o">.</span><span class="n">warmup_steps</span><span class="p">)</span> <span class="p">)</span> <span class="n">learning_rate</span> <span class="o">=</span> <span class="mf">0.5</span> <span class="o">*</span> <span class="bp">self</span><span class="o">.</span><span class="n">learning_rate_base</span> <span class="o">*</span> <span class="p">(</span><span
class="mi">1</span> <span class="o">+</span> <span class="n">cos_annealed_lr</span><span class="p">)</span> <span class="k">if</span> <span class="bp">self</span><span class="o">.</span><span class="n">warmup_steps</span> <span class="o">&gt;</span> <span class="mi">0</span><span class="p">:</span> <span class="k">if</span> <span class="bp">self</span><span class="o">.</span><span class="n">learning_rate_base</span> <span class="o">&lt;</span> <span class="bp">self</span><span class="o">.</span><span class="n">warmup_learning_rate</span><span class="p">:</span> <span class="k">raise</span> <span class="ne">ValueError</span><span class="p">(</span> <span class="s2">&quot;learning_rate_base must be larger than or equal to &quot;</span> <span class="s2">&quot;warmup_learning_rate.&quot;</span> <span class="p">)</span> <span class="n">slope</span> <span class="o">=</span> <span class="p">(</span> <span class="bp">self</span><span class="o">.</span><span class="n">learning_rate_base</span> <span class="o">-</span> <span class="bp">self</span><span class="o">.</span><span class="n">warmup_learning_rate</span> <span class="p">)</span> <span class="o">/</span> <span class="bp">self</span><span class="o">.</span><span class="n">warmup_steps</span> <span class="n">warmup_rate</span> <span class="o">=</span> <span class="n">slope</span> <span class="o">*</span> <span class="n">tf</span><span class="o">.</span><span class="n">cast</span><span class="p">(</span><span class="n">step</span><span class="p">,</span> <span class="n">tf</span><span class="o">.</span><span class="n">float32</span><span class="p">)</span> <span class="o">+</span> <span class="bp">self</span><span class="o">.</span><span class="n">warmup_learning_rate</span> <span class="n">learning_rate</span> <span class="o">=</span> <span class="n">tf</span><span class="o">.</span><span class="n">where</span><span class="p">(</span> <span class="n">step</span> <span class="o">&lt;</span> <span class="bp">self</span><span
class="o">.</span><span class="n">warmup_steps</span><span class="p">,</span> <span class="n">warmup_rate</span><span class="p">,</span> <span class="n">learning_rate</span> <span class="p">)</span> <span class="k">return</span> <span class="n">tf</span><span class="o">.</span><span class="n">where</span><span class="p">(</span> <span class="n">step</span> <span class="o">&gt;</span> <span class="bp">self</span><span class="o">.</span><span class="n">total_steps</span><span class="p">,</span> <span class="mf">0.0</span><span class="p">,</span> <span class="n">learning_rate</span><span class="p">,</span> <span class="n">name</span><span class="o">=</span><span class="s2">&quot;learning_rate&quot;</span> <span class="p">)</span> <span class="n">total_steps</span> <span class="o">=</span> <span class="nb">int</span><span class="p">((</span><span class="nb">len</span><span class="p">(</span><span class="n">x_train</span><span class="p">)</span> <span class="o">/</span> <span class="n">BATCH_SIZE</span><span class="p">)</span> <span class="o">*</span> <span class="n">EPOCHS</span><span class="p">)</span> <span class="n">warmup_epoch_percentage</span> <span class="o">=</span> <span class="mf">0.15</span> <span class="n">warmup_steps</span> <span class="o">=</span> <span class="nb">int</span><span class="p">(</span><span class="n">total_steps</span> <span class="o">*</span> <span class="n">warmup_epoch_percentage</span><span class="p">)</span> <span class="n">scheduled_lrs</span> <span class="o">=</span> <span class="n">WarmUpCosine</span><span class="p">(</span> <span class="n">learning_rate_base</span><span class="o">=</span><span class="n">LEARNING_RATE</span><span class="p">,</span> <span class="n">total_steps</span><span class="o">=</span><span class="n">total_steps</span><span class="p">,</span> <span class="n">warmup_learning_rate</span><span class="o">=</span><span class="mf">0.0</span><span class="p">,</span> <span class="n">warmup_steps</span><span 
class="o">=</span><span class="n">warmup_steps</span><span class="p">,</span> <span class="p">)</span> <span class="n">lrs</span> <span class="o">=</span> <span class="p">[</span><span class="n">scheduled_lrs</span><span class="p">(</span><span class="n">step</span><span class="p">)</span> <span class="k">for</span> <span class="n">step</span> <span class="ow">in</span> <span class="nb">range</span><span class="p">(</span><span class="n">total_steps</span><span class="p">)]</span> <span class="n">plt</span><span class="o">.</span><span class="n">plot</span><span class="p">(</span><span class="n">lrs</span><span class="p">)</span> <span class="n">plt</span><span class="o">.</span><span class="n">xlabel</span><span class="p">(</span><span class="s2">&quot;Step&quot;</span><span class="p">,</span> <span class="n">fontsize</span><span class="o">=</span><span class="mi">14</span><span class="p">)</span> <span class="n">plt</span><span class="o">.</span><span class="n">ylabel</span><span class="p">(</span><span class="s2">&quot;LR&quot;</span><span class="p">,</span> <span class="n">fontsize</span><span class="o">=</span><span class="mi">14</span><span class="p">)</span> <span class="n">plt</span><span class="o">.</span><span class="n">show</span><span class="p">()</span> <span class="c1"># Assemble the callbacks.</span> <span class="n">train_callbacks</span> <span class="o">=</span> <span class="p">[</span><span class="n">TrainMonitor</span><span class="p">(</span><span class="n">epoch_interval</span><span class="o">=</span><span class="mi">5</span><span class="p">)]</span> </code></pre></div> <p><img alt="png" src="/img/examples/vision/masked_image_modeling/masked_image_modeling_32_0.png" /></p> <hr /> <h2 id="model-compilation-and-training">Model compilation and training</h2> <div class="codehilite"><pre><span></span><code><span class="n">optimizer</span> <span class="o">=</span> <span class="n">keras</span><span class="o">.</span><span 
class="n">optimizers</span><span class="o">.</span><span class="n">AdamW</span><span class="p">(</span> <span class="n">learning_rate</span><span class="o">=</span><span class="n">scheduled_lrs</span><span class="p">,</span> <span class="n">weight_decay</span><span class="o">=</span><span class="n">WEIGHT_DECAY</span> <span class="p">)</span> <span class="c1"># Compile and pretrain the model.</span> <span class="n">mae_model</span><span class="o">.</span><span class="n">compile</span><span class="p">(</span> <span class="n">optimizer</span><span class="o">=</span><span class="n">optimizer</span><span class="p">,</span> <span class="n">loss</span><span class="o">=</span><span class="n">keras</span><span class="o">.</span><span class="n">losses</span><span class="o">.</span><span class="n">MeanSquaredError</span><span class="p">(),</span> <span class="n">metrics</span><span class="o">=</span><span class="p">[</span><span class="s2">&quot;mae&quot;</span><span class="p">]</span> <span class="p">)</span> <span class="n">history</span> <span class="o">=</span> <span class="n">mae_model</span><span class="o">.</span><span class="n">fit</span><span class="p">(</span> <span class="n">train_ds</span><span class="p">,</span> <span class="n">epochs</span><span class="o">=</span><span class="n">EPOCHS</span><span class="p">,</span> <span class="n">validation_data</span><span class="o">=</span><span class="n">val_ds</span><span class="p">,</span> <span class="n">callbacks</span><span class="o">=</span><span class="n">train_callbacks</span><span class="p">,</span> <span class="p">)</span> <span class="c1"># Measure its performance.</span> <span class="n">loss</span><span class="p">,</span> <span class="n">mae</span> <span class="o">=</span> <span class="n">mae_model</span><span class="o">.</span><span class="n">evaluate</span><span class="p">(</span><span class="n">test_ds</span><span class="p">)</span> <span class="nb">print</span><span class="p">(</span><span 
class="sa">f</span><span class="s2">&quot;Loss: </span><span class="si">{</span><span class="n">loss</span><span class="si">:</span><span class="s2">.2f</span><span class="si">}</span><span class="s2">&quot;</span><span class="p">)</span> <span class="nb">print</span><span class="p">(</span><span class="sa">f</span><span class="s2">&quot;MAE: </span><span class="si">{</span><span class="n">mae</span><span class="si">:</span><span class="s2">.2f</span><span class="si">}</span><span class="s2">&quot;</span><span class="p">)</span> </code></pre></div> <div class="k-default-codeblock"> <div class="codehilite"><pre><span></span><code>Epoch 1/100 157/157 ━━━━━━━━━━━━━━━━━━━━ 0s 80ms/step - mae: 0.2035 - loss: 0.4828 Idx chosen: 92 </code></pre></div> </div> <p><img alt="png" src="/img/examples/vision/masked_image_modeling/masked_image_modeling_34_1.png" /></p> <div class="k-default-codeblock"> <div class="codehilite"><pre><span></span><code> 157/157 ━━━━━━━━━━━━━━━━━━━━ 47s 95ms/step - mae: 0.2033 - loss: 0.4828 - val_loss: 0.5225 - val_mae: 0.1600 Epoch 2/100 157/157 ━━━━━━━━━━━━━━━━━━━━ 13s 83ms/step - mae: 0.1592 - loss: 0.5128 - val_loss: 0.5290 - val_mae: 0.1511 Epoch 3/100 157/157 ━━━━━━━━━━━━━━━━━━━━ 13s 82ms/step - mae: 0.1530 - loss: 0.5193 - val_loss: 0.5336 - val_mae: 0.1478 Epoch 4/100 157/157 ━━━━━━━━━━━━━━━━━━━━ 13s 82ms/step - mae: 0.1502 - loss: 0.5220 - val_loss: 0.5298 - val_mae: 0.1436 Epoch 5/100 157/157 ━━━━━━━━━━━━━━━━━━━━ 13s 82ms/step - mae: 0.1458 - loss: 0.5245 - val_loss: 0.5296 - val_mae: 0.1405 Epoch 6/100 157/157 ━━━━━━━━━━━━━━━━━━━━ 0s 81ms/step - mae: 0.1414 - loss: 0.5265 Idx chosen: 14 </code></pre></div> </div> <p><img alt="png" src="/img/examples/vision/masked_image_modeling/masked_image_modeling_34_3.png" /></p> <div class="k-default-codeblock"> <div class="codehilite"><pre><span></span><code> 157/157 ━━━━━━━━━━━━━━━━━━━━ 14s 88ms/step - mae: 0.1414 - loss: 0.5265 - val_loss: 0.5328 - val_mae: 0.1402 Epoch 7/100 157/157 
━━━━━━━━━━━━━━━━━━━━ 13s 82ms/step - mae: 0.1399 - loss: 0.5278 - val_loss: 0.5361 - val_mae: 0.1360 Epoch 8/100 157/157 ━━━━━━━━━━━━━━━━━━━━ 13s 82ms/step - mae: 0.1389 - loss: 0.5285 - val_loss: 0.5365 - val_mae: 0.1424 Epoch 9/100 157/157 ━━━━━━━━━━━━━━━━━━━━ 13s 82ms/step - mae: 0.1379 - loss: 0.5295 - val_loss: 0.5312 - val_mae: 0.1345 Epoch 10/100 157/157 ━━━━━━━━━━━━━━━━━━━━ 13s 82ms/step - mae: 0.1352 - loss: 0.5308 - val_loss: 0.5374 - val_mae: 0.1321 Epoch 11/100 157/157 ━━━━━━━━━━━━━━━━━━━━ 0s 81ms/step - mae: 0.1339 - loss: 0.5317 Idx chosen: 106 </code></pre></div> </div> <p><img alt="png" src="/img/examples/vision/masked_image_modeling/masked_image_modeling_34_5.png" /></p> <div class="k-default-codeblock"> <div class="codehilite"><pre><span></span><code> 157/157 ━━━━━━━━━━━━━━━━━━━━ 14s 87ms/step - mae: 0.1339 - loss: 0.5317 - val_loss: 0.5392 - val_mae: 0.1330 Epoch 12/100 157/157 ━━━━━━━━━━━━━━━━━━━━ 13s 82ms/step - mae: 0.1321 - loss: 0.5331 - val_loss: 0.5383 - val_mae: 0.1301 Epoch 13/100 157/157 ━━━━━━━━━━━━━━━━━━━━ 13s 82ms/step - mae: 0.1317 - loss: 0.5343 - val_loss: 0.5405 - val_mae: 0.1322 Epoch 14/100 157/157 ━━━━━━━━━━━━━━━━━━━━ 13s 82ms/step - mae: 0.1326 - loss: 0.5338 - val_loss: 0.5404 - val_mae: 0.1280 Epoch 15/100 157/157 ━━━━━━━━━━━━━━━━━━━━ 13s 84ms/step - mae: 0.1297 - loss: 0.5343 - val_loss: 0.5444 - val_mae: 0.1261 Epoch 16/100 157/157 ━━━━━━━━━━━━━━━━━━━━ 0s 82ms/step - mae: 0.1276 - loss: 0.5361 Idx chosen: 71 </code></pre></div> </div> <p><img alt="png" src="/img/examples/vision/masked_image_modeling/masked_image_modeling_34_7.png" /></p> <div class="k-default-codeblock"> <div class="codehilite"><pre><span></span><code> 157/157 ━━━━━━━━━━━━━━━━━━━━ 14s 91ms/step - mae: 0.1276 - loss: 0.5362 - val_loss: 0.5456 - val_mae: 0.1243 Epoch 17/100 157/157 ━━━━━━━━━━━━━━━━━━━━ 13s 83ms/step - mae: 0.1262 - loss: 0.5382 - val_loss: 0.5427 - val_mae: 0.1233 Epoch 18/100 157/157 ━━━━━━━━━━━━━━━━━━━━ 13s 81ms/step - mae: 0.1221 - loss: 
0.5407 - val_loss: 0.5473 - val_mae: 0.1196 Epoch 19/100 157/157 ━━━━━━━━━━━━━━━━━━━━ 13s 82ms/step - mae: 0.1209 - loss: 0.5412 - val_loss: 0.5511 - val_mae: 0.1176 Epoch 20/100 157/157 ━━━━━━━━━━━━━━━━━━━━ 13s 83ms/step - mae: 0.1202 - loss: 0.5422 - val_loss: 0.5515 - val_mae: 0.1167 Epoch 21/100 157/157 ━━━━━━━━━━━━━━━━━━━━ 0s 79ms/step - mae: 0.1186 - loss: 0.5430 Idx chosen: 188 </code></pre></div> </div> <p><img alt="png" src="/img/examples/vision/masked_image_modeling/masked_image_modeling_34_9.png" /></p> <div class="k-default-codeblock"> <div class="codehilite"><pre><span></span><code> 157/157 ━━━━━━━━━━━━━━━━━━━━ 13s 85ms/step - mae: 0.1186 - loss: 0.5430 - val_loss: 0.5546 - val_mae: 0.1168 Epoch 22/100 157/157 ━━━━━━━━━━━━━━━━━━━━ 13s 82ms/step - mae: 0.1171 - loss: 0.5446 - val_loss: 0.5500 - val_mae: 0.1155 Epoch 23/100 157/157 ━━━━━━━━━━━━━━━━━━━━ 13s 81ms/step - mae: 0.1161 - loss: 0.5457 - val_loss: 0.5559 - val_mae: 0.1135 Epoch 24/100 157/157 ━━━━━━━━━━━━━━━━━━━━ 13s 83ms/step - mae: 0.1135 - loss: 0.5479 - val_loss: 0.5521 - val_mae: 0.1112 Epoch 25/100 157/157 ━━━━━━━━━━━━━━━━━━━━ 13s 82ms/step - mae: 0.1128 - loss: 0.5480 - val_loss: 0.5505 - val_mae: 0.1122 Epoch 26/100 157/157 ━━━━━━━━━━━━━━━━━━━━ 0s 79ms/step - mae: 0.1123 - loss: 0.5470 Idx chosen: 20 </code></pre></div> </div> <p><img alt="png" src="/img/examples/vision/masked_image_modeling/masked_image_modeling_34_11.png" /></p> <div class="k-default-codeblock"> <div class="codehilite"><pre><span></span><code> 157/157 ━━━━━━━━━━━━━━━━━━━━ 13s 86ms/step - mae: 0.1123 - loss: 0.5470 - val_loss: 0.5572 - val_mae: 0.1127 Epoch 27/100 157/157 ━━━━━━━━━━━━━━━━━━━━ 13s 82ms/step - mae: 0.1114 - loss: 0.5487 - val_loss: 0.5555 - val_mae: 0.1092 Epoch 28/100 157/157 ━━━━━━━━━━━━━━━━━━━━ 13s 81ms/step - mae: 0.1108 - loss: 0.5492 - val_loss: 0.5569 - val_mae: 0.1110 Epoch 29/100 157/157 ━━━━━━━━━━━━━━━━━━━━ 13s 83ms/step - mae: 0.1104 - loss: 0.5491 - val_loss: 0.5517 - val_mae: 0.1110 Epoch 
30/100 157/157 ━━━━━━━━━━━━━━━━━━━━ 13s 82ms/step - mae: 0.1099 - loss: 0.5490 - val_loss: 0.5543 - val_mae: 0.1104 Epoch 31/100 157/157 ━━━━━━━━━━━━━━━━━━━━ 0s 79ms/step - mae: 0.1095 - loss: 0.5501 Idx chosen: 102 </code></pre></div> </div> <p><img alt="png" src="/img/examples/vision/masked_image_modeling/masked_image_modeling_34_13.png" /></p> <div class="k-default-codeblock"> <div class="codehilite"><pre><span></span><code> 157/157 ━━━━━━━━━━━━━━━━━━━━ 13s 86ms/step - mae: 0.1095 - loss: 0.5501 - val_loss: 0.5578 - val_mae: 0.1108 Epoch 32/100 157/157 ━━━━━━━━━━━━━━━━━━━━ 13s 81ms/step - mae: 0.1089 - loss: 0.5503 - val_loss: 0.5620 - val_mae: 0.1081 Epoch 33/100 157/157 ━━━━━━━━━━━━━━━━━━━━ 13s 81ms/step - mae: 0.1079 - loss: 0.5509 - val_loss: 0.5618 - val_mae: 0.1067 Epoch 34/100 157/157 ━━━━━━━━━━━━━━━━━━━━ 13s 83ms/step - mae: 0.1067 - loss: 0.5524 - val_loss: 0.5627 - val_mae: 0.1059 Epoch 35/100 157/157 ━━━━━━━━━━━━━━━━━━━━ 13s 81ms/step - mae: 0.1068 - loss: 0.5515 - val_loss: 0.5576 - val_mae: 0.1050 Epoch 36/100 157/157 ━━━━━━━━━━━━━━━━━━━━ 0s 79ms/step - mae: 0.1057 - loss: 0.5526 Idx chosen: 121 </code></pre></div> </div> <p><img alt="png" src="/img/examples/vision/masked_image_modeling/masked_image_modeling_34_15.png" /></p> <div class="k-default-codeblock"> <div class="codehilite"><pre><span></span><code> 157/157 ━━━━━━━━━━━━━━━━━━━━ 13s 86ms/step - mae: 0.1057 - loss: 0.5526 - val_loss: 0.5627 - val_mae: 0.1050 Epoch 37/100 157/157 ━━━━━━━━━━━━━━━━━━━━ 13s 81ms/step - mae: 0.1065 - loss: 0.5534 - val_loss: 0.5638 - val_mae: 0.1050 Epoch 38/100 157/157 ━━━━━━━━━━━━━━━━━━━━ 13s 83ms/step - mae: 0.1055 - loss: 0.5528 - val_loss: 0.5527 - val_mae: 0.1083 Epoch 39/100 157/157 ━━━━━━━━━━━━━━━━━━━━ 20s 82ms/step - mae: 0.1056 - loss: 0.5516 - val_loss: 0.5562 - val_mae: 0.1044 Epoch 40/100 157/157 ━━━━━━━━━━━━━━━━━━━━ 13s 82ms/step - mae: 0.1053 - loss: 0.5528 - val_loss: 0.5567 - val_mae: 0.1051 Epoch 41/100 157/157 ━━━━━━━━━━━━━━━━━━━━ 0s 78ms/step - 
mae: 0.1049 - loss: 0.5533 Idx chosen: 210 </code></pre></div> </div> <p><img alt="png" src="/img/examples/vision/masked_image_modeling/masked_image_modeling_34_17.png" /></p> <div class="k-default-codeblock"> <div class="codehilite"><pre><span></span><code> 157/157 ━━━━━━━━━━━━━━━━━━━━ 13s 85ms/step - mae: 0.1049 - loss: 0.5533 - val_loss: 0.5620 - val_mae: 0.1030 Epoch 42/100 157/157 ━━━━━━━━━━━━━━━━━━━━ 13s 83ms/step - mae: 0.1041 - loss: 0.5534 - val_loss: 0.5650 - val_mae: 0.1052 Epoch 43/100 157/157 ━━━━━━━━━━━━━━━━━━━━ 13s 82ms/step - mae: 0.1048 - loss: 0.5526 - val_loss: 0.5619 - val_mae: 0.1027 Epoch 44/100 157/157 ━━━━━━━━━━━━━━━━━━━━ 13s 81ms/step - mae: 0.1037 - loss: 0.5543 - val_loss: 0.5615 - val_mae: 0.1031 Epoch 45/100 157/157 ━━━━━━━━━━━━━━━━━━━━ 13s 81ms/step - mae: 0.1036 - loss: 0.5535 - val_loss: 0.5575 - val_mae: 0.1026 Epoch 46/100 157/157 ━━━━━━━━━━━━━━━━━━━━ 0s 78ms/step - mae: 0.1032 - loss: 0.5537 Idx chosen: 214 </code></pre></div> </div> <p><img alt="png" src="/img/examples/vision/masked_image_modeling/masked_image_modeling_34_19.png" /></p> <div class="k-default-codeblock"> <div class="codehilite"><pre><span></span><code> 157/157 ━━━━━━━━━━━━━━━━━━━━ 13s 85ms/step - mae: 0.1032 - loss: 0.5537 - val_loss: 0.5549 - val_mae: 0.1037 Epoch 47/100 157/157 ━━━━━━━━━━━━━━━━━━━━ 13s 84ms/step - mae: 0.1035 - loss: 0.5539 - val_loss: 0.5597 - val_mae: 0.1031 Epoch 48/100 157/157 ━━━━━━━━━━━━━━━━━━━━ 13s 81ms/step - mae: 0.1033 - loss: 0.5533 - val_loss: 0.5650 - val_mae: 0.1013 Epoch 49/100 157/157 ━━━━━━━━━━━━━━━━━━━━ 13s 82ms/step - mae: 0.1027 - loss: 0.5543 - val_loss: 0.5571 - val_mae: 0.1028 Epoch 50/100 157/157 ━━━━━━━━━━━━━━━━━━━━ 13s 81ms/step - mae: 0.1024 - loss: 0.5548 - val_loss: 0.5592 - val_mae: 0.1018 Epoch 51/100 157/157 ━━━━━━━━━━━━━━━━━━━━ 0s 78ms/step - mae: 0.1025 - loss: 0.5543 Idx chosen: 74 </code></pre></div> </div> <p><img alt="png" src="/img/examples/vision/masked_image_modeling/masked_image_modeling_34_21.png" 
/></p> <div class="k-default-codeblock"> <div class="codehilite"><pre><span></span><code> 157/157 ━━━━━━━━━━━━━━━━━━━━ 13s 85ms/step - mae: 0.1025 - loss: 0.5543 - val_loss: 0.5645 - val_mae: 0.1007 Epoch 52/100 157/157 ━━━━━━━━━━━━━━━━━━━━ 13s 83ms/step - mae: 0.1025 - loss: 0.5544 - val_loss: 0.5616 - val_mae: 0.1004 Epoch 53/100 157/157 ━━━━━━━━━━━━━━━━━━━━ 13s 81ms/step - mae: 0.1014 - loss: 0.5547 - val_loss: 0.5594 - val_mae: 0.1007 Epoch 54/100 157/157 ━━━━━━━━━━━━━━━━━━━━ 13s 81ms/step - mae: 0.1014 - loss: 0.5550 - val_loss: 0.5687 - val_mae: 0.1012 Epoch 55/100 157/157 ━━━━━━━━━━━━━━━━━━━━ 13s 81ms/step - mae: 0.1022 - loss: 0.5551 - val_loss: 0.5572 - val_mae: 0.1018 Epoch 56/100 157/157 ━━━━━━━━━━━━━━━━━━━━ 0s 79ms/step - mae: 0.1015 - loss: 0.5558 Idx chosen: 202 </code></pre></div> </div> <p><img alt="png" src="/img/examples/vision/masked_image_modeling/masked_image_modeling_34_23.png" /></p> <div class="k-default-codeblock"> <div class="codehilite"><pre><span></span><code> 157/157 ━━━━━━━━━━━━━━━━━━━━ 13s 86ms/step - mae: 0.1015 - loss: 0.5558 - val_loss: 0.5619 - val_mae: 0.0996 Epoch 57/100 157/157 ━━━━━━━━━━━━━━━━━━━━ 13s 81ms/step - mae: 0.1008 - loss: 0.5550 - val_loss: 0.5614 - val_mae: 0.0996 Epoch 58/100 157/157 ━━━━━━━━━━━━━━━━━━━━ 13s 81ms/step - mae: 0.1004 - loss: 0.5557 - val_loss: 0.5620 - val_mae: 0.0995 Epoch 59/100 157/157 ━━━━━━━━━━━━━━━━━━━━ 13s 81ms/step - mae: 0.1002 - loss: 0.5558 - val_loss: 0.5612 - val_mae: 0.0997 Epoch 60/100 157/157 ━━━━━━━━━━━━━━━━━━━━ 13s 80ms/step - mae: 0.1005 - loss: 0.5563 - val_loss: 0.5598 - val_mae: 0.1000 Epoch 61/100 157/157 ━━━━━━━━━━━━━━━━━━━━ 0s 79ms/step - mae: 0.1001 - loss: 0.5564 Idx chosen: 87 </code></pre></div> </div> <p><img alt="png" src="/img/examples/vision/masked_image_modeling/masked_image_modeling_34_25.png" /></p> <div class="k-default-codeblock"> <div class="codehilite"><pre><span></span><code> 157/157 ━━━━━━━━━━━━━━━━━━━━ 13s 86ms/step - mae: 0.1001 - loss: 0.5564 - val_loss: 
0.5606 - val_mae: 0.0998 Epoch 62/100 157/157 ━━━━━━━━━━━━━━━━━━━━ 13s 86ms/step - mae: 0.0998 - loss: 0.5562 - val_loss: 0.5643 - val_mae: 0.0988 Epoch 63/100 157/157 ━━━━━━━━━━━━━━━━━━━━ 13s 80ms/step - mae: 0.1001 - loss: 0.5556 - val_loss: 0.5657 - val_mae: 0.0985 Epoch 64/100 157/157 ━━━━━━━━━━━━━━━━━━━━ 13s 80ms/step - mae: 0.0998 - loss: 0.5566 - val_loss: 0.5624 - val_mae: 0.0989 Epoch 65/100 157/157 ━━━━━━━━━━━━━━━━━━━━ 13s 80ms/step - mae: 0.0994 - loss: 0.5564 - val_loss: 0.5576 - val_mae: 0.0999 Epoch 66/100 157/157 ━━━━━━━━━━━━━━━━━━━━ 0s 79ms/step - mae: 0.0993 - loss: 0.5567 Idx chosen: 116 </code></pre></div> </div> <p><img alt="png" src="/img/examples/vision/masked_image_modeling/masked_image_modeling_34_27.png" /></p> <div class="k-default-codeblock"> <div class="codehilite"><pre><span></span><code> 157/157 ━━━━━━━━━━━━━━━━━━━━ 13s 86ms/step - mae: 0.0993 - loss: 0.5567 - val_loss: 0.5572 - val_mae: 0.1000 Epoch 67/100 157/157 ━━━━━━━━━━━━━━━━━━━━ 13s 80ms/step - mae: 0.0990 - loss: 0.5570 - val_loss: 0.5619 - val_mae: 0.0981 Epoch 68/100 157/157 ━━━━━━━━━━━━━━━━━━━━ 13s 81ms/step - mae: 0.0987 - loss: 0.5578 - val_loss: 0.5644 - val_mae: 0.0973 Epoch 69/100 157/157 ━━━━━━━━━━━━━━━━━━━━ 13s 80ms/step - mae: 0.0981 - loss: 0.5577 - val_loss: 0.5639 - val_mae: 0.0976 Epoch 70/100 157/157 ━━━━━━━━━━━━━━━━━━━━ 13s 82ms/step - mae: 0.0986 - loss: 0.5563 - val_loss: 0.5601 - val_mae: 0.0989 Epoch 71/100 157/157 ━━━━━━━━━━━━━━━━━━━━ 0s 77ms/step - mae: 0.0982 - loss: 0.5578 Idx chosen: 99 </code></pre></div> </div> <p><img alt="png" src="/img/examples/vision/masked_image_modeling/masked_image_modeling_34_29.png" /></p> <div class="k-default-codeblock"> <div class="codehilite"><pre><span></span><code> 157/157 ━━━━━━━━━━━━━━━━━━━━ 13s 84ms/step - mae: 0.0982 - loss: 0.5577 - val_loss: 0.5628 - val_mae: 0.0970 Epoch 72/100 157/157 ━━━━━━━━━━━━━━━━━━━━ 13s 80ms/step - mae: 0.0979 - loss: 0.5569 - val_loss: 0.5637 - val_mae: 0.0968 Epoch 73/100 157/157 
━━━━━━━━━━━━━━━━━━━━ 13s 81ms/step - mae: 0.0979 - loss: 0.5575 - val_loss: 0.5606 - val_mae: 0.0975 Epoch 74/100 157/157 ━━━━━━━━━━━━━━━━━━━━ 13s 80ms/step - mae: 0.0977 - loss: 0.5572 - val_loss: 0.5628 - val_mae: 0.0967 Epoch 75/100 157/157 ━━━━━━━━━━━━━━━━━━━━ 13s 82ms/step - mae: 0.0975 - loss: 0.5572 - val_loss: 0.5631 - val_mae: 0.0964 Epoch 76/100 157/157 ━━━━━━━━━━━━━━━━━━━━ 0s 77ms/step - mae: 0.0973 - loss: 0.5580 Idx chosen: 103 </code></pre></div> </div> <p><img alt="png" src="/img/examples/vision/masked_image_modeling/masked_image_modeling_34_31.png" /></p> <div class="k-default-codeblock"> <div class="codehilite"><pre><span></span><code> 157/157 ━━━━━━━━━━━━━━━━━━━━ 13s 83ms/step - mae: 0.0973 - loss: 0.5579 - val_loss: 0.5628 - val_mae: 0.0967 Epoch 77/100 157/157 ━━━━━━━━━━━━━━━━━━━━ 13s 81ms/step - mae: 0.0974 - loss: 0.5579 - val_loss: 0.5638 - val_mae: 0.0963 Epoch 78/100 157/157 ━━━━━━━━━━━━━━━━━━━━ 13s 80ms/step - mae: 0.0968 - loss: 0.5585 - val_loss: 0.5615 - val_mae: 0.0967 Epoch 79/100 157/157 ━━━━━━━━━━━━━━━━━━━━ 13s 80ms/step - mae: 0.0969 - loss: 0.5578 - val_loss: 0.5641 - val_mae: 0.0959 Epoch 80/100 157/157 ━━━━━━━━━━━━━━━━━━━━ 13s 82ms/step - mae: 0.0967 - loss: 0.5584 - val_loss: 0.5619 - val_mae: 0.0962 Epoch 81/100 157/157 ━━━━━━━━━━━━━━━━━━━━ 0s 77ms/step - mae: 0.0965 - loss: 0.5578 Idx chosen: 151 </code></pre></div> </div> <p><img alt="png" src="/img/examples/vision/masked_image_modeling/masked_image_modeling_34_33.png" /></p> <div class="k-default-codeblock"> <div class="codehilite"><pre><span></span><code> 157/157 ━━━━━━━━━━━━━━━━━━━━ 13s 83ms/step - mae: 0.0965 - loss: 0.5578 - val_loss: 0.5651 - val_mae: 0.0957 Epoch 82/100 157/157 ━━━━━━━━━━━━━━━━━━━━ 13s 81ms/step - mae: 0.0965 - loss: 0.5583 - val_loss: 0.5644 - val_mae: 0.0957 Epoch 83/100 157/157 ━━━━━━━━━━━━━━━━━━━━ 13s 80ms/step - mae: 0.0962 - loss: 0.5584 - val_loss: 0.5649 - val_mae: 0.0954 Epoch 84/100 157/157 ━━━━━━━━━━━━━━━━━━━━ 13s 81ms/step - mae: 0.0962 - 
loss: 0.5586 - val_loss: 0.5611 - val_mae: 0.0962 Epoch 85/100 157/157 ━━━━━━━━━━━━━━━━━━━━ 13s 80ms/step - mae: 0.0961 - loss: 0.5582 - val_loss: 0.5638 - val_mae: 0.0956 Epoch 86/100 157/157 ━━━━━━━━━━━━━━━━━━━━ 0s 77ms/step - mae: 0.0961 - loss: 0.5584 Idx chosen: 130 </code></pre></div> </div> <p><img alt="png" src="/img/examples/vision/masked_image_modeling/masked_image_modeling_34_35.png" /></p> <div class="k-default-codeblock"> <div class="codehilite"><pre><span></span><code> 157/157 ━━━━━━━━━━━━━━━━━━━━ 13s 83ms/step - mae: 0.0961 - loss: 0.5584 - val_loss: 0.5641 - val_mae: 0.0954 Epoch 87/100 157/157 ━━━━━━━━━━━━━━━━━━━━ 13s 81ms/step - mae: 0.0959 - loss: 0.5580 - val_loss: 0.5641 - val_mae: 0.0953 Epoch 88/100 157/157 ━━━━━━━━━━━━━━━━━━━━ 13s 80ms/step - mae: 0.0960 - loss: 0.5583 - val_loss: 0.5642 - val_mae: 0.0953 Epoch 89/100 157/157 ━━━━━━━━━━━━━━━━━━━━ 13s 82ms/step - mae: 0.0958 - loss: 0.5591 - val_loss: 0.5635 - val_mae: 0.0953 Epoch 90/100 157/157 ━━━━━━━━━━━━━━━━━━━━ 13s 80ms/step - mae: 0.0957 - loss: 0.5587 - val_loss: 0.5648 - val_mae: 0.0948 Epoch 91/100 157/157 ━━━━━━━━━━━━━━━━━━━━ 0s 77ms/step - mae: 0.0957 - loss: 0.5585 Idx chosen: 149 </code></pre></div> </div> <p><img alt="png" src="/img/examples/vision/masked_image_modeling/masked_image_modeling_34_37.png" /></p> <div class="k-default-codeblock"> <div class="codehilite"><pre><span></span><code> 157/157 ━━━━━━━━━━━━━━━━━━━━ 13s 84ms/step - mae: 0.0957 - loss: 0.5585 - val_loss: 0.5636 - val_mae: 0.0952 Epoch 92/100 157/157 ━━━━━━━━━━━━━━━━━━━━ 13s 81ms/step - mae: 0.0957 - loss: 0.5593 - val_loss: 0.5642 - val_mae: 0.0950 Epoch 93/100 157/157 ━━━━━━━━━━━━━━━━━━━━ 13s 81ms/step - mae: 0.0957 - loss: 0.5598 - val_loss: 0.5635 - val_mae: 0.0950 Epoch 94/100 157/157 ━━━━━━━━━━━━━━━━━━━━ 13s 82ms/step - mae: 0.0956 - loss: 0.5587 - val_loss: 0.5641 - val_mae: 0.0950 Epoch 95/100 157/157 ━━━━━━━━━━━━━━━━━━━━ 13s 81ms/step - mae: 0.0955 - loss: 0.5587 - val_loss: 0.5637 - val_mae: 0.0950 
Epoch 96/100 157/157 ━━━━━━━━━━━━━━━━━━━━ 0s 77ms/step - mae: 0.0956 - loss: 0.5585 Idx chosen: 52 </code></pre></div> </div> <p><img alt="png" src="/img/examples/vision/masked_image_modeling/masked_image_modeling_34_39.png" /></p> <div class="k-default-codeblock"> <div class="codehilite"><pre><span></span><code> 157/157 ━━━━━━━━━━━━━━━━━━━━ 14s 87ms/step - mae: 0.0956 - loss: 0.5585 - val_loss: 0.5643 - val_mae: 0.0950 Epoch 97/100 157/157 ━━━━━━━━━━━━━━━━━━━━ 13s 81ms/step - mae: 0.0956 - loss: 0.5587 - val_loss: 0.5642 - val_mae: 0.0950 Epoch 98/100 157/157 ━━━━━━━━━━━━━━━━━━━━ 13s 82ms/step - mae: 0.0954 - loss: 0.5586 - val_loss: 0.5639 - val_mae: 0.0950 Epoch 99/100 157/157 ━━━━━━━━━━━━━━━━━━━━ 13s 80ms/step - mae: 0.0954 - loss: 0.5580 - val_loss: 0.5641 - val_mae: 0.0950 Epoch 100/100 157/157 ━━━━━━━━━━━━━━━━━━━━ 13s 80ms/step - mae: 0.0955 - loss: 0.5587 - val_loss: 0.5639 - val_mae: 0.0951 40/40 ━━━━━━━━━━━━━━━━━━━━ 1s 13ms/step - mae: 0.0955 - loss: 0.5684 Loss: 0.57 MAE: 0.10 </code></pre></div> </div> <hr /> <h2 id="evaluation-with-linear-probing">Evaluation with linear probing</h2> <h3 id="extract-the-encoder-model-along-with-other-layers">Extract the encoder model along with other layers</h3> <div class="codehilite"><pre><span></span><code><span class="c1"># Extract the augmentation layers.</span> <span class="n">train_augmentation_model</span> <span class="o">=</span> <span class="n">mae_model</span><span class="o">.</span><span class="n">train_augmentation_model</span> <span class="n">test_augmentation_model</span> <span class="o">=</span> <span class="n">mae_model</span><span class="o">.</span><span class="n">test_augmentation_model</span> <span class="c1"># Extract the patchers.</span> <span class="n">patch_layer</span> <span class="o">=</span> <span class="n">mae_model</span><span class="o">.</span><span class="n">patch_layer</span> <span class="n">patch_encoder</span> <span class="o">=</span> <span class="n">mae_model</span><span 
class="o">.</span><span class="n">patch_encoder</span> <span class="n">patch_encoder</span><span class="o">.</span><span class="n">downstream</span> <span class="o">=</span> <span class="kc">True</span> <span class="c1"># Switch the downstream flag to True.</span> <span class="c1"># Extract the encoder.</span> <span class="n">encoder</span> <span class="o">=</span> <span class="n">mae_model</span><span class="o">.</span><span class="n">encoder</span> <span class="c1"># Pack as a model.</span> <span class="n">downstream_model</span> <span class="o">=</span> <span class="n">keras</span><span class="o">.</span><span class="n">Sequential</span><span class="p">(</span> <span class="p">[</span> <span class="n">layers</span><span class="o">.</span><span class="n">Input</span><span class="p">((</span><span class="n">IMAGE_SIZE</span><span class="p">,</span> <span class="n">IMAGE_SIZE</span><span class="p">,</span> <span class="mi">3</span><span class="p">)),</span> <span class="n">patch_layer</span><span class="p">,</span> <span class="n">patch_encoder</span><span class="p">,</span> <span class="n">encoder</span><span class="p">,</span> <span class="n">layers</span><span class="o">.</span><span class="n">BatchNormalization</span><span class="p">(),</span> <span class="c1"># Refer to A.1 (Linear probing).</span> <span class="n">layers</span><span class="o">.</span><span class="n">GlobalAveragePooling1D</span><span class="p">(),</span> <span class="n">layers</span><span class="o">.</span><span class="n">Dense</span><span class="p">(</span><span class="n">NUM_CLASSES</span><span class="p">,</span> <span class="n">activation</span><span class="o">=</span><span class="s2">&quot;softmax&quot;</span><span class="p">),</span> <span class="p">],</span> <span class="n">name</span><span class="o">=</span><span class="s2">&quot;linear_probe_model&quot;</span><span class="p">,</span> <span class="p">)</span> <span class="c1"># Only the final classification layer of the 
`downstream_model` should be trainable.</span> <span class="k">for</span> <span class="n">layer</span> <span class="ow">in</span> <span class="n">downstream_model</span><span class="o">.</span><span class="n">layers</span><span class="p">[:</span><span class="o">-</span><span class="mi">1</span><span class="p">]:</span> <span class="n">layer</span><span class="o">.</span><span class="n">trainable</span> <span class="o">=</span> <span class="kc">False</span> <span class="n">downstream_model</span><span class="o">.</span><span class="n">summary</span><span class="p">()</span> </code></pre></div> <pre style="white-space:pre;overflow-x:auto;line-height:normal;font-family:Menlo,'DejaVu Sans Mono',consolas,'Courier New',monospace"><span style="font-weight: bold">Model: "linear_probe_model"</span> </pre> <pre style="white-space:pre;overflow-x:auto;line-height:normal;font-family:Menlo,'DejaVu Sans Mono',consolas,'Courier New',monospace">┏━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━┓ ┃<span style="font-weight: bold"> Layer (type) </span>┃<span style="font-weight: bold"> Output Shape </span>┃<span style="font-weight: bold"> Param # </span>┃ ┡━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━┩ │ patches_1 (<span style="color: #0087ff; text-decoration-color: #0087ff">Patches</span>) │ (<span style="color: #00d7ff; text-decoration-color: #00d7ff">None</span>, <span style="color: #00af00; text-decoration-color: #00af00">64</span>, <span style="color: #00af00; text-decoration-color: #00af00">108</span>) │ <span style="color: #00af00; text-decoration-color: #00af00">0</span> │ ├─────────────────────────────────┼───────────────────────────┼────────────┤ │ patch_encoder_1 (<span style="color: #0087ff; text-decoration-color: #0087ff">PatchEncoder</span>) │ (<span style="color: #00d7ff; text-decoration-color: #00d7ff">None</span>, <span style="color: #00af00; text-decoration-color: #00af00">64</span>, <span style="color: #00af00; 
text-decoration-color: #00af00">128</span>) │ <span style="color: #00af00; text-decoration-color: #00af00">22,144</span> │ ├─────────────────────────────────┼───────────────────────────┼────────────┤ │ mae_encoder (<span style="color: #0087ff; text-decoration-color: #0087ff">Functional</span>) │ (<span style="color: #00d7ff; text-decoration-color: #00d7ff">None</span>, <span style="color: #00af00; text-decoration-color: #00af00">64</span>, <span style="color: #00af00; text-decoration-color: #00af00">128</span>) │ <span style="color: #00af00; text-decoration-color: #00af00">1,981,696</span> │ ├─────────────────────────────────┼───────────────────────────┼────────────┤ │ batch_normalization │ (<span style="color: #00d7ff; text-decoration-color: #00d7ff">None</span>, <span style="color: #00af00; text-decoration-color: #00af00">64</span>, <span style="color: #00af00; text-decoration-color: #00af00">128</span>) │ <span style="color: #00af00; text-decoration-color: #00af00">512</span> │ │ (<span style="color: #0087ff; text-decoration-color: #0087ff">BatchNormalization</span>) │ │ │ ├─────────────────────────────────┼───────────────────────────┼────────────┤ │ global_average_pooling1d │ (<span style="color: #00d7ff; text-decoration-color: #00d7ff">None</span>, <span style="color: #00af00; text-decoration-color: #00af00">128</span>) │ <span style="color: #00af00; text-decoration-color: #00af00">0</span> │ │ (<span style="color: #0087ff; text-decoration-color: #0087ff">GlobalAveragePooling1D</span>) │ │ │ ├─────────────────────────────────┼───────────────────────────┼────────────┤ │ dense_20 (<span style="color: #0087ff; text-decoration-color: #0087ff">Dense</span>) │ (<span style="color: #00d7ff; text-decoration-color: #00d7ff">None</span>, <span style="color: #00af00; text-decoration-color: #00af00">10</span>) │ <span style="color: #00af00; text-decoration-color: #00af00">1,290</span> │ └─────────────────────────────────┴───────────────────────────┴────────────┘ </pre> 
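As a quick sanity check on the summary, the trainable count comes entirely from the final `Dense` classification head; a minimal back-of-the-envelope sketch in plain Python (the constant names here are illustrative, with shapes read off the summary table above):

```python
# Back-of-the-envelope check that freezing everything except the Dense
# head leaves exactly 1,290 trainable parameters: the head's 128x10
# kernel plus its 10 biases.
ENC_PROJECTION_DIM = 128  # encoder output width per patch (from the summary)
NUM_CLASSES = 10          # CIFAR-10

head_kernel = ENC_PROJECTION_DIM * NUM_CLASSES  # 1280
head_bias = NUM_CLASSES                         # 10
trainable_params = head_kernel + head_bias
print(trainable_params)  # 1290
```

Everything upstream of the head (patch layer, patch encoder, transformer encoder, batch normalization) is frozen, which is what makes linear probing a cheap evaluation of representation quality.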
<pre style="white-space:pre;overflow-x:auto;line-height:normal;font-family:Menlo,'DejaVu Sans Mono',consolas,'Courier New',monospace"><span style="font-weight: bold"> Total params: </span><span style="color: #00af00; text-decoration-color: #00af00">2,005,642</span> (7.65 MB) </pre> <pre style="white-space:pre;overflow-x:auto;line-height:normal;font-family:Menlo,'DejaVu Sans Mono',consolas,'Courier New',monospace"><span style="font-weight: bold"> Trainable params: </span><span style="color: #00af00; text-decoration-color: #00af00">1,290</span> (5.04 KB) </pre> <pre style="white-space:pre;overflow-x:auto;line-height:normal;font-family:Menlo,'DejaVu Sans Mono',consolas,'Courier New',monospace"><span style="font-weight: bold"> Non-trainable params: </span><span style="color: #00af00; text-decoration-color: #00af00">2,004,352</span> (7.65 MB) </pre> <p>We are using average pooling to extract learned representations from the MAE encoder. Another approach would be to use a learnable dummy token inside the encoder during pretraining (resembling the [CLS] token). 
Then we can extract representations from that token during the downstream tasks.</p> <h3 id="prepare-datasets-for-linear-probing">Prepare datasets for linear probing</h3> <div class="codehilite"><pre><span></span><code><span class="k">def</span><span class="w"> </span><span class="nf">prepare_data</span><span class="p">(</span><span class="n">images</span><span class="p">,</span> <span class="n">labels</span><span class="p">,</span> <span class="n">is_train</span><span class="o">=</span><span class="kc">True</span><span class="p">):</span> <span class="k">if</span> <span class="n">is_train</span><span class="p">:</span> <span class="n">augmentation_model</span> <span class="o">=</span> <span class="n">train_augmentation_model</span> <span class="k">else</span><span class="p">:</span> <span class="n">augmentation_model</span> <span class="o">=</span> <span class="n">test_augmentation_model</span> <span class="n">dataset</span> <span class="o">=</span> <span class="n">tf</span><span class="o">.</span><span class="n">data</span><span class="o">.</span><span class="n">Dataset</span><span class="o">.</span><span class="n">from_tensor_slices</span><span class="p">((</span><span class="n">images</span><span class="p">,</span> <span class="n">labels</span><span class="p">))</span> <span class="k">if</span> <span class="n">is_train</span><span class="p">:</span> <span class="n">dataset</span> <span class="o">=</span> <span class="n">dataset</span><span class="o">.</span><span class="n">shuffle</span><span class="p">(</span><span class="n">BUFFER_SIZE</span><span class="p">)</span> <span class="n">dataset</span> <span class="o">=</span> <span class="n">dataset</span><span class="o">.</span><span class="n">batch</span><span class="p">(</span><span class="n">BATCH_SIZE</span><span class="p">)</span><span class="o">.</span><span class="n">map</span><span class="p">(</span> <span class="k">lambda</span> <span class="n">x</span><span class="p">,</span> <span 
class="n">y</span><span class="p">:</span> <span class="p">(</span><span class="n">augmentation_model</span><span class="p">(</span><span class="n">x</span><span class="p">),</span> <span class="n">y</span><span class="p">),</span> <span class="n">num_parallel_calls</span><span class="o">=</span><span class="n">AUTO</span> <span class="p">)</span> <span class="k">return</span> <span class="n">dataset</span><span class="o">.</span><span class="n">prefetch</span><span class="p">(</span><span class="n">AUTO</span><span class="p">)</span> <span class="n">train_ds</span> <span class="o">=</span> <span class="n">prepare_data</span><span class="p">(</span><span class="n">x_train</span><span class="p">,</span> <span class="n">y_train</span><span class="p">)</span> <span class="n">val_ds</span> <span class="o">=</span> <span class="n">prepare_data</span><span class="p">(</span><span class="n">x_train</span><span class="p">,</span> <span class="n">y_train</span><span class="p">,</span> <span class="n">is_train</span><span class="o">=</span><span class="kc">False</span><span class="p">)</span> <span class="n">test_ds</span> <span class="o">=</span> <span class="n">prepare_data</span><span class="p">(</span><span class="n">x_test</span><span class="p">,</span> <span class="n">y_test</span><span class="p">,</span> <span class="n">is_train</span><span class="o">=</span><span class="kc">False</span><span class="p">)</span> </code></pre></div> <h3 id="perform-linear-probing">Perform linear probing</h3> <div class="codehilite"><pre><span></span><code><span class="n">linear_probe_epochs</span> <span class="o">=</span> <span class="mi">50</span> <span class="n">linear_prob_lr</span> <span class="o">=</span> <span class="mf">0.1</span> <span class="n">warm_epoch_percentage</span> <span class="o">=</span> <span class="mf">0.1</span> <span class="n">steps</span> <span class="o">=</span> <span class="nb">int</span><span class="p">((</span><span class="nb">len</span><span 
class="p">(</span><span class="n">x_train</span><span class="p">)</span> <span class="o">//</span> <span class="n">BATCH_SIZE</span><span class="p">)</span> <span class="o">*</span> <span class="n">linear_probe_epochs</span><span class="p">)</span> <span class="n">warmup_steps</span> <span class="o">=</span> <span class="nb">int</span><span class="p">(</span><span class="n">steps</span> <span class="o">*</span> <span class="n">warm_epoch_percentage</span><span class="p">)</span> <span class="n">scheduled_lrs</span> <span class="o">=</span> <span class="n">WarmUpCosine</span><span class="p">(</span> <span class="n">learning_rate_base</span><span class="o">=</span><span class="n">linear_prob_lr</span><span class="p">,</span> <span class="n">total_steps</span><span class="o">=</span><span class="n">steps</span><span class="p">,</span> <span class="n">warmup_learning_rate</span><span class="o">=</span><span class="mf">0.0</span><span class="p">,</span> <span class="n">warmup_steps</span><span class="o">=</span><span class="n">warmup_steps</span><span class="p">,</span> <span class="p">)</span> <span class="n">optimizer</span> <span class="o">=</span> <span class="n">keras</span><span class="o">.</span><span class="n">optimizers</span><span class="o">.</span><span class="n">SGD</span><span class="p">(</span><span class="n">learning_rate</span><span class="o">=</span><span class="n">scheduled_lrs</span><span class="p">,</span> <span class="n">momentum</span><span class="o">=</span><span class="mf">0.9</span><span class="p">)</span> <span class="n">downstream_model</span><span class="o">.</span><span class="n">compile</span><span class="p">(</span> <span class="n">optimizer</span><span class="o">=</span><span class="n">optimizer</span><span class="p">,</span> <span class="n">loss</span><span class="o">=</span><span class="s2">&quot;sparse_categorical_crossentropy&quot;</span><span class="p">,</span> <span class="n">metrics</span><span class="o">=</span><span 
class="p">[</span><span class="s2">&quot;accuracy&quot;</span><span class="p">]</span> <span class="p">)</span> <span class="n">downstream_model</span><span class="o">.</span><span class="n">fit</span><span class="p">(</span><span class="n">train_ds</span><span class="p">,</span> <span class="n">validation_data</span><span class="o">=</span><span class="n">val_ds</span><span class="p">,</span> <span class="n">epochs</span><span class="o">=</span><span class="n">linear_probe_epochs</span><span class="p">)</span> <span class="n">loss</span><span class="p">,</span> <span class="n">accuracy</span> <span class="o">=</span> <span class="n">downstream_model</span><span class="o">.</span><span class="n">evaluate</span><span class="p">(</span><span class="n">test_ds</span><span class="p">)</span> <span class="n">accuracy</span> <span class="o">=</span> <span class="nb">round</span><span class="p">(</span><span class="n">accuracy</span> <span class="o">*</span> <span class="mi">100</span><span class="p">,</span> <span class="mi">2</span><span class="p">)</span> <span class="nb">print</span><span class="p">(</span><span class="sa">f</span><span class="s2">&quot;Accuracy on the test set: </span><span class="si">{</span><span class="n">accuracy</span><span class="si">}</span><span class="s2">%.&quot;</span><span class="p">)</span> </code></pre></div> <div class="k-default-codeblock"> <div class="codehilite"><pre><span></span><code>Epoch 1/50 7/157 ━━━━━━━━━━━━━━━━━━━━ 3s 21ms/step - accuracy: 0.1183 - loss: 3.3939 WARNING: All log messages before absl::InitializeLog() is called are written to STDERR I0000 00:00:1700264823.481598 64012 device_compiler.h:187] Compiled cluster using XLA! This line is logged at most once for the lifetime of the process. 
157/157 ━━━━━━━━━━━━━━━━━━━━ 70s 242ms/step - accuracy: 0.1967 - loss: 2.6073 - val_accuracy: 0.3631 - val_loss: 1.7846 Epoch 2/50 157/157 ━━━━━━━━━━━━━━━━━━━━ 5s 35ms/step - accuracy: 0.3521 - loss: 1.8063 - val_accuracy: 0.3677 - val_loss: 1.7301 Epoch 3/50 157/157 ━━━━━━━━━━━━━━━━━━━━ 5s 34ms/step - accuracy: 0.3580 - loss: 1.7580 - val_accuracy: 0.3649 - val_loss: 1.7326 Epoch 4/50 157/157 ━━━━━━━━━━━━━━━━━━━━ 5s 34ms/step - accuracy: 0.3617 - loss: 1.7471 - val_accuracy: 0.3810 - val_loss: 1.7353 Epoch 5/50 157/157 ━━━━━━━━━━━━━━━━━━━━ 6s 35ms/step - accuracy: 0.3547 - loss: 1.7728 - val_accuracy: 0.3526 - val_loss: 1.8496 Epoch 6/50 157/157 ━━━━━━━━━━━━━━━━━━━━ 6s 35ms/step - accuracy: 0.3546 - loss: 1.7866 - val_accuracy: 0.3896 - val_loss: 1.7583 Epoch 7/50 157/157 ━━━━━━━━━━━━━━━━━━━━ 6s 37ms/step - accuracy: 0.3587 - loss: 1.7924 - val_accuracy: 0.3674 - val_loss: 1.7729 Epoch 8/50 157/157 ━━━━━━━━━━━━━━━━━━━━ 6s 38ms/step - accuracy: 0.3616 - loss: 1.7912 - val_accuracy: 0.3685 - val_loss: 1.7928 Epoch 9/50 157/157 ━━━━━━━━━━━━━━━━━━━━ 6s 36ms/step - accuracy: 0.3707 - loss: 1.7543 - val_accuracy: 0.3568 - val_loss: 1.7943 Epoch 10/50 157/157 ━━━━━━━━━━━━━━━━━━━━ 5s 34ms/step - accuracy: 0.3719 - loss: 1.7451 - val_accuracy: 0.3859 - val_loss: 1.7230 Epoch 11/50 157/157 ━━━━━━━━━━━━━━━━━━━━ 5s 34ms/step - accuracy: 0.3781 - loss: 1.7384 - val_accuracy: 0.3711 - val_loss: 1.7608 Epoch 12/50 157/157 ━━━━━━━━━━━━━━━━━━━━ 6s 35ms/step - accuracy: 0.3791 - loss: 1.7249 - val_accuracy: 0.4004 - val_loss: 1.6961 Epoch 13/50 157/157 ━━━━━━━━━━━━━━━━━━━━ 5s 34ms/step - accuracy: 0.3818 - loss: 1.7303 - val_accuracy: 0.3501 - val_loss: 1.8506 Epoch 14/50 157/157 ━━━━━━━━━━━━━━━━━━━━ 5s 34ms/step - accuracy: 0.3841 - loss: 1.7179 - val_accuracy: 0.3810 - val_loss: 1.8033 Epoch 15/50 157/157 ━━━━━━━━━━━━━━━━━━━━ 5s 34ms/step - accuracy: 0.3818 - loss: 1.7172 - val_accuracy: 0.4168 - val_loss: 1.6507 Epoch 16/50 157/157 ━━━━━━━━━━━━━━━━━━━━ 6s 36ms/step - accuracy: 
0.3851 - loss: 1.7059 - val_accuracy: 0.3806 - val_loss: 1.7581 Epoch 17/50 157/157 ━━━━━━━━━━━━━━━━━━━━ 5s 34ms/step - accuracy: 0.3747 - loss: 1.7356 - val_accuracy: 0.4094 - val_loss: 1.6466 Epoch 18/50 157/157 ━━━━━━━━━━━━━━━━━━━━ 5s 35ms/step - accuracy: 0.3828 - loss: 1.7221 - val_accuracy: 0.4015 - val_loss: 1.6757 Epoch 19/50 157/157 ━━━━━━━━━━━━━━━━━━━━ 5s 34ms/step - accuracy: 0.3889 - loss: 1.6939 - val_accuracy: 0.4102 - val_loss: 1.6392 Epoch 20/50 157/157 ━━━━━━━━━━━━━━━━━━━━ 5s 34ms/step - accuracy: 0.3943 - loss: 1.6857 - val_accuracy: 0.4028 - val_loss: 1.6518 Epoch 21/50 157/157 ━━━━━━━━━━━━━━━━━━━━ 5s 34ms/step - accuracy: 0.3870 - loss: 1.6970 - val_accuracy: 0.3949 - val_loss: 1.7283 Epoch 22/50 157/157 ━━━━━━━━━━━━━━━━━━━━ 5s 34ms/step - accuracy: 0.3893 - loss: 1.6838 - val_accuracy: 0.4207 - val_loss: 1.6292 Epoch 23/50 157/157 ━━━━━━━━━━━━━━━━━━━━ 5s 35ms/step - accuracy: 0.4005 - loss: 1.6606 - val_accuracy: 0.4152 - val_loss: 1.6320 Epoch 24/50 157/157 ━━━━━━━━━━━━━━━━━━━━ 5s 34ms/step - accuracy: 0.3978 - loss: 1.6556 - val_accuracy: 0.4042 - val_loss: 1.6657 Epoch 25/50 157/157 ━━━━━━━━━━━━━━━━━━━━ 5s 34ms/step - accuracy: 0.4029 - loss: 1.6464 - val_accuracy: 0.4198 - val_loss: 1.6033 Epoch 26/50 157/157 ━━━━━━━━━━━━━━━━━━━━ 5s 34ms/step - accuracy: 0.3974 - loss: 1.6638 - val_accuracy: 0.4278 - val_loss: 1.5731 Epoch 27/50 157/157 ━━━━━━━━━━━━━━━━━━━━ 6s 37ms/step - accuracy: 0.4035 - loss: 1.6370 - val_accuracy: 0.4302 - val_loss: 1.5663 Epoch 28/50 157/157 ━━━━━━━━━━━━━━━━━━━━ 5s 34ms/step - accuracy: 0.4027 - loss: 1.6349 - val_accuracy: 0.4458 - val_loss: 1.5349 Epoch 29/50 157/157 ━━━━━━━━━━━━━━━━━━━━ 5s 34ms/step - accuracy: 0.4054 - loss: 1.6196 - val_accuracy: 0.4349 - val_loss: 1.5709 Epoch 30/50 157/157 ━━━━━━━━━━━━━━━━━━━━ 5s 35ms/step - accuracy: 0.4070 - loss: 1.6061 - val_accuracy: 0.4297 - val_loss: 1.5578 Epoch 31/50 157/157 ━━━━━━━━━━━━━━━━━━━━ 5s 34ms/step - accuracy: 0.4105 - loss: 1.6172 - val_accuracy: 0.4250 - 
val_loss: 1.5735 Epoch 32/50 157/157 ━━━━━━━━━━━━━━━━━━━━ 5s 34ms/step - accuracy: 0.4197 - loss: 1.5960 - val_accuracy: 0.4259 - val_loss: 1.5677 Epoch 33/50 157/157 ━━━━━━━━━━━━━━━━━━━━ 5s 34ms/step - accuracy: 0.4156 - loss: 1.5989 - val_accuracy: 0.4400 - val_loss: 1.5395 Epoch 34/50 157/157 ━━━━━━━━━━━━━━━━━━━━ 5s 35ms/step - accuracy: 0.4214 - loss: 1.5862 - val_accuracy: 0.4486 - val_loss: 1.5237 Epoch 35/50 157/157 ━━━━━━━━━━━━━━━━━━━━ 5s 34ms/step - accuracy: 0.4208 - loss: 1.5763 - val_accuracy: 0.4188 - val_loss: 1.5925 Epoch 36/50 157/157 ━━━━━━━━━━━━━━━━━━━━ 5s 34ms/step - accuracy: 0.4227 - loss: 1.5803 - val_accuracy: 0.4525 - val_loss: 1.5174 Epoch 37/50 157/157 ━━━━━━━━━━━━━━━━━━━━ 5s 34ms/step - accuracy: 0.4267 - loss: 1.5700 - val_accuracy: 0.4463 - val_loss: 1.5330 Epoch 38/50 157/157 ━━━━━━━━━━━━━━━━━━━━ 6s 37ms/step - accuracy: 0.4283 - loss: 1.5649 - val_accuracy: 0.4348 - val_loss: 1.5482 Epoch 39/50 157/157 ━━━━━━━━━━━━━━━━━━━━ 5s 34ms/step - accuracy: 0.4332 - loss: 1.5581 - val_accuracy: 0.4486 - val_loss: 1.5251 Epoch 40/50 157/157 ━━━━━━━━━━━━━━━━━━━━ 5s 34ms/step - accuracy: 0.4290 - loss: 1.5596 - val_accuracy: 0.4489 - val_loss: 1.5221 Epoch 41/50 157/157 ━━━━━━━━━━━━━━━━━━━━ 5s 34ms/step - accuracy: 0.4318 - loss: 1.5589 - val_accuracy: 0.4494 - val_loss: 1.5202 Epoch 42/50 157/157 ━━━━━━━━━━━━━━━━━━━━ 5s 34ms/step - accuracy: 0.4317 - loss: 1.5514 - val_accuracy: 0.4505 - val_loss: 1.5184 Epoch 43/50 157/157 ━━━━━━━━━━━━━━━━━━━━ 5s 34ms/step - accuracy: 0.4353 - loss: 1.5504 - val_accuracy: 0.4561 - val_loss: 1.5081 Epoch 44/50 157/157 ━━━━━━━━━━━━━━━━━━━━ 5s 34ms/step - accuracy: 0.4369 - loss: 1.5510 - val_accuracy: 0.4581 - val_loss: 1.5092 Epoch 45/50 157/157 ━━━━━━━━━━━━━━━━━━━━ 6s 35ms/step - accuracy: 0.4379 - loss: 1.5428 - val_accuracy: 0.4555 - val_loss: 1.5099 Epoch 46/50 157/157 ━━━━━━━━━━━━━━━━━━━━ 5s 34ms/step - accuracy: 0.4421 - loss: 1.5475 - val_accuracy: 0.4579 - val_loss: 1.5073 Epoch 47/50 157/157 
━━━━━━━━━━━━━━━━━━━━ 5s 34ms/step - accuracy: 0.4434 - loss: 1.5390 - val_accuracy: 0.4593 - val_loss: 1.5052 Epoch 48/50 157/157 ━━━━━━━━━━━━━━━━━━━━ 5s 34ms/step - accuracy: 0.4418 - loss: 1.5373 - val_accuracy: 0.4600 - val_loss: 1.5038 Epoch 49/50 157/157 ━━━━━━━━━━━━━━━━━━━━ 6s 38ms/step - accuracy: 0.4400 - loss: 1.5367 - val_accuracy: 0.4596 - val_loss: 1.5045 Epoch 50/50 157/157 ━━━━━━━━━━━━━━━━━━━━ 5s 35ms/step - accuracy: 0.4448 - loss: 1.5321 - val_accuracy: 0.4595 - val_loss: 1.5048 40/40 ━━━━━━━━━━━━━━━━━━━━ 3s 71ms/step - accuracy: 0.4496 - loss: 1.5088 Accuracy on the test set: 44.66%. </code></pre></div> </div> <p>We believe that with a more sophisticated hyperparameter tuning process and longer pretraining, it is possible to improve this performance further. For comparison, we took the encoder architecture and <a href="https://github.com/ariG23498/mae-scalable-vision-learners/blob/master/regular-classification.ipynb">trained it from scratch</a> in a fully supervised manner. This gave us ~76% test top-1 accuracy. The authors of MAE demonstrate strong performance on the ImageNet-1k dataset as well as on other downstream tasks like object detection and semantic segmentation.</p> <hr /> <h2 id="final-notes">Final notes</h2> <p>We refer interested readers to other self-supervised learning examples on keras.io:</p> <ul> <li><a href="https://keras.io/examples/vision/semisupervised_simclr/">SimCLR</a></li> <li><a href="https://keras.io/examples/vision/nnclr">NNCLR</a></li> <li><a href="https://keras.io/examples/vision/simsiam">SimSiam</a></li> </ul> <p>This idea of using BERT-flavored pretraining in computer vision was also explored in <a href="https://arxiv.org/abs/1906.02940">Selfie</a>, but it could not demonstrate strong results. Another concurrent work that explores the idea of masked image modeling is <a href="https://arxiv.org/abs/2111.09886">SimMIM</a>. 
Finally, as a fun fact, we, the authors of this example, also explored the idea of <a href="https://i.ibb.co/k5CpwDX/image.png">"reconstruction as a pretext task"</a> in 2020, but we could not prevent representation collapse, and hence did not get strong downstream performance.</p> <p>We would like to thank <a href="http://xinleic.xyz/">Xinlei Chen</a> (one of the authors of MAE) for helpful discussions. We are grateful to <a href="https://jarvislabs.ai/">JarvisLabs</a> and the <a href="https://developers.google.com/programs/experts/">Google Developers Experts</a> program for helping with GPU credits.</p> </div> <div class='k-outline'> <div class='k-outline-depth-1'> <a href='#masked-image-modeling-with-autoencoders'>Masked image modeling with Autoencoders</a> </div> <div class='k-outline-depth-2'> ◆ <a href='#introduction'>Introduction</a> </div> <div class='k-outline-depth-2'> ◆ <a href='#imports'>Imports</a> </div> <div class='k-outline-depth-2'> ◆ <a href='#hyperparameters-for-pretraining'>Hyperparameters for pretraining</a> </div> <div class='k-outline-depth-2'> ◆ <a href='#load-and-prepare-the-cifar10-dataset'>Load and prepare the CIFAR-10 dataset</a> </div> <div class='k-outline-depth-2'> ◆ <a href='#data-augmentation'>Data augmentation</a> </div> <div class='k-outline-depth-2'> ◆ <a href='#a-layer-for-extracting-patches-from-images'>A layer for extracting patches from images</a> </div> <div class='k-outline-depth-2'> ◆ <a href='#patch-encoding-with-masking'>Patch encoding with masking</a> </div> <div class='k-outline-depth-2'> ◆ <a href='#mlp'>MLP</a> </div> <div class='k-outline-depth-2'> ◆ <a href='#mae-encoder'>MAE encoder</a> </div> <div class='k-outline-depth-2'> ◆ <a href='#mae-decoder'>MAE decoder</a> </div> <div class='k-outline-depth-2'> ◆ <a href='#mae-trainer'>MAE trainer</a> </div> <div class='k-outline-depth-2'> ◆ <a href='#model-initialization'>Model initialization</a> </div> <div class='k-outline-depth-2'> ◆ <a 
href='#training-callbacks'>Training callbacks</a> </div> <div class='k-outline-depth-3'> <a href='#visualization-callback'>Visualization callback</a> </div> <div class='k-outline-depth-3'> <a href='#learning-rate-scheduler'>Learning rate scheduler</a> </div> <div class='k-outline-depth-2'> ◆ <a href='#model-compilation-and-training'>Model compilation and training</a> </div> <div class='k-outline-depth-2'> ◆ <a href='#evaluation-with-linear-probing'>Evaluation with linear probing</a> </div> <div class='k-outline-depth-3'> <a href='#extract-the-encoder-model-along-with-other-layers'>Extract the encoder model along with other layers</a> </div> <div class='k-outline-depth-3'> <a href='#prepare-datasets-for-linear-probing'>Prepare datasets for linear probing</a> </div> <div class='k-outline-depth-3'> <a href='#perform-linear-probing'>Perform linear probing</a> </div> <div class='k-outline-depth-2'> ◆ <a href='#final-notes'>Final notes</a> </div> </div> </div> </div> </div> </body> <footer style="float: left; width: 100%; padding: 1em; border-top: solid 1px #bbb;"> <a href="https://policies.google.com/terms">Terms</a> | <a href="https://policies.google.com/privacy">Privacy</a> </footer> </html>
