<!DOCTYPE html> <html lang="en"> <head> <meta charset="utf-8"> <meta name="viewport" content="width=device-width, initial-scale=1"> <meta name="description" content="Keras documentation"> <meta name="author" content="Keras Team"> <link rel="shortcut icon" href="https://keras.io/img/favicon.ico"> <link rel="canonical" href="https://keras.io/examples/vision/basnet_segmentation/" /> <!-- Social --> <meta property="og:title" content="Keras documentation: Highly accurate boundaries segmentation using BASNet"> <meta property="og:image" content="https://keras.io/img/logo-k-keras-wb.png"> <meta name="twitter:title" content="Keras documentation: Highly accurate boundaries segmentation using BASNet"> <meta name="twitter:image" content="https://keras.io/img/k-keras-social.png"> <meta name="twitter:card" content="summary"> <title>Highly accurate boundaries segmentation using BASNet</title> <!-- Bootstrap core CSS --> <link href="/css/bootstrap.min.css" rel="stylesheet"> <!-- Custom fonts for this template --> <link href="https://fonts.googleapis.com/css2?family=Open+Sans:wght@400;600;700;800&display=swap" rel="stylesheet"> <!-- Custom styles for this template --> <link href="/css/docs.css" rel="stylesheet"> <link href="/css/monokai.css" rel="stylesheet"> <!-- Google Tag Manager --> <script>(function(w,d,s,l,i){w[l]=w[l]||[];w[l].push({'gtm.start': new Date().getTime(),event:'gtm.js'});var f=d.getElementsByTagName(s)[0], j=d.createElement(s),dl=l!='dataLayer'?'&l='+l:'';j.async=true;j.src= 'https://www.googletagmanager.com/gtm.js?id='+i+dl;f.parentNode.insertBefore(j,f); })(window,document,'script','dataLayer','GTM-5DNGF4N'); </script> <script> (function(i,s,o,g,r,a,m){i['GoogleAnalyticsObject']=r;i[r]=i[r]||function(){ (i[r].q=i[r].q||[]).push(arguments)},i[r].l=1*new Date();a=s.createElement(o), m=s.getElementsByTagName(o)[0];a.async=1;a.src=g;m.parentNode.insertBefore(a,m) })(window,document,'script','https://www.google-analytics.com/analytics.js','ga'); ga('create', 
'UA-175165319-128', 'auto'); ga('send', 'pageview'); </script> <!-- End Google Tag Manager --> <script async defer src="https://buttons.github.io/buttons.js"></script> </head> <body> <!-- Google Tag Manager (noscript) --> <noscript><iframe src="https://www.googletagmanager.com/ns.html?id=GTM-5DNGF4N" height="0" width="0" style="display:none;visibility:hidden"></iframe></noscript> <!-- End Google Tag Manager (noscript) --> <div class='k-page'> <div class="k-nav" id="nav-menu"> <a href='/'><img src='/img/logo-small.png' class='logo-small' /></a> <div class="nav flex-column nav-pills" role="tablist" aria-orientation="vertical"> <a class="nav-link" href="/about/" role="tab" aria-selected="">About Keras</a> <a class="nav-link" href="/getting_started/" role="tab" aria-selected="">Getting started</a> <a class="nav-link" href="/guides/" role="tab" aria-selected="">Developer guides</a> <a class="nav-link" href="/api/" role="tab" aria-selected="">Keras 3 API documentation</a> <a class="nav-link" href="/2.18/api/" role="tab" aria-selected="">Keras 2 API documentation</a> <a class="nav-link active" href="/examples/" role="tab" aria-selected="">Code examples</a> <a class="nav-sublink active" href="/examples/vision/">Computer Vision</a> <a class="nav-sublink2" href="/examples/vision/image_classification_from_scratch/">Image classification from scratch</a> <a class="nav-sublink2" href="/examples/vision/mnist_convnet/">Simple MNIST convnet</a> <a class="nav-sublink2" href="/examples/vision/image_classification_efficientnet_fine_tuning/">Image classification via fine-tuning with EfficientNet</a> <a class="nav-sublink2" href="/examples/vision/image_classification_with_vision_transformer/">Image classification with Vision Transformer</a> <a class="nav-sublink2" href="/examples/vision/attention_mil_classification/">Classification using Attention-based Deep Multiple Instance Learning</a> <a class="nav-sublink2" href="/examples/vision/mlp_image_classification/">Image classification with 
modern MLP models</a> <a class="nav-sublink2" href="/examples/vision/mobilevit/">A mobile-friendly Transformer-based model for image classification</a> <a class="nav-sublink2" href="/examples/vision/xray_classification_with_tpus/">Pneumonia Classification on TPU</a> <a class="nav-sublink2" href="/examples/vision/cct/">Compact Convolutional Transformers</a> <a class="nav-sublink2" href="/examples/vision/convmixer/">Image classification with ConvMixer</a> <a class="nav-sublink2" href="/examples/vision/eanet/">Image classification with EANet (External Attention Transformer)</a> <a class="nav-sublink2" href="/examples/vision/involution/">Involutional neural networks</a> <a class="nav-sublink2" href="/examples/vision/perceiver_image_classification/">Image classification with Perceiver</a> <a class="nav-sublink2" href="/examples/vision/reptile/">Few-Shot learning with Reptile</a> <a class="nav-sublink2" href="/examples/vision/semisupervised_simclr/">Semi-supervised image classification using contrastive pretraining with SimCLR</a> <a class="nav-sublink2" href="/examples/vision/swin_transformers/">Image classification with Swin Transformers</a> <a class="nav-sublink2" href="/examples/vision/vit_small_ds/">Train a Vision Transformer on small datasets</a> <a class="nav-sublink2" href="/examples/vision/shiftvit/">A Vision Transformer without Attention</a> <a class="nav-sublink2" href="/examples/vision/image_classification_using_global_context_vision_transformer/">Image Classification using Global Context Vision Transformer</a> <a class="nav-sublink2" href="/examples/vision/oxford_pets_image_segmentation/">Image segmentation with a U-Net-like architecture</a> <a class="nav-sublink2" href="/examples/vision/deeplabv3_plus/">Multiclass semantic segmentation using DeepLabV3+</a> <a class="nav-sublink2 active" href="/examples/vision/basnet_segmentation/">Highly accurate boundaries segmentation using BASNet</a> <a class="nav-sublink2" 
href="/examples/vision/fully_convolutional_network/">Image Segmentation using Composable Fully-Convolutional Networks</a> <a class="nav-sublink2" href="/examples/vision/retinanet/">Object Detection with RetinaNet</a> <a class="nav-sublink2" href="/examples/vision/keypoint_detection/">Keypoint Detection with Transfer Learning</a> <a class="nav-sublink2" href="/examples/vision/object_detection_using_vision_transformer/">Object detection with Vision Transformers</a> <a class="nav-sublink2" href="/examples/vision/3D_image_classification/">3D image classification from CT scans</a> <a class="nav-sublink2" href="/examples/vision/depth_estimation/">Monocular depth estimation</a> <a class="nav-sublink2" href="/examples/vision/nerf/">3D volumetric rendering with NeRF</a> <a class="nav-sublink2" href="/examples/vision/pointnet_segmentation/">Point cloud segmentation with PointNet</a> <a class="nav-sublink2" href="/examples/vision/pointnet/">Point cloud classification</a> <a class="nav-sublink2" href="/examples/vision/captcha_ocr/">OCR model for reading Captchas</a> <a class="nav-sublink2" href="/examples/vision/handwriting_recognition/">Handwriting recognition</a> <a class="nav-sublink2" href="/examples/vision/autoencoder/">Convolutional autoencoder for image denoising</a> <a class="nav-sublink2" href="/examples/vision/mirnet/">Low-light image enhancement using MIRNet</a> <a class="nav-sublink2" href="/examples/vision/super_resolution_sub_pixel/">Image Super-Resolution using an Efficient Sub-Pixel CNN</a> <a class="nav-sublink2" href="/examples/vision/edsr/">Enhanced Deep Residual Networks for single-image super-resolution</a> <a class="nav-sublink2" href="/examples/vision/zero_dce/">Zero-DCE for low-light image enhancement</a> <a class="nav-sublink2" href="/examples/vision/cutmix/">CutMix data augmentation for image classification</a> <a class="nav-sublink2" href="/examples/vision/mixup/">MixUp augmentation for image classification</a> <a class="nav-sublink2" 
href="/examples/vision/randaugment/">RandAugment for Image Classification for Improved Robustness</a> <a class="nav-sublink2" href="/examples/vision/image_captioning/">Image captioning</a> <a class="nav-sublink2" href="/examples/vision/nl_image_search/">Natural language image search with a Dual Encoder</a> <a class="nav-sublink2" href="/examples/vision/visualizing_what_convnets_learn/">Visualizing what convnets learn</a> <a class="nav-sublink2" href="/examples/vision/integrated_gradients/">Model interpretability with Integrated Gradients</a> <a class="nav-sublink2" href="/examples/vision/probing_vits/">Investigating Vision Transformer representations</a> <a class="nav-sublink2" href="/examples/vision/grad_cam/">Grad-CAM class activation visualization</a> <a class="nav-sublink2" href="/examples/vision/near_dup_search/">Near-duplicate image search</a> <a class="nav-sublink2" href="/examples/vision/semantic_image_clustering/">Semantic Image Clustering</a> <a class="nav-sublink2" href="/examples/vision/siamese_contrastive/">Image similarity estimation using a Siamese Network with a contrastive loss</a> <a class="nav-sublink2" href="/examples/vision/siamese_network/">Image similarity estimation using a Siamese Network with a triplet loss</a> <a class="nav-sublink2" href="/examples/vision/metric_learning/">Metric learning for image similarity search</a> <a class="nav-sublink2" href="/examples/vision/metric_learning_tf_similarity/">Metric learning for image similarity search using TensorFlow Similarity</a> <a class="nav-sublink2" href="/examples/vision/nnclr/">Self-supervised contrastive learning with NNCLR</a> <a class="nav-sublink2" href="/examples/vision/video_classification/">Video Classification with a CNN-RNN Architecture</a> <a class="nav-sublink2" href="/examples/vision/conv_lstm/">Next-Frame Video Prediction with Convolutional LSTMs</a> <a class="nav-sublink2" href="/examples/vision/video_transformers/">Video Classification with Transformers</a> <a 
class="nav-sublink2" href="/examples/vision/vivit/">Video Vision Transformer</a> <a class="nav-sublink2" href="/examples/vision/bit/">Image Classification using BigTransfer (BiT)</a> <a class="nav-sublink2" href="/examples/vision/gradient_centralization/">Gradient Centralization for Better Training Performance</a> <a class="nav-sublink2" href="/examples/vision/token_learner/">Learning to tokenize in Vision Transformers</a> <a class="nav-sublink2" href="/examples/vision/knowledge_distillation/">Knowledge Distillation</a> <a class="nav-sublink2" href="/examples/vision/fixres/">FixRes: Fixing train-test resolution discrepancy</a> <a class="nav-sublink2" href="/examples/vision/cait/">Class Attention Image Transformers with LayerScale</a> <a class="nav-sublink2" href="/examples/vision/patch_convnet/">Augmenting convnets with aggregated attention</a> <a class="nav-sublink2" href="/examples/vision/learnable_resizer/">Learning to Resize</a> <a class="nav-sublink2" href="/examples/vision/adamatch/">Semi-supervision and domain adaptation with AdaMatch</a> <a class="nav-sublink2" href="/examples/vision/barlow_twins/">Barlow Twins for Contrastive SSL</a> <a class="nav-sublink2" href="/examples/vision/consistency_training/">Consistency training with supervision</a> <a class="nav-sublink2" href="/examples/vision/deit/">Distilling Vision Transformers</a> <a class="nav-sublink2" href="/examples/vision/focal_modulation_network/">Focal Modulation: A replacement for Self-Attention</a> <a class="nav-sublink2" href="/examples/vision/forwardforward/">Using the Forward-Forward Algorithm for Image Classification</a> <a class="nav-sublink2" href="/examples/vision/masked_image_modeling/">Masked image modeling with Autoencoders</a> <a class="nav-sublink2" href="/examples/vision/sam/">Segment Anything Model with 🤗Transformers</a> <a class="nav-sublink2" href="/examples/vision/segformer/">Semantic segmentation with SegFormer and Hugging Face Transformers</a> <a class="nav-sublink2" 
href="/examples/vision/simsiam/">Self-supervised contrastive learning with SimSiam</a> <a class="nav-sublink2" href="/examples/vision/supervised-contrastive-learning/">Supervised Contrastive Learning</a> <a class="nav-sublink2" href="/examples/vision/temporal_latent_bottleneck/">When Recurrence meets Transformers</a> <a class="nav-sublink2" href="/examples/vision/yolov8/">Efficient Object Detection with YOLOV8 and KerasCV</a> <a class="nav-sublink" href="/examples/nlp/">Natural Language Processing</a> <a class="nav-sublink" href="/examples/structured_data/">Structured Data</a> <a class="nav-sublink" href="/examples/timeseries/">Timeseries</a> <a class="nav-sublink" href="/examples/generative/">Generative Deep Learning</a> <a class="nav-sublink" href="/examples/audio/">Audio Data</a> <a class="nav-sublink" href="/examples/rl/">Reinforcement Learning</a> <a class="nav-sublink" href="/examples/graph/">Graph Data</a> <a class="nav-sublink" href="/examples/keras_recipes/">Quick Keras Recipes</a> <a class="nav-link" href="/keras_tuner/" role="tab" aria-selected="">KerasTuner: Hyperparameter Tuning</a> <a class="nav-link" href="/keras_hub/" role="tab" aria-selected="">KerasHub: Pretrained Models</a> <a class="nav-link" href="/keras_cv/" role="tab" aria-selected="">KerasCV: Computer Vision Workflows</a> <a class="nav-link" href="/keras_nlp/" role="tab" aria-selected="">KerasNLP: Natural Language Workflows</a> </div> </div> <div class='k-main'> <div class='k-main-top'> <script> function displayDropdownMenu() { e = document.getElementById("nav-menu"); if (e.style.display == "block") { e.style.display = "none"; } else { e.style.display = "block"; document.getElementById("dropdown-nav").style.display = "block"; } } function resetMobileUI() { if (window.innerWidth <= 840) { document.getElementById("nav-menu").style.display = "none"; document.getElementById("dropdown-nav").style.display = "block"; } else { document.getElementById("nav-menu").style.display = "block"; 
document.getElementById("dropdown-nav").style.display = "none"; } var navmenu = document.getElementById("nav-menu"); var menuheight = navmenu.clientHeight; var kmain = document.getElementById("k-main-id"); kmain.style.minHeight = (menuheight + 100) + 'px'; } window.onresize = resetMobileUI; window.addEventListener("load", (event) => { resetMobileUI() }); </script> <div id='dropdown-nav' onclick="displayDropdownMenu();"> <svg viewBox="-20 -20 120 120" width="60" height="60"> <rect width="100" height="20"></rect> <rect y="30" width="100" height="20"></rect> <rect y="60" width="100" height="20"></rect> </svg> </div> <form class="bd-search d-flex align-items-center k-search-form" id="search-form"> <input type="search" class="k-search-input" id="search-input" placeholder="Search Keras documentation..." aria-label="Search Keras documentation..." autocomplete="off"> <button class="k-search-btn"> <svg width="13" height="13" viewBox="0 0 13 13"><title>search</title><path d="m4.8495 7.8226c0.82666 0 1.5262-0.29146 2.0985-0.87438 0.57232-0.58292 0.86378-1.2877 0.87438-2.1144 0.010599-0.82666-0.28086-1.5262-0.87438-2.0985-0.59352-0.57232-1.293-0.86378-2.0985-0.87438-0.8055-0.010599-1.5103 0.28086-2.1144 0.87438-0.60414 0.59352-0.8956 1.293-0.87438 2.0985 0.021197 0.8055 0.31266 1.5103 0.87438 2.1144 0.56172 0.60414 1.2665 0.8956 2.1144 0.87438zm4.4695 0.2115 3.681 3.6819-1.259 1.284-3.6817-3.7 0.0019784-0.69479-0.090043-0.098846c-0.87973 0.76087-1.92 1.1413-3.1207 1.1413-1.3553 0-2.5025-0.46363-3.4417-1.3909s-1.4088-2.0686-1.4088-3.4239c0-1.3553 0.4696-2.4966 1.4088-3.4239 0.9392-0.92727 2.0864-1.3969 3.4417-1.4088 1.3553-0.011889 2.4906 0.45771 3.406 1.4088 0.9154 0.95107 1.379 2.0924 1.3909 3.4239 0 1.2126-0.38043 2.2588-1.1413 3.1385l0.098834 0.090049z"></path></svg> </button> </form> <script> var form = document.getElementById('search-form'); form.onsubmit = function(e) { e.preventDefault(); var query = document.getElementById('search-input').value; window.location.href = 
'/search.html?query=' + query; return false } </script> </div> <div class='k-main-inner' id='k-main-id'> <div class='k-location-slug'> <span class="k-location-slug-pointer">►</span> <a href='/examples/'>Code examples</a> / <a href='/examples/vision/'>Computer Vision</a> / Highly accurate boundaries segmentation using BASNet </div> <div class='k-content'> <h1 id="highly-accurate-boundaries-segmentation-using-basnet">Highly accurate boundaries segmentation using BASNet</h1> <p><strong>Author:</strong> <a href="https://github.com/hamidriasat">Hamid Ali</a><br> <strong>Date created:</strong> 2023/05/30<br> <strong>Last modified:</strong> 2023/07/13<br> <strong>Description:</strong> Boundary-aware segmentation model trained on the DUTS dataset.</p> <div class='example_version_banner keras_2'>ⓘ This example uses Keras 2</div> <p><img class="k-inline-icon" src="https://colab.research.google.com/img/colab_favicon.ico"/> <a href="https://colab.research.google.com/github/keras-team/keras-io/blob/master/examples/vision/ipynb/basnet_segmentation.ipynb"><strong>View in Colab</strong></a> <span class="k-dot">•</span><img class="k-inline-icon" src="https://github.com/favicon.ico"/> <a href="https://github.com/keras-team/keras-io/blob/master/examples/vision/basnet_segmentation.py"><strong>GitHub source</strong></a></p> <hr /> <h2 id="introduction">Introduction</h2> <p>Deep semantic segmentation algorithms have improved significantly in recent years, but they still fail to correctly predict pixels around object boundaries.
In this example we implement <strong>Boundary-Aware Segmentation Network (BASNet)</strong>, which uses a two-stage predict-and-refine architecture together with a hybrid loss to predict highly accurate boundaries and fine structures for image segmentation.</p> <h3 id="references">References:</h3> <ul> <li><a href="https://arxiv.org/abs/2101.04704">Boundary-Aware Segmentation Network for Mobile and Web Applications</a></li> <li><a href="https://github.com/hamidriasat/BASNet/tree/basnet_keras">BASNet Keras Implementation</a></li> <li><a href="https://openaccess.thecvf.com/content_cvpr_2017/html/Wang_Learning_to_Detect_CVPR_2017_paper.html">Learning to Detect Salient Objects with Image-level Supervision</a></li> </ul> <hr /> <h2 id="download-the-data">Download the Data</h2> <p>We will use the <a href="http://saliencydetection.net/duts/">DUTS-TE</a> dataset for training. It has 5,019 images, but we will use only 140 of them for training and validation to save notebook running time. DUTS is a relatively large salient object segmentation dataset containing diversified textures and structures common to real-world images in both foreground and background.</p> <div class="codehilite"><pre><span></span><code><span class="err">!</span><span class="n">wget</span> <span class="n">http</span><span class="p">:</span><span class="o">//</span><span class="n">saliencydetection</span><span class="o">.</span><span class="n">net</span><span class="o">/</span><span class="n">duts</span><span class="o">/</span><span class="n">download</span><span class="o">/</span><span class="n">DUTS</span><span class="o">-</span><span class="n">TE</span><span class="o">.</span><span class="n">zip</span> <span class="err">!</span><span class="n">unzip</span> <span class="o">-</span><span class="n">q</span> <span class="n">DUTS</span><span class="o">-</span><span class="n">TE</span><span class="o">.</span><span class="n">zip</span> </code></pre></div> <div class="k-default-codeblock"> <div class="codehilite"><pre><span></span><code>--2023-08-06 19:07:37-- http://saliencydetection.net/duts/download/DUTS-TE.zip Resolving saliencydetection.net (saliencydetection.net)... 36.55.239.177 Connecting to saliencydetection.net (saliencydetection.net)|36.55.239.177|:80... connected. HTTP request sent, awaiting response...
200 OK Length: 139799089 (133M) [application/zip] Saving to: ‘DUTS-TE.zip’ </code></pre></div> </div> <div class="k-default-codeblock"> <div class="codehilite"><pre><span></span><code>DUTS-TE.zip 100%[===================&gt;] 133.32M 1.76MB/s in 77s </code></pre></div> </div> <div class="k-default-codeblock"> <div class="codehilite"><pre><span></span><code>2023-08-06 19:08:55 (1.73 MB/s) - ‘DUTS-TE.zip’ saved [139799089/139799089] </code></pre></div> </div> <div class="codehilite"><pre><span></span><code><span class="kn">import</span> <span class="nn">os</span> <span class="kn">import</span> <span class="nn">numpy</span> <span class="k">as</span> <span class="nn">np</span> <span class="kn">from</span> <span class="nn">glob</span> <span class="kn">import</span> <span class="n">glob</span> <span class="kn">import</span> <span class="nn">matplotlib.pyplot</span> <span class="k">as</span> <span class="nn">plt</span> <span class="kn">import</span> <span class="nn">keras_cv</span> <span class="kn">import</span> <span class="nn">tensorflow</span> <span class="k">as</span> <span class="nn">tf</span> <span class="kn">from</span> <span class="nn">tensorflow</span> <span class="kn">import</span> <span class="n">keras</span> <span class="kn">from</span> <span class="nn">tensorflow.keras</span> <span class="kn">import</span> <span class="n">layers</span><span class="p">,</span> <span class="n">backend</span> </code></pre></div> <div class="k-default-codeblock"> <div class="codehilite"><pre><span></span><code>Using TensorFlow backend </code></pre></div> </div> <hr /> <h2 id="define-hyperparameters">Define Hyperparameters</h2> <div class="codehilite"><pre><span></span><code><span class="n">IMAGE_SIZE</span> <span class="o">=</span> <span class="mi">288</span> <span class="n">BATCH_SIZE</span> <span class="o">=</span> <span class="mi">4</span> <span class="n">OUT_CLASSES</span> <span class="o">=</span> <span class="mi">1</span> <span class="n">TRAIN_SPLIT_RATIO</span> <span 
class="o">=</span> <span class="mf">0.90</span> <span class="n">DATA_DIR</span> <span class="o">=</span> <span class="s2">&quot;./DUTS-TE/&quot;</span> </code></pre></div> <hr /> <h2 id="create-tensorflow-dataset">Create TensorFlow Dataset</h2> <p>We will use <code>load_paths()</code> to load and split the 140 paths into train and validation sets, and <code>load_dataset()</code> to convert the paths into a <a href="https://www.tensorflow.org/api_docs/python/tf/data/Dataset"><code>tf.data.Dataset</code></a> object.</p> <div class="codehilite"><pre><span></span><code><span class="k">def</span> <span class="nf">load_paths</span><span class="p">(</span><span class="n">path</span><span class="p">,</span> <span class="n">split_ratio</span><span class="p">):</span> <span class="n">images</span> <span class="o">=</span> <span class="nb">sorted</span><span class="p">(</span><span class="n">glob</span><span class="p">(</span><span class="n">os</span><span class="o">.</span><span class="n">path</span><span class="o">.</span><span class="n">join</span><span class="p">(</span><span class="n">path</span><span class="p">,</span> <span class="s2">&quot;DUTS-TE-Image/*&quot;</span><span class="p">)))[:</span><span class="mi">140</span><span class="p">]</span> <span class="n">masks</span> <span class="o">=</span> <span class="nb">sorted</span><span class="p">(</span><span class="n">glob</span><span class="p">(</span><span class="n">os</span><span class="o">.</span><span class="n">path</span><span class="o">.</span><span class="n">join</span><span class="p">(</span><span class="n">path</span><span class="p">,</span> <span class="s2">&quot;DUTS-TE-Mask/*&quot;</span><span class="p">)))[:</span><span class="mi">140</span><span class="p">]</span> <span class="n">len_</span> <span class="o">=</span> <span class="nb">int</span><span class="p">(</span><span class="nb">len</span><span class="p">(</span><span class="n">images</span><span class="p">)</span> <span class="o">*</span> <span
class="n">split_ratio</span><span class="p">)</span> <span class="k">return</span> <span class="p">(</span><span class="n">images</span><span class="p">[:</span><span class="n">len_</span><span class="p">],</span> <span class="n">masks</span><span class="p">[:</span><span class="n">len_</span><span class="p">]),</span> <span class="p">(</span><span class="n">images</span><span class="p">[</span><span class="n">len_</span><span class="p">:],</span> <span class="n">masks</span><span class="p">[</span><span class="n">len_</span><span class="p">:])</span> <span class="k">def</span> <span class="nf">read_image</span><span class="p">(</span><span class="n">path</span><span class="p">,</span> <span class="n">size</span><span class="p">,</span> <span class="n">mode</span><span class="p">):</span> <span class="n">x</span> <span class="o">=</span> <span class="n">keras</span><span class="o">.</span><span class="n">utils</span><span class="o">.</span><span class="n">load_img</span><span class="p">(</span><span class="n">path</span><span class="p">,</span> <span class="n">target_size</span><span class="o">=</span><span class="n">size</span><span class="p">,</span> <span class="n">color_mode</span><span class="o">=</span><span class="n">mode</span><span class="p">)</span> <span class="n">x</span> <span class="o">=</span> <span class="n">keras</span><span class="o">.</span><span class="n">utils</span><span class="o">.</span><span class="n">img_to_array</span><span class="p">(</span><span class="n">x</span><span class="p">)</span> <span class="n">x</span> <span class="o">=</span> <span class="p">(</span><span class="n">x</span> <span class="o">/</span> <span class="mf">255.0</span><span class="p">)</span><span class="o">.</span><span class="n">astype</span><span class="p">(</span><span class="n">np</span><span class="o">.</span><span class="n">float32</span><span class="p">)</span> <span class="k">return</span> <span class="n">x</span> <span class="k">def</span> <span 
class="nf">preprocess</span><span class="p">(</span><span class="n">x_batch</span><span class="p">,</span> <span class="n">y_batch</span><span class="p">,</span> <span class="n">img_size</span><span class="p">,</span> <span class="n">out_classes</span><span class="p">):</span> <span class="k">def</span> <span class="nf">f</span><span class="p">(</span><span class="n">_x</span><span class="p">,</span> <span class="n">_y</span><span class="p">):</span> <span class="n">_x</span><span class="p">,</span> <span class="n">_y</span> <span class="o">=</span> <span class="n">_x</span><span class="o">.</span><span class="n">decode</span><span class="p">(),</span> <span class="n">_y</span><span class="o">.</span><span class="n">decode</span><span class="p">()</span> <span class="n">_x</span> <span class="o">=</span> <span class="n">read_image</span><span class="p">(</span><span class="n">_x</span><span class="p">,</span> <span class="p">(</span><span class="n">img_size</span><span class="p">,</span> <span class="n">img_size</span><span class="p">),</span> <span class="n">mode</span><span class="o">=</span><span class="s2">&quot;rgb&quot;</span><span class="p">)</span> <span class="c1"># image</span> <span class="n">_y</span> <span class="o">=</span> <span class="n">read_image</span><span class="p">(</span><span class="n">_y</span><span class="p">,</span> <span class="p">(</span><span class="n">img_size</span><span class="p">,</span> <span class="n">img_size</span><span class="p">),</span> <span class="n">mode</span><span class="o">=</span><span class="s2">&quot;grayscale&quot;</span><span class="p">)</span> <span class="c1"># mask</span> <span class="k">return</span> <span class="n">_x</span><span class="p">,</span> <span class="n">_y</span> <span class="n">images</span><span class="p">,</span> <span class="n">masks</span> <span class="o">=</span> <span class="n">tf</span><span class="o">.</span><span class="n">numpy_function</span><span class="p">(</span><span 
class="n">f</span><span class="p">,</span> <span class="p">[</span><span class="n">x_batch</span><span class="p">,</span> <span class="n">y_batch</span><span class="p">],</span> <span class="p">[</span><span class="n">tf</span><span class="o">.</span><span class="n">float32</span><span class="p">,</span> <span class="n">tf</span><span class="o">.</span><span class="n">float32</span><span class="p">])</span> <span class="n">images</span><span class="o">.</span><span class="n">set_shape</span><span class="p">([</span><span class="n">img_size</span><span class="p">,</span> <span class="n">img_size</span><span class="p">,</span> <span class="mi">3</span><span class="p">])</span> <span class="n">masks</span><span class="o">.</span><span class="n">set_shape</span><span class="p">([</span><span class="n">img_size</span><span class="p">,</span> <span class="n">img_size</span><span class="p">,</span> <span class="n">out_classes</span><span class="p">])</span> <span class="k">return</span> <span class="n">images</span><span class="p">,</span> <span class="n">masks</span> <span class="k">def</span> <span class="nf">load_dataset</span><span class="p">(</span><span class="n">image_paths</span><span class="p">,</span> <span class="n">mask_paths</span><span class="p">,</span> <span class="n">img_size</span><span class="p">,</span> <span class="n">out_classes</span><span class="p">,</span> <span class="n">batch</span><span class="p">,</span> <span class="n">shuffle</span><span class="o">=</span><span class="kc">True</span><span class="p">):</span> <span class="n">dataset</span> <span class="o">=</span> <span class="n">tf</span><span class="o">.</span><span class="n">data</span><span class="o">.</span><span class="n">Dataset</span><span class="o">.</span><span class="n">from_tensor_slices</span><span class="p">((</span><span class="n">image_paths</span><span class="p">,</span> <span class="n">mask_paths</span><span class="p">))</span> <span class="k">if</span> <span 
class="n">shuffle</span><span class="p">:</span> <span class="n">dataset</span> <span class="o">=</span> <span class="n">dataset</span><span class="o">.</span><span class="n">cache</span><span class="p">()</span><span class="o">.</span><span class="n">shuffle</span><span class="p">(</span><span class="n">buffer_size</span><span class="o">=</span><span class="mi">1000</span><span class="p">)</span> <span class="n">dataset</span> <span class="o">=</span> <span class="n">dataset</span><span class="o">.</span><span class="n">map</span><span class="p">(</span> <span class="k">lambda</span> <span class="n">x</span><span class="p">,</span> <span class="n">y</span><span class="p">:</span> <span class="n">preprocess</span><span class="p">(</span><span class="n">x</span><span class="p">,</span> <span class="n">y</span><span class="p">,</span> <span class="n">img_size</span><span class="p">,</span> <span class="n">out_classes</span><span class="p">),</span> <span class="n">num_parallel_calls</span><span class="o">=</span><span class="n">tf</span><span class="o">.</span><span class="n">data</span><span class="o">.</span><span class="n">AUTOTUNE</span><span class="p">,</span> <span class="p">)</span> <span class="n">dataset</span> <span class="o">=</span> <span class="n">dataset</span><span class="o">.</span><span class="n">batch</span><span class="p">(</span><span class="n">batch</span><span class="p">)</span> <span class="n">dataset</span> <span class="o">=</span> <span class="n">dataset</span><span class="o">.</span><span class="n">prefetch</span><span class="p">(</span><span class="n">tf</span><span class="o">.</span><span class="n">data</span><span class="o">.</span><span class="n">AUTOTUNE</span><span class="p">)</span> <span class="k">return</span> <span class="n">dataset</span> <span class="n">train_paths</span><span class="p">,</span> <span class="n">val_paths</span> <span class="o">=</span> <span class="n">load_paths</span><span class="p">(</span><span 
class="n">DATA_DIR</span><span class="p">,</span> <span class="n">TRAIN_SPLIT_RATIO</span><span class="p">)</span> <span class="n">train_dataset</span> <span class="o">=</span> <span class="n">load_dataset</span><span class="p">(</span> <span class="n">train_paths</span><span class="p">[</span><span class="mi">0</span><span class="p">],</span> <span class="n">train_paths</span><span class="p">[</span><span class="mi">1</span><span class="p">],</span> <span class="n">IMAGE_SIZE</span><span class="p">,</span> <span class="n">OUT_CLASSES</span><span class="p">,</span> <span class="n">BATCH_SIZE</span><span class="p">,</span> <span class="n">shuffle</span><span class="o">=</span><span class="kc">True</span> <span class="p">)</span> <span class="n">val_dataset</span> <span class="o">=</span> <span class="n">load_dataset</span><span class="p">(</span> <span class="n">val_paths</span><span class="p">[</span><span class="mi">0</span><span class="p">],</span> <span class="n">val_paths</span><span class="p">[</span><span class="mi">1</span><span class="p">],</span> <span class="n">IMAGE_SIZE</span><span class="p">,</span> <span class="n">OUT_CLASSES</span><span class="p">,</span> <span class="n">BATCH_SIZE</span><span class="p">,</span> <span class="n">shuffle</span><span class="o">=</span><span class="kc">False</span> <span class="p">)</span> <span class="nb">print</span><span class="p">(</span><span class="sa">f</span><span class="s2">&quot;Train Dataset: </span><span class="si">{</span><span class="n">train_dataset</span><span class="si">}</span><span class="s2">&quot;</span><span class="p">)</span> <span class="nb">print</span><span class="p">(</span><span class="sa">f</span><span class="s2">&quot;Validation Dataset: </span><span class="si">{</span><span class="n">val_dataset</span><span class="si">}</span><span class="s2">&quot;</span><span class="p">)</span> </code></pre></div> <div class="k-default-codeblock"> <div class="codehilite"><pre><span></span><code>Train 
Dataset: &lt;_PrefetchDataset element_spec=(TensorSpec(shape=(None, 288, 288, 3), dtype=tf.float32, name=None), TensorSpec(shape=(None, 288, 288, 1), dtype=tf.float32, name=None))&gt; Validation Dataset: &lt;_PrefetchDataset element_spec=(TensorSpec(shape=(None, 288, 288, 3), dtype=tf.float32, name=None), TensorSpec(shape=(None, 288, 288, 1), dtype=tf.float32, name=None))&gt; </code></pre></div> </div> <hr /> <h2 id="visualize-data">Visualize Data</h2> <div class="codehilite"><pre><span></span><code><span class="k">def</span> <span class="nf">display</span><span class="p">(</span><span class="n">display_list</span><span class="p">):</span> <span class="n">title</span> <span class="o">=</span> <span class="p">[</span><span class="s2">&quot;Input Image&quot;</span><span class="p">,</span> <span class="s2">&quot;True Mask&quot;</span><span class="p">,</span> <span class="s2">&quot;Predicted Mask&quot;</span><span class="p">]</span> <span class="k">for</span> <span class="n">i</span> <span class="ow">in</span> <span class="nb">range</span><span class="p">(</span><span class="nb">len</span><span class="p">(</span><span class="n">display_list</span><span class="p">)):</span> <span class="n">plt</span><span class="o">.</span><span class="n">subplot</span><span class="p">(</span><span class="mi">1</span><span class="p">,</span> <span class="nb">len</span><span class="p">(</span><span class="n">display_list</span><span class="p">),</span> <span class="n">i</span> <span class="o">+</span> <span class="mi">1</span><span class="p">)</span> <span class="n">plt</span><span class="o">.</span><span class="n">title</span><span class="p">(</span><span class="n">title</span><span class="p">[</span><span class="n">i</span><span class="p">])</span> <span class="n">plt</span><span class="o">.</span><span class="n">imshow</span><span class="p">(</span><span class="n">keras</span><span class="o">.</span><span class="n">utils</span><span class="o">.</span><span 
class="n">array_to_img</span><span class="p">(</span><span class="n">display_list</span><span class="p">[</span><span class="n">i</span><span class="p">]),</span> <span class="n">cmap</span><span class="o">=</span><span class="s2">&quot;gray&quot;</span><span class="p">)</span> <span class="n">plt</span><span class="o">.</span><span class="n">axis</span><span class="p">(</span><span class="s2">&quot;off&quot;</span><span class="p">)</span> <span class="n">plt</span><span class="o">.</span><span class="n">show</span><span class="p">()</span> <span class="k">for</span> <span class="n">image</span><span class="p">,</span> <span class="n">mask</span> <span class="ow">in</span> <span class="n">val_dataset</span><span class="o">.</span><span class="n">take</span><span class="p">(</span><span class="mi">1</span><span class="p">):</span> <span class="n">display</span><span class="p">([</span><span class="n">image</span><span class="p">[</span><span class="mi">0</span><span class="p">],</span> <span class="n">mask</span><span class="p">[</span><span class="mi">0</span><span class="p">]])</span> </code></pre></div> <p><img alt="png" src="/img/examples/vision/basnet_segmentation/basnet_segmentation_10_0.png" /></p> <hr /> <h2 id="analyze-mask">Analyze Mask</h2> <p>Let's print the unique values of the mask displayed above. You can see that despite belonging to one class, its intensity varies from low (0) to high (255). This variation in intensity makes it hard for the network to generate a good segmentation map for <strong>salient or camouflaged object segmentation</strong>.
Because of its Residual Refinement Module (RRM), BASNet is good at generating highly accurate boundaries and fine structures.</p> <div class="codehilite"><pre><span></span><code><span class="nb">print</span><span class="p">(</span><span class="sa">f</span><span class="s2">&quot;Unique values count: </span><span class="si">{</span><span class="nb">len</span><span class="p">(</span><span class="n">np</span><span class="o">.</span><span class="n">unique</span><span class="p">((</span><span class="n">mask</span><span class="p">[</span><span class="mi">0</span><span class="p">]</span><span class="w"> </span><span class="o">*</span><span class="w"> </span><span class="mi">255</span><span class="p">)))</span><span class="si">}</span><span class="s2">&quot;</span><span class="p">)</span> <span class="nb">print</span><span class="p">(</span><span class="s2">&quot;Unique values:&quot;</span><span class="p">)</span> <span class="nb">print</span><span class="p">(</span><span class="n">np</span><span class="o">.</span><span class="n">unique</span><span class="p">((</span><span class="n">mask</span><span class="p">[</span><span class="mi">0</span><span class="p">]</span> <span class="o">*</span> <span class="mi">255</span><span class="p">))</span><span class="o">.</span><span class="n">astype</span><span class="p">(</span><span class="nb">int</span><span class="p">))</span> </code></pre></div> <div class="k-default-codeblock"> <div class="codehilite"><pre><span></span><code>Unique values count: 245 Unique values: [ 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 61 62 63 65 66 67 68 69 70 71 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 108 109 110 111 112 113 114 115 116 117 118 119 120 122 123 124 125 128 129 130 131 132 133 134 135 136 137 138 139 140 141 142 144 145 146 147 148 149 150 151 152 153
154 155 156 157 158 159 160 161 162 163 164 165 166 167 168 170 171 172 173 174 175 176 177 178 179 180 181 182 183 184 185 186 187 188 189 190 191 192 193 194 195 196 197 198 199 201 202 203 204 205 206 207 208 209 210 211 212 213 214 215 216 217 218 219 220 221 222 223 224 225 226 227 228 229 230 231 232 233 234 235 236 237 238 239 240 241 242 243 244 245 246 247 248 249 250 251 252 253 254 255] </code></pre></div> </div> <hr /> <h2 id="building-the-basnet-model">Building the BASNet Model</h2> <p>BASNet comprises a predict-refine architecture and a hybrid loss. The predict-refine architecture consists of a densely supervised encoder-decoder network and a residual refinement module, which are respectively used to predict and refine a segmentation probability map.</p> <p><img alt="" src="https://i.imgur.com/8jaZ2qs.png" /></p> <div class="codehilite"><pre><span></span><code><span class="k">def</span> <span class="nf">basic_block</span><span class="p">(</span><span class="n">x_input</span><span class="p">,</span> <span class="n">filters</span><span class="p">,</span> <span class="n">stride</span><span class="o">=</span><span class="mi">1</span><span class="p">,</span> <span class="n">down_sample</span><span class="o">=</span><span class="kc">None</span><span class="p">,</span> <span class="n">activation</span><span class="o">=</span><span class="kc">None</span><span class="p">):</span> <span class="w"> </span><span class="sd">&quot;&quot;&quot;Creates a residual (identity) block with two 3*3 convolutions.&quot;&quot;&quot;</span> <span class="n">residual</span> <span class="o">=</span> <span class="n">x_input</span> <span class="n">x</span> <span class="o">=</span> <span class="n">layers</span><span class="o">.</span><span class="n">Conv2D</span><span class="p">(</span><span class="n">filters</span><span class="p">,</span> <span class="p">(</span><span class="mi">3</span><span class="p">,</span> <span class="mi">3</span><span class="p">),</span> <span
class="n">strides</span><span class="o">=</span><span class="n">stride</span><span class="p">,</span> <span class="n">padding</span><span class="o">=</span><span class="s2">&quot;same&quot;</span><span class="p">,</span> <span class="n">use_bias</span><span class="o">=</span><span class="kc">False</span><span class="p">)(</span> <span class="n">x_input</span> <span class="p">)</span> <span class="n">x</span> <span class="o">=</span> <span class="n">layers</span><span class="o">.</span><span class="n">BatchNormalization</span><span class="p">()(</span><span class="n">x</span><span class="p">)</span> <span class="n">x</span> <span class="o">=</span> <span class="n">layers</span><span class="o">.</span><span class="n">Activation</span><span class="p">(</span><span class="s2">&quot;relu&quot;</span><span class="p">)(</span><span class="n">x</span><span class="p">)</span> <span class="n">x</span> <span class="o">=</span> <span class="n">layers</span><span class="o">.</span><span class="n">Conv2D</span><span class="p">(</span><span class="n">filters</span><span class="p">,</span> <span class="p">(</span><span class="mi">3</span><span class="p">,</span> <span class="mi">3</span><span class="p">),</span> <span class="n">strides</span><span class="o">=</span><span class="p">(</span><span class="mi">1</span><span class="p">,</span> <span class="mi">1</span><span class="p">),</span> <span class="n">padding</span><span class="o">=</span><span class="s2">&quot;same&quot;</span><span class="p">,</span> <span class="n">use_bias</span><span class="o">=</span><span class="kc">False</span><span class="p">)(</span> <span class="n">x</span> <span class="p">)</span> <span class="n">x</span> <span class="o">=</span> <span class="n">layers</span><span class="o">.</span><span class="n">BatchNormalization</span><span class="p">()(</span><span class="n">x</span><span class="p">)</span> <span class="k">if</span> <span class="n">down_sample</span> <span class="ow">is</span> <span 
class="ow">not</span> <span class="kc">None</span><span class="p">:</span> <span class="n">residual</span> <span class="o">=</span> <span class="n">down_sample</span> <span class="n">x</span> <span class="o">=</span> <span class="n">layers</span><span class="o">.</span><span class="n">Add</span><span class="p">()([</span><span class="n">x</span><span class="p">,</span> <span class="n">residual</span><span class="p">])</span> <span class="k">if</span> <span class="n">activation</span> <span class="ow">is</span> <span class="ow">not</span> <span class="kc">None</span><span class="p">:</span> <span class="n">x</span> <span class="o">=</span> <span class="n">layers</span><span class="o">.</span><span class="n">Activation</span><span class="p">(</span><span class="n">activation</span><span class="p">)(</span><span class="n">x</span><span class="p">)</span> <span class="k">return</span> <span class="n">x</span> <span class="k">def</span> <span class="nf">convolution_block</span><span class="p">(</span><span class="n">x_input</span><span class="p">,</span> <span class="n">filters</span><span class="p">,</span> <span class="n">dilation</span><span class="o">=</span><span class="mi">1</span><span class="p">):</span> <span class="w"> </span><span class="sd">&quot;&quot;&quot;Apply convolution + batch normalization + relu layer.&quot;&quot;&quot;</span> <span class="n">x</span> <span class="o">=</span> <span class="n">layers</span><span class="o">.</span><span class="n">Conv2D</span><span class="p">(</span><span class="n">filters</span><span class="p">,</span> <span class="p">(</span><span class="mi">3</span><span class="p">,</span> <span class="mi">3</span><span class="p">),</span> <span class="n">padding</span><span class="o">=</span><span class="s2">&quot;same&quot;</span><span class="p">,</span> <span class="n">dilation_rate</span><span class="o">=</span><span class="n">dilation</span><span class="p">)(</span><span class="n">x_input</span><span class="p">)</span> <span 
class="n">x</span> <span class="o">=</span> <span class="n">layers</span><span class="o">.</span><span class="n">BatchNormalization</span><span class="p">()(</span><span class="n">x</span><span class="p">)</span> <span class="k">return</span> <span class="n">layers</span><span class="o">.</span><span class="n">Activation</span><span class="p">(</span><span class="s2">&quot;relu&quot;</span><span class="p">)(</span><span class="n">x</span><span class="p">)</span> <span class="k">def</span> <span class="nf">segmentation_head</span><span class="p">(</span><span class="n">x_input</span><span class="p">,</span> <span class="n">out_classes</span><span class="p">,</span> <span class="n">final_size</span><span class="p">):</span> <span class="w"> </span><span class="sd">&quot;&quot;&quot;Map each decoder stage output to model output classes.&quot;&quot;&quot;</span> <span class="n">x</span> <span class="o">=</span> <span class="n">layers</span><span class="o">.</span><span class="n">Conv2D</span><span class="p">(</span><span class="n">out_classes</span><span class="p">,</span> <span class="n">kernel_size</span><span class="o">=</span><span class="p">(</span><span class="mi">3</span><span class="p">,</span> <span class="mi">3</span><span class="p">),</span> <span class="n">padding</span><span class="o">=</span><span class="s2">&quot;same&quot;</span><span class="p">)(</span><span class="n">x_input</span><span class="p">)</span> <span class="k">if</span> <span class="n">final_size</span> <span class="ow">is</span> <span class="ow">not</span> <span class="kc">None</span><span class="p">:</span> <span class="n">x</span> <span class="o">=</span> <span class="n">layers</span><span class="o">.</span><span class="n">Resizing</span><span class="p">(</span><span class="n">final_size</span><span class="p">[</span><span class="mi">0</span><span class="p">],</span> <span class="n">final_size</span><span class="p">[</span><span class="mi">1</span><span class="p">])(</span><span 
class="n">x</span><span class="p">)</span> <span class="k">return</span> <span class="n">x</span> <span class="k">def</span> <span class="nf">get_resnet_block</span><span class="p">(</span><span class="n">_resnet</span><span class="p">,</span> <span class="n">block_num</span><span class="p">):</span> <span class="w"> </span><span class="sd">&quot;&quot;&quot;Extract and return ResNet-34 block.&quot;&quot;&quot;</span> <span class="n">resnet_layers</span> <span class="o">=</span> <span class="p">[</span><span class="mi">3</span><span class="p">,</span> <span class="mi">4</span><span class="p">,</span> <span class="mi">6</span><span class="p">,</span> <span class="mi">3</span><span class="p">]</span> <span class="c1"># ResNet-34 layer sizes at different block.</span> <span class="k">return</span> <span class="n">keras</span><span class="o">.</span><span class="n">models</span><span class="o">.</span><span class="n">Model</span><span class="p">(</span> <span class="n">inputs</span><span class="o">=</span><span class="n">_resnet</span><span class="o">.</span><span class="n">get_layer</span><span class="p">(</span><span class="sa">f</span><span class="s2">&quot;v2_stack_</span><span class="si">{</span><span class="n">block_num</span><span class="si">}</span><span class="s2">_block1_1_conv&quot;</span><span class="p">)</span><span class="o">.</span><span class="n">input</span><span class="p">,</span> <span class="n">outputs</span><span class="o">=</span><span class="n">_resnet</span><span class="o">.</span><span class="n">get_layer</span><span class="p">(</span> <span class="sa">f</span><span class="s2">&quot;v2_stack_</span><span class="si">{</span><span class="n">block_num</span><span class="si">}</span><span class="s2">_block</span><span class="si">{</span><span class="n">resnet_layers</span><span class="p">[</span><span class="n">block_num</span><span class="p">]</span><span class="si">}</span><span class="s2">_add&quot;</span> <span class="p">)</span><span 
class="o">.</span><span class="n">output</span><span class="p">,</span> <span class="n">name</span><span class="o">=</span><span class="sa">f</span><span class="s2">&quot;resnet34_block</span><span class="si">{</span><span class="n">block_num</span><span class="w"> </span><span class="o">+</span><span class="w"> </span><span class="mi">1</span><span class="si">}</span><span class="s2">&quot;</span><span class="p">,</span> <span class="p">)</span> </code></pre></div> <hr /> <h2 id="prediction-module">Prediction Module</h2> <p>The prediction module is a heavy encoder-decoder structure similar to U-Net. The encoder includes an input convolutional layer and six stages: the first four are adopted from ResNet-34, and the rest are basic res-blocks. Since the first convolution and pooling layer of ResNet-34 are skipped, we use <code>get_resnet_block()</code> to extract the first four blocks. Both the bridge and the decoder use three convolutional layers with side outputs. The module produces seven segmentation probability maps during training, with the last one considered the final output.</p> <div class="codehilite"><pre><span></span><code><span class="k">def</span> <span class="nf">basnet_predict</span><span class="p">(</span><span class="n">input_shape</span><span class="p">,</span> <span class="n">out_classes</span><span class="p">):</span> <span class="w"> </span><span class="sd">&quot;&quot;&quot;BASNet Prediction Module; it outputs a coarse label map.&quot;&quot;&quot;</span> <span class="n">filters</span> <span class="o">=</span> <span class="mi">64</span> <span class="n">num_stages</span> <span class="o">=</span> <span class="mi">6</span> <span class="n">x_input</span> <span class="o">=</span> <span class="n">layers</span><span class="o">.</span><span class="n">Input</span><span class="p">(</span><span class="n">input_shape</span><span class="p">)</span> <span class="c1"># -------------Encoder--------------</span> <span class="n">x</span> <span class="o">=</span> <span
class="n">layers</span><span class="o">.</span><span class="n">Conv2D</span><span class="p">(</span><span class="n">filters</span><span class="p">,</span> <span class="n">kernel_size</span><span class="o">=</span><span class="p">(</span><span class="mi">3</span><span class="p">,</span> <span class="mi">3</span><span class="p">),</span> <span class="n">padding</span><span class="o">=</span><span class="s2">&quot;same&quot;</span><span class="p">)(</span><span class="n">x_input</span><span class="p">)</span> <span class="n">resnet</span> <span class="o">=</span> <span class="n">keras_cv</span><span class="o">.</span><span class="n">models</span><span class="o">.</span><span class="n">ResNet34Backbone</span><span class="p">(</span> <span class="n">include_rescaling</span><span class="o">=</span><span class="kc">False</span><span class="p">,</span> <span class="p">)</span> <span class="n">encoder_blocks</span> <span class="o">=</span> <span class="p">[]</span> <span class="k">for</span> <span class="n">i</span> <span class="ow">in</span> <span class="nb">range</span><span class="p">(</span><span class="n">num_stages</span><span class="p">):</span> <span class="k">if</span> <span class="n">i</span> <span class="o">&lt;</span> <span class="mi">4</span><span class="p">:</span> <span class="c1"># First four stages are adopted from ResNet-34 blocks.</span> <span class="n">x</span> <span class="o">=</span> <span class="n">get_resnet_block</span><span class="p">(</span><span class="n">resnet</span><span class="p">,</span> <span class="n">i</span><span class="p">)(</span><span class="n">x</span><span class="p">)</span> <span class="n">encoder_blocks</span><span class="o">.</span><span class="n">append</span><span class="p">(</span><span class="n">x</span><span class="p">)</span> <span class="n">x</span> <span class="o">=</span> <span class="n">layers</span><span class="o">.</span><span class="n">Activation</span><span class="p">(</span><span 
class="s2">&quot;relu&quot;</span><span class="p">)(</span><span class="n">x</span><span class="p">)</span> <span class="k">else</span><span class="p">:</span> <span class="c1"># Last 2 stages consist of three basic resnet blocks.</span> <span class="n">x</span> <span class="o">=</span> <span class="n">layers</span><span class="o">.</span><span class="n">MaxPool2D</span><span class="p">(</span><span class="n">pool_size</span><span class="o">=</span><span class="p">(</span><span class="mi">2</span><span class="p">,</span> <span class="mi">2</span><span class="p">),</span> <span class="n">strides</span><span class="o">=</span><span class="p">(</span><span class="mi">2</span><span class="p">,</span> <span class="mi">2</span><span class="p">))(</span><span class="n">x</span><span class="p">)</span> <span class="n">x</span> <span class="o">=</span> <span class="n">basic_block</span><span class="p">(</span><span class="n">x</span><span class="p">,</span> <span class="n">filters</span><span class="o">=</span><span class="n">filters</span> <span class="o">*</span> <span class="mi">8</span><span class="p">,</span> <span class="n">activation</span><span class="o">=</span><span class="s2">&quot;relu&quot;</span><span class="p">)</span> <span class="n">x</span> <span class="o">=</span> <span class="n">basic_block</span><span class="p">(</span><span class="n">x</span><span class="p">,</span> <span class="n">filters</span><span class="o">=</span><span class="n">filters</span> <span class="o">*</span> <span class="mi">8</span><span class="p">,</span> <span class="n">activation</span><span class="o">=</span><span class="s2">&quot;relu&quot;</span><span class="p">)</span> <span class="n">x</span> <span class="o">=</span> <span class="n">basic_block</span><span class="p">(</span><span class="n">x</span><span class="p">,</span> <span class="n">filters</span><span class="o">=</span><span class="n">filters</span> <span class="o">*</span> <span class="mi">8</span><span 
class="p">,</span> <span class="n">activation</span><span class="o">=</span><span class="s2">&quot;relu&quot;</span><span class="p">)</span> <span class="n">encoder_blocks</span><span class="o">.</span><span class="n">append</span><span class="p">(</span><span class="n">x</span><span class="p">)</span> <span class="c1"># -------------Bridge-------------</span> <span class="n">x</span> <span class="o">=</span> <span class="n">convolution_block</span><span class="p">(</span><span class="n">x</span><span class="p">,</span> <span class="n">filters</span><span class="o">=</span><span class="n">filters</span> <span class="o">*</span> <span class="mi">8</span><span class="p">,</span> <span class="n">dilation</span><span class="o">=</span><span class="mi">2</span><span class="p">)</span> <span class="n">x</span> <span class="o">=</span> <span class="n">convolution_block</span><span class="p">(</span><span class="n">x</span><span class="p">,</span> <span class="n">filters</span><span class="o">=</span><span class="n">filters</span> <span class="o">*</span> <span class="mi">8</span><span class="p">,</span> <span class="n">dilation</span><span class="o">=</span><span class="mi">2</span><span class="p">)</span> <span class="n">x</span> <span class="o">=</span> <span class="n">convolution_block</span><span class="p">(</span><span class="n">x</span><span class="p">,</span> <span class="n">filters</span><span class="o">=</span><span class="n">filters</span> <span class="o">*</span> <span class="mi">8</span><span class="p">,</span> <span class="n">dilation</span><span class="o">=</span><span class="mi">2</span><span class="p">)</span> <span class="n">encoder_blocks</span><span class="o">.</span><span class="n">append</span><span class="p">(</span><span class="n">x</span><span class="p">)</span> <span class="c1"># -------------Decoder-------------</span> <span class="n">decoder_blocks</span> <span class="o">=</span> <span class="p">[]</span> <span class="k">for</span> <span 
class="n">i</span> <span class="ow">in</span> <span class="nb">reversed</span><span class="p">(</span><span class="nb">range</span><span class="p">(</span><span class="n">num_stages</span><span class="p">)):</span> <span class="k">if</span> <span class="n">i</span> <span class="o">!=</span> <span class="p">(</span><span class="n">num_stages</span> <span class="o">-</span> <span class="mi">1</span><span class="p">):</span> <span class="c1"># Except first, scale other decoder stages.</span> <span class="n">shape</span> <span class="o">=</span> <span class="n">keras</span><span class="o">.</span><span class="n">backend</span><span class="o">.</span><span class="n">int_shape</span><span class="p">(</span><span class="n">x</span><span class="p">)</span> <span class="n">x</span> <span class="o">=</span> <span class="n">layers</span><span class="o">.</span><span class="n">Resizing</span><span class="p">(</span><span class="n">shape</span><span class="p">[</span><span class="mi">1</span><span class="p">]</span> <span class="o">*</span> <span class="mi">2</span><span class="p">,</span> <span class="n">shape</span><span class="p">[</span><span class="mi">2</span><span class="p">]</span> <span class="o">*</span> <span class="mi">2</span><span class="p">)(</span><span class="n">x</span><span class="p">)</span> <span class="n">x</span> <span class="o">=</span> <span class="n">layers</span><span class="o">.</span><span class="n">concatenate</span><span class="p">([</span><span class="n">encoder_blocks</span><span class="p">[</span><span class="n">i</span><span class="p">],</span> <span class="n">x</span><span class="p">],</span> <span class="n">axis</span><span class="o">=-</span><span class="mi">1</span><span class="p">)</span> <span class="n">x</span> <span class="o">=</span> <span class="n">convolution_block</span><span class="p">(</span><span class="n">x</span><span class="p">,</span> <span class="n">filters</span><span class="o">=</span><span class="n">filters</span> <span 
class="o">*</span> <span class="mi">8</span><span class="p">)</span> <span class="n">x</span> <span class="o">=</span> <span class="n">convolution_block</span><span class="p">(</span><span class="n">x</span><span class="p">,</span> <span class="n">filters</span><span class="o">=</span><span class="n">filters</span> <span class="o">*</span> <span class="mi">8</span><span class="p">)</span> <span class="n">x</span> <span class="o">=</span> <span class="n">convolution_block</span><span class="p">(</span><span class="n">x</span><span class="p">,</span> <span class="n">filters</span><span class="o">=</span><span class="n">filters</span> <span class="o">*</span> <span class="mi">8</span><span class="p">)</span> <span class="n">decoder_blocks</span><span class="o">.</span><span class="n">append</span><span class="p">(</span><span class="n">x</span><span class="p">)</span> <span class="n">decoder_blocks</span><span class="o">.</span><span class="n">reverse</span><span class="p">()</span> <span class="c1"># Change order from last to first decoder stage.</span> <span class="n">decoder_blocks</span><span class="o">.</span><span class="n">append</span><span class="p">(</span><span class="n">encoder_blocks</span><span class="p">[</span><span class="o">-</span><span class="mi">1</span><span class="p">])</span> <span class="c1"># Copy bridge to decoder.</span> <span class="c1"># -------------Side Outputs--------------</span> <span class="n">decoder_blocks</span> <span class="o">=</span> <span class="p">[</span> <span class="n">segmentation_head</span><span class="p">(</span><span class="n">decoder_block</span><span class="p">,</span> <span class="n">out_classes</span><span class="p">,</span> <span class="n">input_shape</span><span class="p">[:</span><span class="mi">2</span><span class="p">])</span> <span class="k">for</span> <span class="n">decoder_block</span> <span class="ow">in</span> <span class="n">decoder_blocks</span> <span class="p">]</span> <span class="k">return</span> 
<span class="n">keras</span><span class="o">.</span><span class="n">models</span><span class="o">.</span><span class="n">Model</span><span class="p">(</span><span class="n">inputs</span><span class="o">=</span><span class="p">[</span><span class="n">x_input</span><span class="p">],</span> <span class="n">outputs</span><span class="o">=</span><span class="n">decoder_blocks</span><span class="p">)</span> </code></pre></div> <hr /> <h2 id="residual-refinement-module">Residual Refinement Module</h2> <p>The Residual Refinement Module (RRM), designed as a residual block, aims to refine the coarse (blurry and noisy boundaries) segmentation maps generated by the prediction module. Like the prediction module, it is also an encoder-decoder structure, but a lightweight one with four stages, each containing one <code>convolution_block()</code>. At the end, it adds the coarse and residual outputs to generate the refined output.</p> <div class="codehilite"><pre><span></span><code><span class="k">def</span> <span class="nf">basnet_rrm</span><span class="p">(</span><span class="n">base_model</span><span class="p">,</span> <span class="n">out_classes</span><span class="p">):</span> <span class="w"> </span><span class="sd">&quot;&quot;&quot;BASNet Residual Refinement Module (RRM); it outputs a fine label map.&quot;&quot;&quot;</span> <span class="n">num_stages</span> <span class="o">=</span> <span class="mi">4</span> <span class="n">filters</span> <span class="o">=</span> <span class="mi">64</span> <span class="n">x_input</span> <span class="o">=</span> <span class="n">base_model</span><span class="o">.</span><span class="n">output</span><span class="p">[</span><span class="mi">0</span><span class="p">]</span> <span class="c1"># -------------Encoder--------------</span> <span class="n">x</span> <span class="o">=</span> <span class="n">layers</span><span class="o">.</span><span class="n">Conv2D</span><span class="p">(</span><span class="n">filters</span><span class="p">,</span> <span
class="n">kernel_size</span><span class="o">=</span><span class="p">(</span><span class="mi">3</span><span class="p">,</span> <span class="mi">3</span><span class="p">),</span> <span class="n">padding</span><span class="o">=</span><span class="s2">&quot;same&quot;</span><span class="p">)(</span><span class="n">x_input</span><span class="p">)</span> <span class="n">encoder_blocks</span> <span class="o">=</span> <span class="p">[]</span> <span class="k">for</span> <span class="n">_</span> <span class="ow">in</span> <span class="nb">range</span><span class="p">(</span><span class="n">num_stages</span><span class="p">):</span> <span class="n">x</span> <span class="o">=</span> <span class="n">convolution_block</span><span class="p">(</span><span class="n">x</span><span class="p">,</span> <span class="n">filters</span><span class="o">=</span><span class="n">filters</span><span class="p">)</span> <span class="n">encoder_blocks</span><span class="o">.</span><span class="n">append</span><span class="p">(</span><span class="n">x</span><span class="p">)</span> <span class="n">x</span> <span class="o">=</span> <span class="n">layers</span><span class="o">.</span><span class="n">MaxPool2D</span><span class="p">(</span><span class="n">pool_size</span><span class="o">=</span><span class="p">(</span><span class="mi">2</span><span class="p">,</span> <span class="mi">2</span><span class="p">),</span> <span class="n">strides</span><span class="o">=</span><span class="p">(</span><span class="mi">2</span><span class="p">,</span> <span class="mi">2</span><span class="p">))(</span><span class="n">x</span><span class="p">)</span> <span class="c1"># -------------Bridge--------------</span> <span class="n">x</span> <span class="o">=</span> <span class="n">convolution_block</span><span class="p">(</span><span class="n">x</span><span class="p">,</span> <span class="n">filters</span><span class="o">=</span><span class="n">filters</span><span class="p">)</span> <span class="c1"># 
-------------Decoder--------------</span> <span class="k">for</span> <span class="n">i</span> <span class="ow">in</span> <span class="nb">reversed</span><span class="p">(</span><span class="nb">range</span><span class="p">(</span><span class="n">num_stages</span><span class="p">)):</span> <span class="n">shape</span> <span class="o">=</span> <span class="n">keras</span><span class="o">.</span><span class="n">backend</span><span class="o">.</span><span class="n">int_shape</span><span class="p">(</span><span class="n">x</span><span class="p">)</span> <span class="n">x</span> <span class="o">=</span> <span class="n">layers</span><span class="o">.</span><span class="n">Resizing</span><span class="p">(</span><span class="n">shape</span><span class="p">[</span><span class="mi">1</span><span class="p">]</span> <span class="o">*</span> <span class="mi">2</span><span class="p">,</span> <span class="n">shape</span><span class="p">[</span><span class="mi">2</span><span class="p">]</span> <span class="o">*</span> <span class="mi">2</span><span class="p">)(</span><span class="n">x</span><span class="p">)</span> <span class="n">x</span> <span class="o">=</span> <span class="n">layers</span><span class="o">.</span><span class="n">concatenate</span><span class="p">([</span><span class="n">encoder_blocks</span><span class="p">[</span><span class="n">i</span><span class="p">],</span> <span class="n">x</span><span class="p">],</span> <span class="n">axis</span><span class="o">=-</span><span class="mi">1</span><span class="p">)</span> <span class="n">x</span> <span class="o">=</span> <span class="n">convolution_block</span><span class="p">(</span><span class="n">x</span><span class="p">,</span> <span class="n">filters</span><span class="o">=</span><span class="n">filters</span><span class="p">)</span> <span class="n">x</span> <span class="o">=</span> <span class="n">segmentation_head</span><span class="p">(</span><span class="n">x</span><span class="p">,</span> <span 
class="n">out_classes</span><span class="p">,</span> <span class="kc">None</span><span class="p">)</span> <span class="c1"># Segmentation head.</span> <span class="c1"># ------------- refined = coarse + residual</span> <span class="n">x</span> <span class="o">=</span> <span class="n">layers</span><span class="o">.</span><span class="n">Add</span><span class="p">()([</span><span class="n">x_input</span><span class="p">,</span> <span class="n">x</span><span class="p">])</span> <span class="c1"># Add prediction + refinement output</span> <span class="k">return</span> <span class="n">keras</span><span class="o">.</span><span class="n">models</span><span class="o">.</span><span class="n">Model</span><span class="p">(</span><span class="n">inputs</span><span class="o">=</span><span class="p">[</span><span class="n">base_model</span><span class="o">.</span><span class="n">input</span><span class="p">],</span> <span class="n">outputs</span><span class="o">=</span><span class="p">[</span><span class="n">x</span><span class="p">])</span> </code></pre></div> <hr /> <h2 id="combine-predict-and-refinement-module">Combine Predict and Refinement Module</h2> <div class="codehilite"><pre><span></span><code><span class="k">def</span> <span class="nf">basnet</span><span class="p">(</span><span class="n">input_shape</span><span class="p">,</span> <span class="n">out_classes</span><span class="p">):</span> <span class="w"> </span><span class="sd">&quot;&quot;&quot;BASNet, it&#39;s a combination of two modules</span> <span class="sd"> Prediction Module and Residual Refinement Module(RRM).&quot;&quot;&quot;</span> <span class="c1"># Prediction model.</span> <span class="n">predict_model</span> <span class="o">=</span> <span class="n">basnet_predict</span><span class="p">(</span><span class="n">input_shape</span><span class="p">,</span> <span class="n">out_classes</span><span class="p">)</span> <span class="c1"># Refinement model.</span> <span class="n">refine_model</span> <span 
class="o">=</span> <span class="n">basnet_rrm</span><span class="p">(</span><span class="n">predict_model</span><span class="p">,</span> <span class="n">out_classes</span><span class="p">)</span> <span class="n">output</span> <span class="o">=</span> <span class="p">[</span><span class="n">refine_model</span><span class="o">.</span><span class="n">output</span><span class="p">]</span> <span class="c1"># Combine outputs.</span> <span class="n">output</span><span class="o">.</span><span class="n">extend</span><span class="p">(</span><span class="n">predict_model</span><span class="o">.</span><span class="n">output</span><span class="p">)</span> <span class="n">output</span> <span class="o">=</span> <span class="p">[</span><span class="n">layers</span><span class="o">.</span><span class="n">Activation</span><span class="p">(</span><span class="s2">&quot;sigmoid&quot;</span><span class="p">)(</span><span class="n">_</span><span class="p">)</span> <span class="k">for</span> <span class="n">_</span> <span class="ow">in</span> <span class="n">output</span><span class="p">]</span> <span class="c1"># Activations.</span> <span class="k">return</span> <span class="n">keras</span><span class="o">.</span><span class="n">models</span><span class="o">.</span><span class="n">Model</span><span class="p">(</span><span class="n">inputs</span><span class="o">=</span><span class="p">[</span><span class="n">predict_model</span><span class="o">.</span><span class="n">input</span><span class="p">],</span> <span class="n">outputs</span><span class="o">=</span><span class="n">output</span><span class="p">)</span> </code></pre></div> <hr /> <h2 id="hybrid-loss">Hybrid Loss</h2> <p>Another important feature of BASNet is its hybrid loss function, a combination of binary cross-entropy, structural similarity (SSIM), and intersection-over-union (IoU) losses, which guides the network to learn representations at three levels of the hierarchy: pixel, patch, and map.</p> <div 
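class="k-default-codeblock"></div>
<p>As a rough, standalone sketch (toy tensors with made-up shapes, not part of the tutorial code), the three terms of the hybrid loss can be computed directly with TensorFlow ops before reading the full <code>BasnetLoss</code> implementation below:</p>

```python
import tensorflow as tf

# Toy data: a batch of 2 binary masks, 64x64, single channel.
y_true = tf.cast(tf.random.uniform((2, 64, 64, 1)) > 0.5, tf.float32)
y_pred = tf.random.uniform((2, 64, 64, 1))

# Pixel-level term: binary cross-entropy.
bce = tf.keras.losses.BinaryCrossentropy()(y_true, y_pred)

# Patch-level term: structural similarity, turned into a loss.
ssim_loss = tf.reduce_mean(1.0 - tf.image.ssim(y_true, y_pred, max_val=1.0))

# Map-level term: soft intersection-over-union.
inter = tf.reduce_sum(y_true * y_pred)
union = tf.reduce_sum(y_true) + tf.reduce_sum(y_pred) - inter
iou_loss = 1.0 - inter / union

# BASNet sums all three terms into one scalar loss.
hybrid = bce + ssim_loss + iou_loss
```

<div 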
class="codehilite"><pre><span></span><code><span class="k">class</span> <span class="nc">BasnetLoss</span><span class="p">(</span><span class="n">keras</span><span class="o">.</span><span class="n">losses</span><span class="o">.</span><span class="n">Loss</span><span class="p">):</span> <span class="w"> </span><span class="sd">&quot;&quot;&quot;BASNet hybrid loss.&quot;&quot;&quot;</span> <span class="k">def</span> <span class="fm">__init__</span><span class="p">(</span><span class="bp">self</span><span class="p">,</span> <span class="o">**</span><span class="n">kwargs</span><span class="p">):</span> <span class="nb">super</span><span class="p">()</span><span class="o">.</span><span class="fm">__init__</span><span class="p">(</span><span class="n">name</span><span class="o">=</span><span class="s2">&quot;basnet_loss&quot;</span><span class="p">,</span> <span class="o">**</span><span class="n">kwargs</span><span class="p">)</span> <span class="bp">self</span><span class="o">.</span><span class="n">smooth</span> <span class="o">=</span> <span class="mf">1.0e-9</span> <span class="c1"># Binary Cross Entropy loss.</span> <span class="bp">self</span><span class="o">.</span><span class="n">cross_entropy_loss</span> <span class="o">=</span> <span class="n">keras</span><span class="o">.</span><span class="n">losses</span><span class="o">.</span><span class="n">BinaryCrossentropy</span><span class="p">()</span> <span class="c1"># Structural Similarity Index value.</span> <span class="bp">self</span><span class="o">.</span><span class="n">ssim_value</span> <span class="o">=</span> <span class="n">tf</span><span class="o">.</span><span class="n">image</span><span class="o">.</span><span class="n">ssim</span> <span class="c1"># Jaccard / IoU loss.</span> <span class="bp">self</span><span class="o">.</span><span class="n">iou_value</span> <span class="o">=</span> <span class="bp">self</span><span class="o">.</span><span class="n">calculate_iou</span> <span class="k">def</span> 
<span class="nf">calculate_iou</span><span class="p">(</span> <span class="bp">self</span><span class="p">,</span> <span class="n">y_true</span><span class="p">,</span> <span class="n">y_pred</span><span class="p">,</span> <span class="p">):</span> <span class="w"> </span><span class="sd">&quot;&quot;&quot;Calculate intersection over union (IoU) between images.&quot;&quot;&quot;</span> <span class="n">intersection</span> <span class="o">=</span> <span class="n">backend</span><span class="o">.</span><span class="n">sum</span><span class="p">(</span><span class="n">backend</span><span class="o">.</span><span class="n">abs</span><span class="p">(</span><span class="n">y_true</span> <span class="o">*</span> <span class="n">y_pred</span><span class="p">),</span> <span class="n">axis</span><span class="o">=</span><span class="p">[</span><span class="mi">1</span><span class="p">,</span> <span class="mi">2</span><span class="p">,</span> <span class="mi">3</span><span class="p">])</span> <span class="n">union</span> <span class="o">=</span> <span class="n">backend</span><span class="o">.</span><span class="n">sum</span><span class="p">(</span><span class="n">y_true</span><span class="p">,</span> <span class="p">[</span><span class="mi">1</span><span class="p">,</span> <span class="mi">2</span><span class="p">,</span> <span class="mi">3</span><span class="p">])</span> <span class="o">+</span> <span class="n">backend</span><span class="o">.</span><span class="n">sum</span><span class="p">(</span><span class="n">y_pred</span><span class="p">,</span> <span class="p">[</span><span class="mi">1</span><span class="p">,</span> <span class="mi">2</span><span class="p">,</span> <span class="mi">3</span><span class="p">])</span> <span class="n">union</span> <span class="o">=</span> <span class="n">union</span> <span class="o">-</span> <span class="n">intersection</span> <span class="k">return</span> <span class="n">backend</span><span class="o">.</span><span class="n">mean</span><span 
class="p">(</span> <span class="p">(</span><span class="n">intersection</span> <span class="o">+</span> <span class="bp">self</span><span class="o">.</span><span class="n">smooth</span><span class="p">)</span> <span class="o">/</span> <span class="p">(</span><span class="n">union</span> <span class="o">+</span> <span class="bp">self</span><span class="o">.</span><span class="n">smooth</span><span class="p">),</span> <span class="n">axis</span><span class="o">=</span><span class="mi">0</span> <span class="p">)</span> <span class="k">def</span> <span class="nf">call</span><span class="p">(</span><span class="bp">self</span><span class="p">,</span> <span class="n">y_true</span><span class="p">,</span> <span class="n">y_pred</span><span class="p">):</span> <span class="n">cross_entropy_loss</span> <span class="o">=</span> <span class="bp">self</span><span class="o">.</span><span class="n">cross_entropy_loss</span><span class="p">(</span><span class="n">y_true</span><span class="p">,</span> <span class="n">y_pred</span><span class="p">)</span> <span class="n">ssim_value</span> <span class="o">=</span> <span class="bp">self</span><span class="o">.</span><span class="n">ssim_value</span><span class="p">(</span><span class="n">y_true</span><span class="p">,</span> <span class="n">y_pred</span><span class="p">,</span> <span class="n">max_val</span><span class="o">=</span><span class="mi">1</span><span class="p">)</span> <span class="n">ssim_loss</span> <span class="o">=</span> <span class="n">backend</span><span class="o">.</span><span class="n">mean</span><span class="p">(</span><span class="mi">1</span> <span class="o">-</span> <span class="n">ssim_value</span> <span class="o">+</span> <span class="bp">self</span><span class="o">.</span><span class="n">smooth</span><span class="p">,</span> <span class="n">axis</span><span class="o">=</span><span class="mi">0</span><span class="p">)</span> <span class="n">iou_value</span> <span class="o">=</span> <span 
class="bp">self</span><span class="o">.</span><span class="n">iou_value</span><span class="p">(</span><span class="n">y_true</span><span class="p">,</span> <span class="n">y_pred</span><span class="p">)</span> <span class="n">iou_loss</span> <span class="o">=</span> <span class="mi">1</span> <span class="o">-</span> <span class="n">iou_value</span> <span class="c1"># Add all three losses.</span> <span class="k">return</span> <span class="n">cross_entropy_loss</span> <span class="o">+</span> <span class="n">ssim_loss</span> <span class="o">+</span> <span class="n">iou_loss</span> <span class="n">basnet_model</span> <span class="o">=</span> <span class="n">basnet</span><span class="p">(</span> <span class="n">input_shape</span><span class="o">=</span><span class="p">[</span><span class="n">IMAGE_SIZE</span><span class="p">,</span> <span class="n">IMAGE_SIZE</span><span class="p">,</span> <span class="mi">3</span><span class="p">],</span> <span class="n">out_classes</span><span class="o">=</span><span class="n">OUT_CLASSES</span> <span class="p">)</span> <span class="c1"># Create model.</span> <span class="n">basnet_model</span><span class="o">.</span><span class="n">summary</span><span class="p">()</span> <span class="c1"># Show model summary.</span> <span class="n">optimizer</span> <span class="o">=</span> <span class="n">keras</span><span class="o">.</span><span class="n">optimizers</span><span class="o">.</span><span class="n">Adam</span><span class="p">(</span><span class="n">learning_rate</span><span class="o">=</span><span class="mf">1e-4</span><span class="p">,</span> <span class="n">epsilon</span><span class="o">=</span><span class="mf">1e-8</span><span class="p">)</span> <span class="c1"># Compile model.</span> <span class="n">basnet_model</span><span class="o">.</span><span class="n">compile</span><span class="p">(</span> <span class="n">loss</span><span class="o">=</span><span class="n">BasnetLoss</span><span class="p">(),</span> <span 
class="n">optimizer</span><span class="o">=</span><span class="n">optimizer</span><span class="p">,</span> <span class="n">metrics</span><span class="o">=</span><span class="p">[</span><span class="n">keras</span><span class="o">.</span><span class="n">metrics</span><span class="o">.</span><span class="n">MeanAbsoluteError</span><span class="p">(</span><span class="n">name</span><span class="o">=</span><span class="s2">&quot;mae&quot;</span><span class="p">)],</span> <span class="p">)</span> </code></pre></div> <div class="k-default-codeblock"> <div class="codehilite"><pre><span></span><code>Model: &quot;model_2&quot; __________________________________________________________________________________________________ Layer (type) Output Shape Param # Connected to ================================================================================================== input_1 (InputLayer) [(None, 288, 288, 3)] 0 [] conv2d (Conv2D) (None, 288, 288, 64) 1792 [&#39;input_1[0][0]&#39;] resnet34_block1 (Functiona (None, None, None, 64) 222720 [&#39;conv2d[0][0]&#39;] l) activation (Activation) (None, 288, 288, 64) 0 [&#39;resnet34_block1[0][0]&#39;] resnet34_block2 (Functiona (None, None, None, 128) 1118720 [&#39;activation[0][0]&#39;] l) activation_1 (Activation) (None, 144, 144, 128) 0 [&#39;resnet34_block2[0][0]&#39;] resnet34_block3 (Functiona (None, None, None, 256) 6829056 [&#39;activation_1[0][0]&#39;] l) activation_2 (Activation) (None, 72, 72, 256) 0 [&#39;resnet34_block3[0][0]&#39;] resnet34_block4 (Functiona (None, None, None, 512) 1312153 [&#39;activation_2[0][0]&#39;] l) 6 activation_3 (Activation) (None, 36, 36, 512) 0 [&#39;resnet34_block4[0][0]&#39;] max_pooling2d (MaxPooling2 (None, 18, 18, 512) 0 [&#39;activation_3[0][0]&#39;] D) conv2d_1 (Conv2D) (None, 18, 18, 512) 2359296 [&#39;max_pooling2d[0][0]&#39;] batch_normalization (Batch (None, 18, 18, 512) 2048 [&#39;conv2d_1[0][0]&#39;] Normalization) activation_4 (Activation) (None, 18, 18, 512) 0 
[&#39;batch_normalization[0][0]&#39;] conv2d_2 (Conv2D) (None, 18, 18, 512) 2359296 [&#39;activation_4[0][0]&#39;] batch_normalization_1 (Bat (None, 18, 18, 512) 2048 [&#39;conv2d_2[0][0]&#39;] chNormalization) add (Add) (None, 18, 18, 512) 0 [&#39;batch_normalization_1[0][0]&#39; , &#39;max_pooling2d[0][0]&#39;] activation_5 (Activation) (None, 18, 18, 512) 0 [&#39;add[0][0]&#39;] conv2d_3 (Conv2D) (None, 18, 18, 512) 2359296 [&#39;activation_5[0][0]&#39;] batch_normalization_2 (Bat (None, 18, 18, 512) 2048 [&#39;conv2d_3[0][0]&#39;] chNormalization) activation_6 (Activation) (None, 18, 18, 512) 0 [&#39;batch_normalization_2[0][0]&#39; ] conv2d_4 (Conv2D) (None, 18, 18, 512) 2359296 [&#39;activation_6[0][0]&#39;] batch_normalization_3 (Bat (None, 18, 18, 512) 2048 [&#39;conv2d_4[0][0]&#39;] chNormalization) add_1 (Add) (None, 18, 18, 512) 0 [&#39;batch_normalization_3[0][0]&#39; , &#39;activation_5[0][0]&#39;] activation_7 (Activation) (None, 18, 18, 512) 0 [&#39;add_1[0][0]&#39;] conv2d_5 (Conv2D) (None, 18, 18, 512) 2359296 [&#39;activation_7[0][0]&#39;] batch_normalization_4 (Bat (None, 18, 18, 512) 2048 [&#39;conv2d_5[0][0]&#39;] chNormalization) activation_8 (Activation) (None, 18, 18, 512) 0 [&#39;batch_normalization_4[0][0]&#39; ] conv2d_6 (Conv2D) (None, 18, 18, 512) 2359296 [&#39;activation_8[0][0]&#39;] batch_normalization_5 (Bat (None, 18, 18, 512) 2048 [&#39;conv2d_6[0][0]&#39;] chNormalization) add_2 (Add) (None, 18, 18, 512) 0 [&#39;batch_normalization_5[0][0]&#39; , &#39;activation_7[0][0]&#39;] activation_9 (Activation) (None, 18, 18, 512) 0 [&#39;add_2[0][0]&#39;] max_pooling2d_1 (MaxPoolin (None, 9, 9, 512) 0 [&#39;activation_9[0][0]&#39;] g2D) conv2d_7 (Conv2D) (None, 9, 9, 512) 2359296 [&#39;max_pooling2d_1[0][0]&#39;] batch_normalization_6 (Bat (None, 9, 9, 512) 2048 [&#39;conv2d_7[0][0]&#39;] chNormalization) activation_10 (Activation) (None, 9, 9, 512) 0 [&#39;batch_normalization_6[0][0]&#39; ] conv2d_8 (Conv2D) (None, 9, 9, 512) 2359296 
[&#39;activation_10[0][0]&#39;] batch_normalization_7 (Bat (None, 9, 9, 512) 2048 [&#39;conv2d_8[0][0]&#39;] chNormalization) add_3 (Add) (None, 9, 9, 512) 0 [&#39;batch_normalization_7[0][0]&#39; , &#39;max_pooling2d_1[0][0]&#39;] activation_11 (Activation) (None, 9, 9, 512) 0 [&#39;add_3[0][0]&#39;] conv2d_9 (Conv2D) (None, 9, 9, 512) 2359296 [&#39;activation_11[0][0]&#39;] batch_normalization_8 (Bat (None, 9, 9, 512) 2048 [&#39;conv2d_9[0][0]&#39;] chNormalization) activation_12 (Activation) (None, 9, 9, 512) 0 [&#39;batch_normalization_8[0][0]&#39; ] conv2d_10 (Conv2D) (None, 9, 9, 512) 2359296 [&#39;activation_12[0][0]&#39;] batch_normalization_9 (Bat (None, 9, 9, 512) 2048 [&#39;conv2d_10[0][0]&#39;] chNormalization) add_4 (Add) (None, 9, 9, 512) 0 [&#39;batch_normalization_9[0][0]&#39; , &#39;activation_11[0][0]&#39;] activation_13 (Activation) (None, 9, 9, 512) 0 [&#39;add_4[0][0]&#39;] conv2d_11 (Conv2D) (None, 9, 9, 512) 2359296 [&#39;activation_13[0][0]&#39;] batch_normalization_10 (Ba (None, 9, 9, 512) 2048 [&#39;conv2d_11[0][0]&#39;] tchNormalization) activation_14 (Activation) (None, 9, 9, 512) 0 [&#39;batch_normalization_10[0][0] &#39;] conv2d_12 (Conv2D) (None, 9, 9, 512) 2359296 [&#39;activation_14[0][0]&#39;] batch_normalization_11 (Ba (None, 9, 9, 512) 2048 [&#39;conv2d_12[0][0]&#39;] tchNormalization) add_5 (Add) (None, 9, 9, 512) 0 [&#39;batch_normalization_11[0][0] &#39;, &#39;activation_13[0][0]&#39;] activation_15 (Activation) (None, 9, 9, 512) 0 [&#39;add_5[0][0]&#39;] conv2d_13 (Conv2D) (None, 9, 9, 512) 2359808 [&#39;activation_15[0][0]&#39;] batch_normalization_12 (Ba (None, 9, 9, 512) 2048 [&#39;conv2d_13[0][0]&#39;] tchNormalization) activation_16 (Activation) (None, 9, 9, 512) 0 [&#39;batch_normalization_12[0][0] &#39;] conv2d_14 (Conv2D) (None, 9, 9, 512) 2359808 [&#39;activation_16[0][0]&#39;] batch_normalization_13 (Ba (None, 9, 9, 512) 2048 [&#39;conv2d_14[0][0]&#39;] tchNormalization) activation_17 (Activation) (None, 9, 9, 512) 
0 [&#39;batch_normalization_13[0][0] &#39;] conv2d_15 (Conv2D) (None, 9, 9, 512) 2359808 [&#39;activation_17[0][0]&#39;] batch_normalization_14 (Ba (None, 9, 9, 512) 2048 [&#39;conv2d_15[0][0]&#39;] tchNormalization) activation_18 (Activation) (None, 9, 9, 512) 0 [&#39;batch_normalization_14[0][0] &#39;] concatenate (Concatenate) (None, 9, 9, 1024) 0 [&#39;activation_15[0][0]&#39;, &#39;activation_18[0][0]&#39;] conv2d_16 (Conv2D) (None, 9, 9, 512) 4719104 [&#39;concatenate[0][0]&#39;] batch_normalization_15 (Ba (None, 9, 9, 512) 2048 [&#39;conv2d_16[0][0]&#39;] tchNormalization) activation_19 (Activation) (None, 9, 9, 512) 0 [&#39;batch_normalization_15[0][0] &#39;] conv2d_17 (Conv2D) (None, 9, 9, 512) 2359808 [&#39;activation_19[0][0]&#39;] batch_normalization_16 (Ba (None, 9, 9, 512) 2048 [&#39;conv2d_17[0][0]&#39;] tchNormalization) activation_20 (Activation) (None, 9, 9, 512) 0 [&#39;batch_normalization_16[0][0] &#39;] conv2d_18 (Conv2D) (None, 9, 9, 512) 2359808 [&#39;activation_20[0][0]&#39;] batch_normalization_17 (Ba (None, 9, 9, 512) 2048 [&#39;conv2d_18[0][0]&#39;] tchNormalization) activation_21 (Activation) (None, 9, 9, 512) 0 [&#39;batch_normalization_17[0][0] &#39;] resizing (Resizing) (None, 18, 18, 512) 0 [&#39;activation_21[0][0]&#39;] concatenate_1 (Concatenate (None, 18, 18, 1024) 0 [&#39;activation_9[0][0]&#39;, ) &#39;resizing[0][0]&#39;] conv2d_19 (Conv2D) (None, 18, 18, 512) 4719104 [&#39;concatenate_1[0][0]&#39;] batch_normalization_18 (Ba (None, 18, 18, 512) 2048 [&#39;conv2d_19[0][0]&#39;] tchNormalization) activation_22 (Activation) (None, 18, 18, 512) 0 [&#39;batch_normalization_18[0][0] &#39;] conv2d_20 (Conv2D) (None, 18, 18, 512) 2359808 [&#39;activation_22[0][0]&#39;] batch_normalization_19 (Ba (None, 18, 18, 512) 2048 [&#39;conv2d_20[0][0]&#39;] tchNormalization) activation_23 (Activation) (None, 18, 18, 512) 0 [&#39;batch_normalization_19[0][0] &#39;] conv2d_21 (Conv2D) (None, 18, 18, 512) 2359808 [&#39;activation_23[0][0]&#39;] 
batch_normalization_20 (Ba (None, 18, 18, 512) 2048 [&#39;conv2d_21[0][0]&#39;] tchNormalization) activation_24 (Activation) (None, 18, 18, 512) 0 [&#39;batch_normalization_20[0][0] &#39;] resizing_1 (Resizing) (None, 36, 36, 512) 0 [&#39;activation_24[0][0]&#39;] concatenate_2 (Concatenate (None, 36, 36, 1024) 0 [&#39;resnet34_block4[0][0]&#39;, ) &#39;resizing_1[0][0]&#39;] conv2d_22 (Conv2D) (None, 36, 36, 512) 4719104 [&#39;concatenate_2[0][0]&#39;] batch_normalization_21 (Ba (None, 36, 36, 512) 2048 [&#39;conv2d_22[0][0]&#39;] tchNormalization) activation_25 (Activation) (None, 36, 36, 512) 0 [&#39;batch_normalization_21[0][0] &#39;] conv2d_23 (Conv2D) (None, 36, 36, 512) 2359808 [&#39;activation_25[0][0]&#39;] batch_normalization_22 (Ba (None, 36, 36, 512) 2048 [&#39;conv2d_23[0][0]&#39;] tchNormalization) activation_26 (Activation) (None, 36, 36, 512) 0 [&#39;batch_normalization_22[0][0] &#39;] conv2d_24 (Conv2D) (None, 36, 36, 512) 2359808 [&#39;activation_26[0][0]&#39;] batch_normalization_23 (Ba (None, 36, 36, 512) 2048 [&#39;conv2d_24[0][0]&#39;] tchNormalization) activation_27 (Activation) (None, 36, 36, 512) 0 [&#39;batch_normalization_23[0][0] &#39;] resizing_2 (Resizing) (None, 72, 72, 512) 0 [&#39;activation_27[0][0]&#39;] concatenate_3 (Concatenate (None, 72, 72, 768) 0 [&#39;resnet34_block3[0][0]&#39;, ) &#39;resizing_2[0][0]&#39;] conv2d_25 (Conv2D) (None, 72, 72, 512) 3539456 [&#39;concatenate_3[0][0]&#39;] batch_normalization_24 (Ba (None, 72, 72, 512) 2048 [&#39;conv2d_25[0][0]&#39;] tchNormalization) activation_28 (Activation) (None, 72, 72, 512) 0 [&#39;batch_normalization_24[0][0] &#39;] conv2d_26 (Conv2D) (None, 72, 72, 512) 2359808 [&#39;activation_28[0][0]&#39;] batch_normalization_25 (Ba (None, 72, 72, 512) 2048 [&#39;conv2d_26[0][0]&#39;] tchNormalization) activation_29 (Activation) (None, 72, 72, 512) 0 [&#39;batch_normalization_25[0][0] &#39;] conv2d_27 (Conv2D) (None, 72, 72, 512) 2359808 [&#39;activation_29[0][0]&#39;] 
batch_normalization_26 (Ba (None, 72, 72, 512) 2048 [&#39;conv2d_27[0][0]&#39;] tchNormalization) activation_30 (Activation) (None, 72, 72, 512) 0 [&#39;batch_normalization_26[0][0] &#39;] resizing_3 (Resizing) (None, 144, 144, 512) 0 [&#39;activation_30[0][0]&#39;] concatenate_4 (Concatenate (None, 144, 144, 640) 0 [&#39;resnet34_block2[0][0]&#39;, ) &#39;resizing_3[0][0]&#39;] conv2d_28 (Conv2D) (None, 144, 144, 512) 2949632 [&#39;concatenate_4[0][0]&#39;] batch_normalization_27 (Ba (None, 144, 144, 512) 2048 [&#39;conv2d_28[0][0]&#39;] tchNormalization) activation_31 (Activation) (None, 144, 144, 512) 0 [&#39;batch_normalization_27[0][0] &#39;] conv2d_29 (Conv2D) (None, 144, 144, 512) 2359808 [&#39;activation_31[0][0]&#39;] batch_normalization_28 (Ba (None, 144, 144, 512) 2048 [&#39;conv2d_29[0][0]&#39;] tchNormalization) activation_32 (Activation) (None, 144, 144, 512) 0 [&#39;batch_normalization_28[0][0] &#39;] conv2d_30 (Conv2D) (None, 144, 144, 512) 2359808 [&#39;activation_32[0][0]&#39;] batch_normalization_29 (Ba (None, 144, 144, 512) 2048 [&#39;conv2d_30[0][0]&#39;] tchNormalization) activation_33 (Activation) (None, 144, 144, 512) 0 [&#39;batch_normalization_29[0][0] &#39;] resizing_4 (Resizing) (None, 288, 288, 512) 0 [&#39;activation_33[0][0]&#39;] concatenate_5 (Concatenate (None, 288, 288, 576) 0 [&#39;resnet34_block1[0][0]&#39;, ) &#39;resizing_4[0][0]&#39;] conv2d_31 (Conv2D) (None, 288, 288, 512) 2654720 [&#39;concatenate_5[0][0]&#39;] batch_normalization_30 (Ba (None, 288, 288, 512) 2048 [&#39;conv2d_31[0][0]&#39;] tchNormalization) activation_34 (Activation) (None, 288, 288, 512) 0 [&#39;batch_normalization_30[0][0] &#39;] conv2d_32 (Conv2D) (None, 288, 288, 512) 2359808 [&#39;activation_34[0][0]&#39;] batch_normalization_31 (Ba (None, 288, 288, 512) 2048 [&#39;conv2d_32[0][0]&#39;] tchNormalization) activation_35 (Activation) (None, 288, 288, 512) 0 [&#39;batch_normalization_31[0][0] &#39;] conv2d_33 (Conv2D) (None, 288, 288, 512) 2359808 
[&#39;activation_35[0][0]&#39;] batch_normalization_32 (Ba (None, 288, 288, 512) 2048 [&#39;conv2d_33[0][0]&#39;] tchNormalization) activation_36 (Activation) (None, 288, 288, 512) 0 [&#39;batch_normalization_32[0][0] &#39;] conv2d_34 (Conv2D) (None, 288, 288, 1) 4609 [&#39;activation_36[0][0]&#39;] resizing_5 (Resizing) (None, 288, 288, 1) 0 [&#39;conv2d_34[0][0]&#39;] conv2d_41 (Conv2D) (None, 288, 288, 64) 640 [&#39;resizing_5[0][0]&#39;] conv2d_42 (Conv2D) (None, 288, 288, 64) 36928 [&#39;conv2d_41[0][0]&#39;] batch_normalization_33 (Ba (None, 288, 288, 64) 256 [&#39;conv2d_42[0][0]&#39;] tchNormalization) activation_37 (Activation) (None, 288, 288, 64) 0 [&#39;batch_normalization_33[0][0] &#39;] max_pooling2d_2 (MaxPoolin (None, 144, 144, 64) 0 [&#39;activation_37[0][0]&#39;] g2D) conv2d_43 (Conv2D) (None, 144, 144, 64) 36928 [&#39;max_pooling2d_2[0][0]&#39;] batch_normalization_34 (Ba (None, 144, 144, 64) 256 [&#39;conv2d_43[0][0]&#39;] tchNormalization) activation_38 (Activation) (None, 144, 144, 64) 0 [&#39;batch_normalization_34[0][0] &#39;] max_pooling2d_3 (MaxPoolin (None, 72, 72, 64) 0 [&#39;activation_38[0][0]&#39;] g2D) conv2d_44 (Conv2D) (None, 72, 72, 64) 36928 [&#39;max_pooling2d_3[0][0]&#39;] batch_normalization_35 (Ba (None, 72, 72, 64) 256 [&#39;conv2d_44[0][0]&#39;] tchNormalization) activation_39 (Activation) (None, 72, 72, 64) 0 [&#39;batch_normalization_35[0][0] &#39;] max_pooling2d_4 (MaxPoolin (None, 36, 36, 64) 0 [&#39;activation_39[0][0]&#39;] g2D) conv2d_45 (Conv2D) (None, 36, 36, 64) 36928 [&#39;max_pooling2d_4[0][0]&#39;] batch_normalization_36 (Ba (None, 36, 36, 64) 256 [&#39;conv2d_45[0][0]&#39;] tchNormalization) activation_40 (Activation) (None, 36, 36, 64) 0 [&#39;batch_normalization_36[0][0] &#39;] max_pooling2d_5 (MaxPoolin (None, 18, 18, 64) 0 [&#39;activation_40[0][0]&#39;] g2D) conv2d_46 (Conv2D) (None, 18, 18, 64) 36928 [&#39;max_pooling2d_5[0][0]&#39;] batch_normalization_37 (Ba (None, 18, 18, 64) 256 
[&#39;conv2d_46[0][0]&#39;] tchNormalization) activation_41 (Activation) (None, 18, 18, 64) 0 [&#39;batch_normalization_37[0][0] &#39;] resizing_12 (Resizing) (None, 36, 36, 64) 0 [&#39;activation_41[0][0]&#39;] concatenate_6 (Concatenate (None, 36, 36, 128) 0 [&#39;activation_40[0][0]&#39;, ) &#39;resizing_12[0][0]&#39;] conv2d_47 (Conv2D) (None, 36, 36, 64) 73792 [&#39;concatenate_6[0][0]&#39;] batch_normalization_38 (Ba (None, 36, 36, 64) 256 [&#39;conv2d_47[0][0]&#39;] tchNormalization) activation_42 (Activation) (None, 36, 36, 64) 0 [&#39;batch_normalization_38[0][0] &#39;] resizing_13 (Resizing) (None, 72, 72, 64) 0 [&#39;activation_42[0][0]&#39;] concatenate_7 (Concatenate (None, 72, 72, 128) 0 [&#39;activation_39[0][0]&#39;, ) &#39;resizing_13[0][0]&#39;] conv2d_48 (Conv2D) (None, 72, 72, 64) 73792 [&#39;concatenate_7[0][0]&#39;] batch_normalization_39 (Ba (None, 72, 72, 64) 256 [&#39;conv2d_48[0][0]&#39;] tchNormalization) activation_43 (Activation) (None, 72, 72, 64) 0 [&#39;batch_normalization_39[0][0] &#39;] resizing_14 (Resizing) (None, 144, 144, 64) 0 [&#39;activation_43[0][0]&#39;] concatenate_8 (Concatenate (None, 144, 144, 128) 0 [&#39;activation_38[0][0]&#39;, ) &#39;resizing_14[0][0]&#39;] conv2d_49 (Conv2D) (None, 144, 144, 64) 73792 [&#39;concatenate_8[0][0]&#39;] batch_normalization_40 (Ba (None, 144, 144, 64) 256 [&#39;conv2d_49[0][0]&#39;] tchNormalization) activation_44 (Activation) (None, 144, 144, 64) 0 [&#39;batch_normalization_40[0][0] &#39;] resizing_15 (Resizing) (None, 288, 288, 64) 0 [&#39;activation_44[0][0]&#39;] concatenate_9 (Concatenate (None, 288, 288, 128) 0 [&#39;activation_37[0][0]&#39;, ) &#39;resizing_15[0][0]&#39;] conv2d_50 (Conv2D) (None, 288, 288, 64) 73792 [&#39;concatenate_9[0][0]&#39;] batch_normalization_41 (Ba (None, 288, 288, 64) 256 [&#39;conv2d_50[0][0]&#39;] tchNormalization) activation_45 (Activation) (None, 288, 288, 64) 0 [&#39;batch_normalization_41[0][0] &#39;] conv2d_51 (Conv2D) (None, 288, 288, 1) 577 
[&#39;activation_45[0][0]&#39;] conv2d_35 (Conv2D) (None, 144, 144, 1) 4609 [&#39;activation_33[0][0]&#39;] conv2d_36 (Conv2D) (None, 72, 72, 1) 4609 [&#39;activation_30[0][0]&#39;] conv2d_37 (Conv2D) (None, 36, 36, 1) 4609 [&#39;activation_27[0][0]&#39;] conv2d_38 (Conv2D) (None, 18, 18, 1) 4609 [&#39;activation_24[0][0]&#39;] conv2d_39 (Conv2D) (None, 9, 9, 1) 4609 [&#39;activation_21[0][0]&#39;] conv2d_40 (Conv2D) (None, 9, 9, 1) 4609 [&#39;activation_18[0][0]&#39;] add_6 (Add) (None, 288, 288, 1) 0 [&#39;resizing_5[0][0]&#39;, &#39;conv2d_51[0][0]&#39;] resizing_6 (Resizing) (None, 288, 288, 1) 0 [&#39;conv2d_35[0][0]&#39;] resizing_7 (Resizing) (None, 288, 288, 1) 0 [&#39;conv2d_36[0][0]&#39;] resizing_8 (Resizing) (None, 288, 288, 1) 0 [&#39;conv2d_37[0][0]&#39;] resizing_9 (Resizing) (None, 288, 288, 1) 0 [&#39;conv2d_38[0][0]&#39;] resizing_10 (Resizing) (None, 288, 288, 1) 0 [&#39;conv2d_39[0][0]&#39;] resizing_11 (Resizing) (None, 288, 288, 1) 0 [&#39;conv2d_40[0][0]&#39;] activation_46 (Activation) (None, 288, 288, 1) 0 [&#39;add_6[0][0]&#39;] activation_47 (Activation) (None, 288, 288, 1) 0 [&#39;resizing_5[0][0]&#39;] activation_48 (Activation) (None, 288, 288, 1) 0 [&#39;resizing_6[0][0]&#39;] activation_49 (Activation) (None, 288, 288, 1) 0 [&#39;resizing_7[0][0]&#39;] activation_50 (Activation) (None, 288, 288, 1) 0 [&#39;resizing_8[0][0]&#39;] activation_51 (Activation) (None, 288, 288, 1) 0 [&#39;resizing_9[0][0]&#39;] activation_52 (Activation) (None, 288, 288, 1) 0 [&#39;resizing_10[0][0]&#39;] activation_53 (Activation) (None, 288, 288, 1) 0 [&#39;resizing_11[0][0]&#39;] ================================================================================================== Total params: 108886792 (415.37 MB) Trainable params: 108834952 (415.17 MB) Non-trainable params: 51840 (202.50 KB) __________________________________________________________________________________________________ </code></pre></div> </div> <h3 id="train-the-model">Train the 
Model</h3> <div class="codehilite"><pre><span></span><code><span class="n">basnet_model</span><span class="o">.</span><span class="n">fit</span><span class="p">(</span><span class="n">train_dataset</span><span class="p">,</span> <span class="n">validation_data</span><span class="o">=</span><span class="n">val_dataset</span><span class="p">,</span> <span class="n">epochs</span><span class="o">=</span><span class="mi">1</span><span class="p">)</span> </code></pre></div> <div class="k-default-codeblock"> <div class="codehilite"><pre><span></span><code>32/32 [==============================] - 153s 2s/step - loss: 16.3507 - activation_46_loss: 2.1445 - activation_47_loss: 2.1512 - activation_48_loss: 2.0621 - activation_49_loss: 2.0755 - activation_50_loss: 2.1406 - activation_51_loss: 1.9035 - activation_52_loss: 1.8702 - activation_53_loss: 2.0031 - activation_46_mae: 0.2972 - activation_47_mae: 0.3126 - activation_48_mae: 0.2793 - activation_49_mae: 0.2887 - activation_50_mae: 0.3280 - activation_51_mae: 0.2548 - activation_52_mae: 0.2330 - activation_53_mae: 0.2564 - val_loss: 18.4498 - val_activation_46_loss: 2.3113 - val_activation_47_loss: 2.3143 - val_activation_48_loss: 2.3356 - val_activation_49_loss: 2.3093 - val_activation_50_loss: 2.3187 - val_activation_51_loss: 2.3943 - val_activation_52_loss: 2.2712 - val_activation_53_loss: 2.1952 - val_activation_46_mae: 0.2770 - val_activation_47_mae: 0.2681 - val_activation_48_mae: 0.2424 - val_activation_49_mae: 0.2691 - val_activation_50_mae: 0.2765 - val_activation_51_mae: 0.1907 - val_activation_52_mae: 0.1885 - val_activation_53_mae: 0.2938 &lt;keras.src.callbacks.History at 0x79b024bd83a0&gt; </code></pre></div> </div> <h3 id="visualize-predictions">Visualize Predictions</h3> <p>In the paper, BASNet was trained on the DUTS-TR dataset, which contains 10,553 images. The model was trained for 400k iterations with a batch size of eight and without a validation dataset. 
After training, the model was evaluated on the DUTS-TE dataset, achieving a mean absolute error of <code>0.042</code>.</p> <p>Since BASNet is a deep model that cannot be trained in the short time a Keras example notebook allows, we will load pretrained weights from <a href="https://github.com/hamidriasat/BASNet/tree/basnet_keras">here</a> to demonstrate the model's predictions. Due to compute limitations, this model was trained for only 120k iterations, but it still demonstrates its capabilities. For further details about the training parameters, please check the linked repository.</p> <div class="codehilite"><pre><span></span><code><span class="err">!!</span><span class="n">gdown</span> <span class="mi">1</span><span class="n">OWKouuAQ7XpXZbWA3mmxDPrFGW71Axrg</span> </code></pre></div> <div class="codehilite"><pre><span></span><code><span class="k">def</span> <span class="nf">normalize_output</span><span class="p">(</span><span class="n">prediction</span><span class="p">):</span> <span class="n">max_value</span> <span class="o">=</span> <span class="n">np</span><span class="o">.</span><span class="n">max</span><span class="p">(</span><span class="n">prediction</span><span class="p">)</span> <span class="n">min_value</span> <span class="o">=</span> <span class="n">np</span><span class="o">.</span><span class="n">min</span><span class="p">(</span><span class="n">prediction</span><span class="p">)</span> <span class="k">return</span> <span class="p">(</span><span class="n">prediction</span> <span class="o">-</span> <span class="n">min_value</span><span class="p">)</span> <span class="o">/</span> <span class="p">(</span><span class="n">max_value</span> <span class="o">-</span> <span class="n">min_value</span><span class="p">)</span> <span class="c1"># Load weights.</span> <span class="n">basnet_model</span><span class="o">.</span><span class="n">load_weights</span><span class="p">(</span><span class="s2">&quot;./basnet_weights.h5&quot;</span><span class="p">)</span> 
</code></pre></div> <div class="k-default-codeblock"> <div class="codehilite"><pre><span></span><code>[&#39;Downloading...&#39;, &#39;From: https://drive.google.com/uc?id=1OWKouuAQ7XpXZbWA3mmxDPrFGW71Axrg&#39;, &#39;To: /content/keras-io/scripts/tmp_3792671/basnet_weights.h5&#39;, &#39;&#39;, &#39;100% 436M/436M [00:06&lt;00:00, 71.3MB/s]&#39;] </code></pre></div> </div> <h3 id="make-predictions">Make Predictions</h3> <div class="codehilite"><pre><span></span><code><span class="k">for</span> <span class="n">image</span><span class="p">,</span> <span class="n">mask</span> <span class="ow">in</span> <span class="n">val_dataset</span><span class="o">.</span><span class="n">take</span><span class="p">(</span><span class="mi">1</span><span class="p">):</span> <span class="n">pred_mask</span> <span class="o">=</span> <span class="n">basnet_model</span><span class="o">.</span><span class="n">predict</span><span class="p">(</span><span class="n">image</span><span class="p">)</span> <span class="n">display</span><span class="p">([</span><span class="n">image</span><span class="p">[</span><span class="mi">0</span><span class="p">],</span> <span class="n">mask</span><span class="p">[</span><span class="mi">0</span><span class="p">],</span> <span class="n">normalize_output</span><span class="p">(</span><span class="n">pred_mask</span><span class="p">[</span><span class="mi">0</span><span class="p">][</span><span class="mi">0</span><span class="p">])])</span> </code></pre></div> <div class="k-default-codeblock"> <div class="codehilite"><pre><span></span><code>1/1 [==============================] - 2s 2s/step </code></pre></div> </div> <p><img alt="png" src="/img/examples/vision/basnet_segmentation/basnet_segmentation_29_1.png" /></p> </div> <div class='k-outline'> <div class='k-outline-depth-1'> <a href='#highly-accurate-boundaries-segmentation-using-basnet'>Highly accurate boundaries segmentation using BASNet</a> </div> <div class='k-outline-depth-2'> ◆ <a href='#introduction'>Introduction</a> </div> <div class='k-outline-depth-3'> <a href='#references'>References:</a> </div> <div class='k-outline-depth-2'> ◆ <a href='#download-the-data'>Download the Data</a> </div> <div 
class='k-outline-depth-2'> ◆ <a href='#define-hyperparameters'>Define Hyperparameters</a> </div> <div class='k-outline-depth-2'> ◆ <a href='#create-tensorflow-dataset'>Create TensorFlow Dataset</a> </div> <div class='k-outline-depth-2'> ◆ <a href='#visualize-data'>Visualize Data</a> </div> <div class='k-outline-depth-2'> ◆ <a href='#analyze-mask'>Analyze Mask</a> </div> <div class='k-outline-depth-2'> ◆ <a href='#building-the-basnet-model'>Building the BASNet Model</a> </div> <div class='k-outline-depth-2'> ◆ <a href='#prediction-module'>Prediction Module</a> </div> <div class='k-outline-depth-2'> ◆ <a href='#residual-refinement-module'>Residual Refinement Module</a> </div> <div class='k-outline-depth-2'> ◆ <a href='#combine-predict-and-refinement-module'>Combine Predict and Refinement Module</a> </div> <div class='k-outline-depth-2'> ◆ <a href='#hybrid-loss'>Hybrid Loss</a> </div> <div class='k-outline-depth-3'> <a href='#train-the-model'>Train the Model</a> </div> <div class='k-outline-depth-3'> <a href='#visualize-predictions'>Visualize Predictions</a> </div> <div class='k-outline-depth-3'> <a href='#make-predictions'>Make Predictions</a> </div> </div> </div> </div> </div> </body> <footer style="float: left; width: 100%; padding: 1em; border-top: solid 1px #bbb;"> <a href="https://policies.google.com/terms">Terms</a> | <a href="https://policies.google.com/privacy">Privacy</a> </footer> </html>
