S²-MLP: Spatial-Shift MLP Architecture for Vision
Tan Yu, Xu Li, Yunfeng Cai, Mingming Sun, Ping Li
Cognitive Computing Lab
Baidu Research
10900 NE 8th St. Bellevue, Washington 98004, USA
No.10 Xibeiwang East Road, Beijing 100193, China
{tanyu01,lixu13,caiyunfeng,sunmingming01,liping11}@baidu.com

Abstract

Recently, the visual Transformer (ViT) and its following works abandon the convolution and exploit the self-attention operation, attaining a comparable or even higher accuracy than CNNs. More recently, MLP-mixer abandons both the convolution and the self-attention operation, proposing an architecture containing only MLP layers.
To achieve cross-patch communications, it devises an additional token-mixing MLP besides the channel-mixing MLP. It achieves promising results when training on an extremely large-scale dataset such as JFT-300M. But it cannot achieve as outstanding performance as its CNN and ViT counterparts when training on medium-scale datasets such as ImageNet-1K. The performance drop of MLP-mixer motivates us to rethink the token-mixing MLP. We discover that the token-mixing operation in MLP-mixer is a variant of depthwise convolution with a global receptive field and a spatial-specific configuration. In this paper, we propose a novel pure-MLP architecture, spatial-shift MLP (S²-MLP). Different from MLP-mixer, our S²-MLP only contains channel-mixing MLPs. We devise a spatial-shift operation to achieve communication between patches. It has a local receptive field and is spatial-agnostic. Meanwhile, it is parameter-free and efficient for computation. The proposed S²-MLP attains higher recognition accuracy than MLP-mixer when training on the ImageNet-1K dataset. Meanwhile, S²-MLP accomplishes as excellent performance as ViT on the ImageNet-1K dataset with a considerably simpler architecture and fewer FLOPs and parameters.

1. Introduction

In the past years, convolutional neural networks (CNN) [23, 14] have achieved great success in computer vision.
Recently, inspired by the triumph of the Transformer [41] in natural language processing, the vision Transformer (ViT) [9] was proposed. It replaces the convolution operation in CNN with the self-attention operation used in the Transformer to model the visual relations between local patches in different spatial locations of the image. ViT and its following works [38, 46, 42, 28, 13, 44, 39] have achieved comparable or even better performance than CNNs. Compared with CNN, which demands a meticulous design of the convolution kernel, ViT stacks several standard Transformer blocks, taking less hand-crafted manipulation and reducing the inductive biases.

More recently, MLP-mixer [36] proposes a simpler alternative based entirely on multi-layer perceptrons (MLP) to further reduce the inductive biases. The basic block in MLP-mixer consists of two components: a channel-mixing MLP and a token-mixing MLP. The channel-mixing MLP projects the feature map along the channel dimension and thus conducts the communications between different channels. In parallel, the token-mixing MLP projects the feature map along the spatial dimension and exploits the communications between spatial locations. When training on an ultra-large-scale dataset such as JFT-300M [33], MLP-mixer attains promising recognition accuracy.
But there is still an accuracy gap between MLP-mixer and ViT when training on medium-scale datasets including ImageNet-1K and ImageNet-21K [8]. Specifically, Mixer-Base-16 [36] achieves only a 76.44% top-1 accuracy on ImageNet-1K, whereas ViT-Base-16 [9] achieves a 79.67% top-1 accuracy.

The unsatisfactory performance of MLP-mixer on ImageNet-1K motivates us to rethink the token-mixing MLP. Given N patch features in matrix form, X = [x_1, ..., x_N], the token-mixing MLP conducts XW, where W ∈ R^{N×M} is the weight matrix. It is easy to observe that each column of XW, the output of the token-mixing MLP, is a weighted summation of the patch features (columns in X). The weights in the summation can be regarded as the attention in the Transformer. But the self-attention in the Transformer is data-dependent, whereas the weights for summation in the token-mixing MLP are agnostic to the input. To some extent, the weighted summation is more like a depthwise convolution [5, 19, 20]. But the depthwise convolution only has a local receptive field.
In contrast, the token-mixing MLP has a global receptive field. Besides, the depthwise convolution kernel is shared among different locations, whereas the weights for summation in the token-mixing MLP are different for different locations. Without the limitation of the local receptive field and the spatial-agnostic constraint, the token-mixing MLP is more flexible and has a stronger fitting capability.
But the freedom gained from removing these constraints comes at the cost of losing the spatially-invariant property.

Figure 1. The architecture of the proposed spatial-shift multi-layer perceptron (S²-MLP) model. Non-overlapping patches cropped from an image are the input of the model. They go through a stack of S²-MLP blocks; the resulting patch features are further aggregated into a single feature vector through global average pooling. After that, the feature vector is fed into a fully-connected layer for predicting the label. An S²-MLP block contains four fully-connected layers, two GELU layers [15], two layer-normalization layers [1], two skip connections [14], and a spatial-shift module. The proposed spatial-shift module groups the c channels into several groups. Then it shifts different groups of channels in different directions.

In this work, we propose a spatial-shift MLP (S²-MLP) architecture, a conceptually simple architecture containing only channel-mixing MLPs. To conduct communication between spatial locations, we adopt a spatial-shift operation, which is parameter-free and efficient for computation. Meanwhile, the spatial shift is spatial-agnostic while maintaining a local receptive field. Figure 1 illustrates the architecture of the proposed S²-MLP. It crops an image into w×h non-overlapping patches. For each patch, it obtains the patch embedding through a fully-connected layer. The wh patches further go through a stack of S²-MLP blocks for feature extraction. Each S²-MLP block contains four fully-connected layers. The fully-connected layers used in each S²-MLP block serve the same function as the channel-mixing MLP used in MLP-mixer. But our S²-MLP does not need a token-mixing MLP. Instead, the communications between different spatial locations are achieved through the proposed spatial-shift module. It is parameter-free and simply shifts channels from a patch to its adjoining patches. Although the spatial-shift module only supports communications between adjacent patches, stacking a series of S²-MLP blocks makes long-range communications feasible.

The proposed S²-MLP is frustratingly simple and elegant in architecture.
It attains considerably higher recognition accuracy than MLP-mixer on the ImageNet-1K dataset with a comparable scale of parameters and FLOPs. Meanwhile, it achieves a comparable recognition accuracy with respect to ViT on the ImageNet-1K dataset with a considerably simpler structure and fewer parameters and FLOPs.

2. Related Work

Transformer-based vision models. Visual Transformer (ViT) [9] is the first work to build a purely Transformer-based vision backbone. Through training on an extremely large-scale dataset, JFT-300M [33], it has achieved promising results compared with the de facto vision backbone, the convolutional neural network. DeiT [38] adopts advanced training and augmentation strategies and achieves excellent performance when training on ImageNet-1K only. Recently, several works further improve the performance of the visual Transformer from multiple perspectives. For instance, PVT [42] uses a progressive shrinking pyramid to reduce the computations over large feature maps. T2T [46] progressively tokenizes the image to model its local structure information. TNT [13] constructs another Transformer within the outer-level Transformer to model the local patch. CPVT [7] proposes a conditional positional encoding to effectively encode the spatial locations of patches. Visual Longformer [47] adopts global tokens to boost efficiency.
PiT [16] investigates spatial dimension conversion and integrates pooling layers between self-attention blocks. Swin-Transformer [28] adopts a hierarchical architecture of high flexibility to model the image at various scales. Twins [6] utilizes a hierarchical structure consisting of a locally-grouped self-attention and a global sub-sampled attention. CaiT [40] builds and optimizes deeper transformer networks for image classification. Multi-scale vision Transformer [10] utilizes the Transformer for video recognition. Multi-view vision Transformer [4] achieves excellent performance in 3D object recognition.

MLP-based vision models. MLP-Mixer [36] proposes a conceptually and technically simple architecture based solely on MLP layers. To model the communications between spatial locations, it proposes a token-mixing MLP. Although MLP-Mixer has achieved promising results when training on the huge-scale dataset JFT-300M, it is not as good as its visual Transformer counterparts when training on medium-scale datasets such as ImageNet-1K and ImageNet-21K. FF [30] adopts a similar architecture but inherits the positional embedding from ViT. Res-MLP [37] also designs a pure-MLP architecture.
It proposes an affine transform layer which facilitates stacking a large number of MLP blocks. Using a deeper architecture than MLP-mixer, Res-MLP achieves higher accuracy than MLP-mixer and a comparable recognition accuracy to ViT. gMLP [27] designs a gating operation to enhance the communications between spatial locations and achieves a recognition accuracy comparable to DeiT. EA [12] replaces the self-attention module with an external attention based on external memories learned from the training data; it is implemented as a cascade of two linear layers. CCS-MLP [45] rethinks the design of the token-mixing MLP and proposes a channel-specific circulant token-mixing MLP. More advanced works [25, 3, 18, 32] adopt hierarchical pyramids to enhance the representing power.

3. Method

3.1. Preliminary
Layer Normalization (LN) [1]. Given a c-dimensional vector x = [x_1, ..., x_c], layer normalization computes the mean µ = (1/c) Σ_{i=1}^{c} x_i and the standard deviation σ = sqrt((1/c) Σ_{i=1}^{c} (x_i − µ)^2). It normalizes each entry in x by x̄_i = (x_i − µ)/σ, ∀ i ∈ [1, c].

Gaussian Error Linear Units (GELU) is defined as GELU(x) = x Φ(x), where Φ(x) is the standard Gaussian cumulative distribution function, defined as Φ(x) = (1/2)[1 + erf(x/√2)].

MLP-Mixer [36] stacks N blocks. Each block consists of two types of MLP layers: a channel-mixing MLP and a token-mixing MLP. We denote a patch feature by p_i ∈ R^c and an image with n patch features by P = [p_1, ..., p_n]. The channel-mixing MLP projects P along the channel dimension:

    P̂ = P + W_2 GELU(W_1 LN(P)).                                  (1)

The token-mixing MLP projects the channel-mixed patch features P̂ along the spatial dimension:

    P̄ = P̂ + GELU(LN(P̂) W_3) W_4,                                  (2)

where W_3 ∈ R^{n×n̄}, W_4 ∈ R^{n̄×n}, and n̄ denotes the hidden width of the token-mixing MLP.
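For concreteness, the following is a minimal PyTorch sketch of one Mixer block following Eqs. (1) and (2), with the channel-mixing MLP applied before the token-mixing MLP in the order the equations are written above. It assumes a batch-first (batch, tokens, channels) layout, and the class and argument names are illustrative, not taken from any released implementation.

    import torch
    import torch.nn as nn

    class MixerBlock(nn.Module):
        """Sketch of one MLP-Mixer block as described by Eqs. (1) and (2)."""
        def __init__(self, n_tokens, dim, channel_hidden, token_hidden):
            super().__init__()
            # channel-mixing MLP, Eq. (1): acts on the channel dimension of each token
            self.norm1 = nn.LayerNorm(dim)
            self.channel_mlp = nn.Sequential(
                nn.Linear(dim, channel_hidden),     # W_1
                nn.GELU(),
                nn.Linear(channel_hidden, dim),     # W_2
            )
            # token-mixing MLP, Eq. (2): acts across the n_tokens spatial positions
            self.norm2 = nn.LayerNorm(dim)
            self.token_mlp = nn.Sequential(
                nn.Linear(n_tokens, token_hidden),  # W_3
                nn.GELU(),
                nn.Linear(token_hidden, n_tokens),  # W_4
            )

        def forward(self, x):                            # x: (batch, n_tokens, dim)
            x = x + self.channel_mlp(self.norm1(x))              # Eq. (1)
            y = self.token_mlp(self.norm2(x).transpose(1, 2))    # mix along the token axis
            return x + y.transpose(1, 2)                         # Eq. (2)

    x = torch.randn(2, 196, 768)          # e.g., a 14x14 patch grid with 768 channels
    out = MixerBlock(n_tokens=196, dim=768, channel_hidden=3072, token_hidden=384)(x)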
3.2. Spatial-Shift MLP Architecture

As shown in Figure 1, our spatial-shift MLP backbone consists of a patch-wise fully-connected layer, N S²-MLP blocks, and a fully-connected layer for classification. The proposed spatial-shift operation is closely related to Shift [43], 4-connected Shift [2], and TSM [26]. Our spatial-shift operation can be regarded as a special version of 4-connected Shift without the origin element information. Different from the 4-connected shift residual block [2], which has an fc-shift-fc structure, our S²-MLP block, as visualized in Figure 1, takes another two fully-connected layers only for mixing channels after an fc-shift-fc structure. Besides, the 4-connected shift residual network exploits convolutions in its early layers, whereas ours adopts a pure-MLP structure.

Patch-wise fully-connected layer. We denote an image by I ∈ R^{W×H×3}. It is uniformly split into w × h patches, P = {y_i}_{i=1}^{wh}, where y_i ∈ R^{p×p×3}, w = W/p, and h = H/p. For each patch y_i, we unfold it into a vector p_i ∈ R^{3p^2} and project it into an embedding vector e_i through a fully-connected layer followed by a layer normalization:

    e_i = LN(W_0 p_i + b_0),                                       (3)

where W_0 ∈ R^{c×3p^2} and b_0 ∈ R^c are the parameters of the fully-connected layer and LN(·) is the layer normalization.
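As an illustration of Eq. (3), a small PyTorch sketch of the patch-wise fully-connected layer might look as follows; the class name and the unfolding strategy are assumptions made for illustration, not the authors' released code.

    import torch
    import torch.nn as nn

    class PatchEmbedding(nn.Module):
        """Sketch of the patch-wise fully-connected layer of Eq. (3)."""
        def __init__(self, patch_size, dim):
            super().__init__()
            self.p = patch_size
            self.proj = nn.Linear(3 * patch_size * patch_size, dim)  # W_0, b_0
            self.norm = nn.LayerNorm(dim)                            # LN(.)

        def forward(self, img):                     # img: (batch, 3, H, W)
            b = img.shape[0]
            p = self.p
            # split the image into non-overlapping p x p patches ...
            patches = img.unfold(2, p, p).unfold(3, p, p)   # (b, 3, H/p, W/p, p, p)
            # ... and unfold each patch into a 3*p*p vector
            patches = patches.permute(0, 2, 3, 1, 4, 5).reshape(b, -1, 3 * p * p)
            return self.norm(self.proj(patches))    # (b, w*h, dim) patch embeddings e_i

    embed = PatchEmbedding(patch_size=16, dim=768)
    e = embed(torch.randn(1, 3, 224, 224))          # -> (1, 196, 768)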
S²-MLP block. Our architecture stacks N S²-MLP blocks of the same size and structure. Each spatial-shift block contains four fully-connected layers, two layer-normalization layers, two GELU layers, two skip-connections, and the proposed spatial-shift module. It is worth noting that all fully-connected layers used in our S²-MLP only serve to mix the channels. We do not use the token-mixing MLP of MLP-mixer.
Since the fully-connected layer is well known, and we have already introduced layer normalization and GELU above, we only focus on the proposed spatial-shift module here. We denote the feature map at the input of our spatial-shift module by r ∈ R^{w×h×c}, where w denotes the width, h denotes the height, and c is the number of channels. The spatial-shift operation can be decomposed into two steps: 1) split the channels into several groups, and 2) shift each group of channels in a different direction.

Group. We uniformly split r along the channel dimension and obtain g thinner tensors {r^τ}_{τ=1}^{g}, where r^τ ∈ R^{w×h×c/g}. It is worth noting that the number of groups, g, depends on the design of the shifting directions in the second step. For instance, by default, we only shift along four directions, and thus g is set to 4 in this configuration.

Spatial-shift operation. For the first group of channels, r^1, we shift it along the width dimension by +1. In parallel, we shift the second group of channels, r^2, along the width dimension by −1. Similarly, we shift r^3 along the height dimension by +1 and r^4 along the height dimension by −1.
We clarify the formulation of the spatial-shift operation in Eq. (4) and demonstrate the pseudocode in Algorithm 1:

    r^1[1:w, :, :] ← r^1[0:w−1, :, :],
    r^2[0:w−1, :, :] ← r^2[1:w, :, :],
    r^3[:, 1:h, :] ← r^3[:, 0:h−1, :],
    r^4[:, 0:h−1, :] ← r^4[:, 1:h, :].                             (4)

Algorithm 1 Pseudocode of our spatial-shift operation.

    def spatial_shift(x):
        # x: feature map of size (w, h, c)
        w, h, c = x.size()
        x[1:, :, :c//4] = x[:w-1, :, :c//4]                  # group 1: shift along width by +1
        x[:w-1, :, c//4:c//2] = x[1:, :, c//4:c//2]          # group 2: shift along width by -1
        x[:, 1:, c//2:3*c//4] = x[:, :h-1, c//2:3*c//4]      # group 3: shift along height by +1
        x[:, :h-1, 3*c//4:] = x[:, 1:, 3*c//4:]              # group 4: shift along height by -1
        return x                                             # border rows/columns keep their original values

After spatially shifting, each patch absorbs the visual content from its adjoining patches. The spatial-shift operation is parameter-free and makes the communication between different spatial locations feasible. The above-mentioned spatial-shift manner is one of the most straightforward ways of shifting.
We also evaluate other manners. Surprisingly, the above simple manner has already achieved excellent performance. Using the spatial-shift operation, we no longer need a token-mixing MLP as in MLP-mixer. We only need channel-mixing MLPs to project the patch-wise features along the channel dimension. Note that the spatial-shift operation in a single block can only gain the visual content from adjacent patches and cannot access the visual content of all patches in the image. But as we stack N S²-MLP blocks, the global visual content is gradually diffused to every patch.
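Putting the pieces together, the sketch below shows one plausible reading of the S²-MLP block in Figure 1: two pre-normalized residual branches, with the spatial shift inserted after the GELU of the first branch. The exact placement of the normalization layers and of the shift, and the hidden widths, are assumptions based on Figure 1 and may differ from the authors' implementation; all linear layers here mix channels only, as stated above.

    import torch
    import torch.nn as nn

    def spatial_shift(x):
        # batched variant of Algorithm 1; x: (batch, w, h, c)
        # a clone is used so every read comes from the unshifted tensor
        b, w, h, c = x.size()
        out = x.clone()
        out[:, 1:, :, :c//4] = x[:, :w-1, :, :c//4]
        out[:, :w-1, :, c//4:c//2] = x[:, 1:, :, c//4:c//2]
        out[:, :, 1:, c//2:3*c//4] = x[:, :, :h-1, c//2:3*c//4]
        out[:, :, :h-1, 3*c//4:] = x[:, :, 1:, 3*c//4:]
        return out

    class S2MLPBlock(nn.Module):
        """One plausible reading of the S^2-MLP block of Figure 1 (an assumption, not the official code):
        norm -> fc1 -> GELU -> spatial shift -> fc2 -> skip, then norm -> fc3 -> GELU -> fc4 -> skip."""
        def __init__(self, dim, hidden_dim):
            super().__init__()
            self.norm1 = nn.LayerNorm(dim)
            self.fc1 = nn.Linear(dim, dim)
            self.fc2 = nn.Linear(dim, dim)
            self.norm2 = nn.LayerNorm(dim)
            self.fc3 = nn.Linear(dim, hidden_dim)
            self.fc4 = nn.Linear(hidden_dim, dim)
            self.act = nn.GELU()

        def forward(self, x):                    # x: patch embeddings on the grid, (batch, w, h, c)
            # spatial-shift branch: the only place where patches exchange information
            y = spatial_shift(self.act(self.fc1(self.norm1(x))))
            x = x + self.fc2(y)
            # channel-mixing branch, identical in spirit to Eq. (1)
            x = x + self.fc4(self.act(self.fc3(self.norm2(x))))
            return x

    x = torch.randn(2, 14, 14, 384)              # 14x14 patch grid, 384 channels (illustrative sizes)
    out = S2MLPBlock(dim=384, hidden_dim=4 * 384)(x)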
3.3. Relations with depthwise convolution

Depthwise convolution. Given a feature map defined as a tensor T ∈ R^{w×h×c}, depthwise convolution [5, 19, 20] utilizes a two-dimensional convolution kernel K_i separately on each two-dimensional slice of the tensor, T[:, :, i] ∈ R^{w×h}, where i ∈ [1, c]. Depthwise convolution has a cheap computational cost and thus is widely used in efficient neural networks for fast inference.

Relations. The proposed spatial shift is inspired by the temporal shift proposed in TSM [26]. It was originally utilized for modeling the temporal relations between adjacent frames by shifting channels along the temporal dimension. In this work, we extend it to the two-dimensional spatial scenario. In fact, the spatial-shift operation is equal to a depthwise convolution with fixed and group-specific kernel weights. Let us denote a set of depthwise convolution kernels by K = {K_1, ..., K_c}. If we set

    K_i = \begin{bmatrix} 0 & 0 & 0 \\ 1 & 0 & 0 \\ 0 & 0 & 0 \end{bmatrix}, ∀ i ∈ (0, c/4],

    K_j = \begin{bmatrix} 0 & 0 & 0 \\ 0 & 0 & 1 \\ 0 & 0 & 0 \end{bmatrix}, ∀ j ∈ (c/4, c/2],

    K_k = \begin{bmatrix} 0 & 1 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{bmatrix}, ∀ k ∈ (c/2, 3c/4],

    K_l = \begin{bmatrix} 0 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 1 & 0 \end{bmatrix}, ∀ l ∈ (3c/4, c],
then the depthwise convolution based on the group of kernels K is equivalent to our spatial-shift operation.

That is, our spatial-shift operation is a variant of depthwise convolution with the fixed weights defined above. Meanwhile, the spatial-shift operation shares kernel weights within each group of channels. As we mentioned in the introduction, the token-mixing MLP in MLP-mixer is a depthwise convolution with a global receptive field and a spatial-specific configuration. Meanwhile, compared with our spatial shift and the vanilla depthwise convolution, the weights for summation in token-mixing are shared across channels for a specific spatial location. In contrast, the vanilla depthwise convolution learns different convolution kernels for different channels, and our spatial-shift operation shares the weights within each group and adopts different weights for different groups. In other words, both our spatial-shift operation and the token-mixing MLP in MLP-mixer are variants of depthwise convolution.
We summarize their relations and differences in Table 1.

        weights   reception field   spatial     channel
TM      learned   global            specific    agnostic
S2      fixed     local             agnostic    group-specific
DC      learned   local             agnostic    specific

Table 1. Relations among token-mixing (TM), spatial-shift (S2) and depthwise convolution (DC).

3.4. Complexity Analysis

Patch-wise fully-connected layer (PFL) projects each cropped patch, y ∈ R^{p×p×3}, into a c-dimensional vector. The weights of PFL satisfy W_0 ∈ R^{c×3p^2} and b_0 ∈ R^c. Thus, the number of parameters in PFL is

\[
\mathrm{Params}_{\mathrm{PFL}} = (3p^2 + 1)c. \tag{5}
\]

The total number of patches is M = w × h = (W/p) × (H/p), where W is the width and H is the height of the input image. In this case, the number of floating-point operations (FLOPs) in PFL is

\[
\mathrm{FLOPs}_{\mathrm{PFL}} = 3Mcp^2. \tag{6}
\]

It is worth noting that, following previous works [38, 13], we only count the multiplication operations between float numbers when computing FLOPs.
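As a small illustration of the patch-wise fully-connected layer (a sketch with random stand-in weights, not the trained model), the snippet below crops a 224 × 224 image into 16 × 16 patches and projects each one to a c-dimensional vector with a shared W_0 and b_0:

```python
import numpy as np

p, c = 16, 768
img = np.random.rand(224, 224, 3).astype(np.float32)
h, w = img.shape[0] // p, img.shape[1] // p          # 14 x 14 = 196 patches

# Random stand-ins for the learned PFL weights W_0 (c x 3p^2) and b_0 (c).
W0 = np.random.randn(c, 3 * p * p).astype(np.float32)
b0 = np.zeros(c, dtype=np.float32)

# Rearrange the image into M = h*w flattened patches of size 3p^2 each,
# then apply the shared linear projection to every patch.
patches = img.reshape(h, p, w, p, 3).transpose(0, 2, 1, 3, 4).reshape(h * w, 3 * p * p)
tokens = patches @ W0.T + b0
print(tokens.shape)   # (196, 768): one c-dimensional vector per patch
```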
S2-MLP blocks. Our S2-MLP architecture consists of N S2-MLP blocks. The input and output of all blocks are of the same size. We denote the input of the i-th S2-MLP block by a tensor r_in^(i) and the output by r_out^(i). Then they satisfy

\[
r_{\mathrm{in}}^{(i)},\, r_{\mathrm{out}}^{(i)} \in \mathbb{R}^{w \times h \times c}, \quad \forall i \in [1, N]. \tag{7}
\]

Meanwhile, all S2-MLP blocks perform the same operation and share the same configuration. Consequently, all blocks take the same computational cost and the same number of parameters. To obtain the total number of parameters and FLOPs of the proposed S2-MLP architecture, we only need to count them for each basic block.

Only fully-connected layers contain parameters. As shown in Figure 1, S2-MLP contains four fully-connected layers. We denote the weights of the first two fully-connected layers as {W_1, b_1} and {W_2, b_2}, where W_1 ∈ R^{c×c} and W_2 ∈ R^{c×c}. These two fully-connected layers keep the feature dimension unchanged. We denote the weights of the third fully-connected layer as {W_3, b_3}, where W_3 ∈ R^{c̄×c} and b_3 ∈ R^{c̄}. c̄ denotes the hidden size. Following ViT and MLP-mixer, we set c̄ = rc, where r is the expansion ratio, set to 4 by default. In this step, the feature dimension of each patch increases from c to c̄. In contrast, the fourth fully-connected layer reduces the dimension of each patch from c̄ back to c. Its weights are W_4 ∈ R^{c×c̄} and b_4 ∈ R^c. The number of parameters per S2-MLP block, i.e., the total number of entries in {W_i, b_i}_{i=1}^{4}, is

\[
\mathrm{Params}_{\mathrm{S^2}} = c(2c + 2\bar{c}) + 3c + \bar{c} = c^2(2r + 2) + c(3 + r). \tag{8}
\]

Meanwhile, the total FLOPs of the fully-connected layers in each S2-MLP block is

\[
\mathrm{FLOPs}_{\mathrm{S^2}} = M(2c^2 + 2c\bar{c}) = Mc^2(2r + 2). \tag{9}
\]

Fully-connected classification layer (FCL) takes as input the c-dimensional vector obtained by average-pooling the M patch features in the output of the last S2-MLP block. It outputs a k-dimensional score vector, where k is the number of classes. Hence, the number of parameters in FCL is

\[
\mathrm{Params}_{\mathrm{FCL}} = (c + 1)k. \tag{10}
\]

Meanwhile, the FLOPs of FCL is

\[
\mathrm{FLOPs}_{\mathrm{FCL}} = Mck. \tag{11}
\]

By adding up the number of parameters in the patch-wise fully-connected layer, the N S2-MLP blocks, and the fully-connected classification layer, we obtain the total number of parameters of the whole architecture:

\[
\mathrm{Params} = \mathrm{Params}_{\mathrm{PFL}} + N \cdot \mathrm{Params}_{\mathrm{S^2}} + \mathrm{Params}_{\mathrm{FCL}}.
\]

And the total number of FLOPs is

\[
\mathrm{FLOPs} = \mathrm{FLOPs}_{\mathrm{PFL}} + N \cdot \mathrm{FLOPs}_{\mathrm{S^2}} + \mathrm{FLOPs}_{\mathrm{FCL}}.
\]
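As a quick sanity check of these formulas (an illustrative script, not the authors' counting code), plugging the wide configuration of Table 2 into Eqs. (5), (6), (8) and (9) reproduces the reported 71M parameters and roughly 14B FLOPs; following the Table 2 convention, the classification layer is excluded:

```python
# Worked example for the wide setting: p = 16, c = 768, r = 4, N = 12, 224x224 input.
p, c, r, N, W, H = 16, 768, 4, 12, 224, 224
M = (W // p) * (H // p)                            # 14 * 14 = 196 patches

params_pfl = (3 * p**2 + 1) * c                    # Eq. (5)
flops_pfl = 3 * M * c * p**2                       # Eq. (6)
params_block = c**2 * (2 * r + 2) + c * (3 + r)    # Eq. (8)
flops_block = M * c**2 * (2 * r + 2)               # Eq. (9)

# The fully-connected classification layer is excluded, as in Table 2.
params_total = params_pfl + N * params_block
flops_total = flops_pfl + N * flops_block

print(f"params: {params_total / 1e6:.1f}M")        # ~71.4M, matching the 71M in Table 2
print(f"FLOPs:  {flops_total / 1e9:.1f}B")         # ~14.0B, matching the 14B in Table 2
```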
3.5. Implementation

We set the cropped patch size (p × p) to 16 × 16 and resize the input image to 224 × 224. Thus, the number of patches is M = (224/16)^2 = 196. We set the expansion ratio r = 4. We attempt two types of settings: 1) wide settings and 2) deep settings. The wide settings follow the base model of MLP-mixer [36]: the number of S2-MLP blocks (N) is 12 and the hidden size c is 768. Note that MLP-mixer also implements the large model and the huge model. Nevertheless, our limited computing resources cannot afford the expensive cost of investigating the large and huge models on the ImageNet-1K dataset. The deep settings follow ResMLP-36 [37]: the number of S2-MLP blocks (N) is 36 and the hidden size c is 384. We summarize the hyper-parameters, the number of parameters, and the FLOPs of the two settings in Table 2.

Settings   M     N    c     r   p    Para.   FLOPs
wide       196   12   768   4   16   71M     14B
deep       196   36   384   4   16   51M     10.5B

Table 2. The hyper-parameters, the number of parameters and FLOPs. Following MLP-Mixer [36], the number of parameters excludes the weights of the fully-connected layer for classification.

4. Experiments
Datasets. We evaluate the performance of S2-MLP on the widely used benchmark ImageNet-1K [8]. It consists of 1.2 million training images from one thousand categories and 50 thousand validation images, with 50 images per category. Due to limited computing resources, the ablation study is conducted only on its subset, ImageNet100, which contains the images of 100 randomly selected categories. ImageNet100 contains 0.1 million training images and 5 thousand validation images.

Training details. We adopt the training strategy provided by DeiT [38]. To be specific, we train our model using AdamW [29] with weight decay 0.05 and a batch size of 1024. We use a linear warmup and cosine decay: the initial learning rate is 1e-3 and gradually drops to 1e-5 over 300 epochs. We also use label smoothing [34], DropPath [24], and repeated augmentation [17]. All training is conducted on a Linux server equipped with four NVIDIA Tesla V100 GPU cards. The whole training process of the proposed S2-MLP on the ImageNet-1K dataset takes around 4.5 days. S2-MLP is implemented on the PaddlePaddle platform developed by Baidu.
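To illustrate this schedule, the sketch below computes a per-epoch learning rate with a linear warmup followed by cosine decay from 1e-3 to 1e-5 over 300 epochs; the 5-epoch warmup length is an assumption of this sketch, since the text only states that a linear warmup is used.

```python
import math

def lr_at_epoch(epoch, total_epochs=300, base_lr=1e-3, min_lr=1e-5, warmup_epochs=5):
    """Linear warmup followed by cosine decay, as described in the training details.

    The 5-epoch warmup length is an assumption; the paper only states that a
    linear warmup and cosine decay are used.
    """
    if epoch < warmup_epochs:
        return base_lr * (epoch + 1) / warmup_epochs
    progress = (epoch - warmup_epochs) / max(1, total_epochs - warmup_epochs)
    return min_lr + 0.5 * (base_lr - min_lr) * (1.0 + math.cos(math.pi * progress))

for e in (0, 5, 150, 299):
    print(e, f"{lr_at_epoch(e):.2e}")   # ramps up to 1e-3, then decays towards 1e-5
```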
4.1. Main results

The main results are summarized in Table 3. As shown in the table, compared with ViT [9], Mixer-B/16 is not competitive in terms of accuracy. In contrast, the proposed S2-MLP obtains an accuracy comparable to ViT. Meanwhile, Mixer-B/16 and our S2-MLP require considerably fewer parameters and FLOPs, making them more attractive than ViT when efficiency is important. We also note that, by introducing hand-crafted designs, follow-up Transformer-based works such as PVT-Large [42], TNT-B [13], T2T-ViTt-24 [46], CaiT [40], Swin-B [28], and Nest-B [48] have considerably improved over ViT. MLP-based models, including the proposed S2-MLP, cannot achieve as high recognition accuracy as the state-of-the-art Transformer-based vision models such as CaiT, Swin-B and Nest-B.

Model                  Resolution   Top-1 (%)   Top-5 (%)   Params (M)   FLOPs (B)
CNN-based
ResNet50 [14]          224 × 224    76.2        92.9        25.6         4.1
ResNet152 [14]         224 × 224    78.3        94.1        60.2         11.5
RegNetY-8GF [31]       224 × 224    79.0        −           39.2         8.0
RegNetY-16GF [31]      224 × 224    80.4        −           83.6         15.9
EfficientNet-B3 [35]   300 × 300    81.6        95.7        12           1.8
EfficientNet-B5 [35]   456 × 456    84.0        96.8        30           9.9
Transformer-based
ViT-B/16 [9]           384 × 384    77.9        −           86.4         55.5
ViT-B/16* [9, 36]      224 × 224    79.7        −           86.4         17.6
DeiT-B/16 [38]         224 × 224    81.8        −           86.4         17.6
PiT-B/16 [16]          224 × 224    82.0        −           73.8         12.5
PVT-Large [42]         224 × 224    82.3        −           61.4         9.8
CPVT-B [7]             224 × 224    82.3        −           88           17.6
TNT-B [13]             224 × 224    82.8        96.3        65.6         14.1
T2T-ViTt-24 [46]       224 × 224    82.6        −           65.1         15.0
CaiT-S32 [40]          224 × 224    83.3        −           68           13.9
Swin-B [28]            224 × 224    83.3        −           88           15.4
Nest-B [48]            224 × 224    83.8        −           68           17.9
Container [11]         224 × 224    82.7        −           22.1         8.1
MLP-based (c = 768, N = 12)
Mixer-B/16 [36]        224 × 224    76.4        −           59           11.6
FF [30]                224 × 224    74.9        −           59           11.6
S2-MLP-wide (ours)     224 × 224    80.0        94.8        71           14.0
MLP-based (c = 384, N = 36)
ResMLP-36 [37]         224 × 224    79.7        −           45           8.9
S2-MLP-deep (ours)     224 × 224    80.7        95.4        51           10.5

Table 3. Results on ImageNet-1K without extra data. ViT-B/16* denotes the ViT-B/16 model in MLP-mixer [36] with extra regularization.
Meanwhile, the state-of-the-art Transformer-based vision model, Nest-B, cannot achieve a better trade-off between recognition accuracy and efficiency than the state-of-the-art CNN model, EfficientNet-B5 [35].

After that, we compare our S2-MLP architecture with its recently proposed MLP counterparts, including MLP-mixer, FF [30], and ResMLP-36 [37]. Among them, MLP-mixer, FF, and ResMLP-36 adopt a similar structure. A difference between ResMLP-36 and MLP-mixer is that ResMLP-36 develops an affine transformation layer to replace layer normalization for more stable training. Meanwhile, ResMLP-36 stacks more MLP layers than MLP-mixer but uses a smaller hidden size. Specifically, ResMLP-36 adopts 36 MLP layers with a hidden size of 384, whereas MLP-mixer uses 12 MLP layers with a hidden size of 768. Through this trade-off between the number of MLP layers and the hidden size, ResMLP-36 achieves a higher accuracy.

Our wide model, S2-MLP-wide, adopts the wide settings in Table 2. Specifically, same as MLP-mixer and FF, S2-MLP-wide adopts 12 blocks with hidden size 768. As shown in Table 3, compared with MLP-mixer and FF, the proposed S2-MLP-wide achieves a considerably higher recognition accuracy. Specifically, MLP-mixer only achieves a 76.4% top-1 accuracy and FF only achieves 74.9%. In contrast, the top-1 accuracy of the proposed S2-MLP-wide is 80.0%. In parallel, our deep model, S2-MLP-deep, adopts the deep settings in Table 2. Specifically, same as ResMLP, S2-MLP-deep adopts 36 blocks with hidden size 384. Meanwhile, we also use the affine transformation proposed in ResMLP to replace layer normalization for a fair comparison. As shown in Table 3, compared with ResMLP-36, our S2-MLP-deep achieves a higher recognition accuracy.

Another drawback of MLP-mixer and ResMLP is that the size of the weight matrix in the token-mixing MLP, W ∈ R^{N×N} (N = wh), depends on the feature map size. That is, the structure of MLP-mixer as well as ResMLP varies as the input scale changes. Thus, MLP-mixer and ResMLP trained on a 14 × 14 feature map generated from a 224 × 224 image cannot process the 28 × 28 feature map from a 448 × 448 image. In contrast, the architecture of our S2-MLP is invariant to the input scale.
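The difference can be seen directly from the shapes involved; the following toy check (hypothetical shapes, not the actual models) shows that a token-mixing weight matrix built for 196 patches cannot be applied to the 784 patches of a 448 × 448 input, while a slicing-based shift is agnostic to the spatial size:

```python
import numpy as np

c = 768
n_train = 14 * 14          # patches of a 224x224 image with p = 16
n_test = 28 * 28           # patches of a 448x448 image

w_token = np.zeros((n_train, n_train))    # token-mixing weights, shape tied to N

x_small = np.random.rand(c, n_train)
x_large = np.random.rand(c, n_test)

print((x_small @ w_token).shape)          # (768, 196): fine at the training scale
try:
    x_large @ w_token                     # (768, 784) x (196, 196): shape mismatch
except ValueError as err:
    print("token-mixing fails at the new scale:", err)

def shift_right(x):                       # a spatial shift only slices along H and W
    out = x.copy()
    out[:, 1:] = x[:, :-1]
    return out

print(shift_right(np.random.rand(14, 14, c)).shape)   # (14, 14, 768)
print(shift_right(np.random.rand(28, 28, c)).shape)   # (28, 28, 768): size-agnostic
```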
Transfer learning. We evaluate the performance of our S2-MLP in transfer learning. We load the model pre-trained on the ImageNet-1K dataset and fine-tune it on the target datasets, including CIFAR10/100 [22] and Stanford Car [21]. Among them, CIFAR10 and CIFAR100 contain 50,000 training images. In contrast, Stanford Car only contains 8,144 training images, which makes it a good testbed for over-fitting. As shown in Table 4, using 224 × 224 images, our S2-MLP-deep achieves better performance than ViT-B/16 and ViT-L/16 using 384 × 384 images. Meanwhile, our S2-MLP-deep achieves comparable performance to DeiT-B [38] with considerably fewer FLOPs. We also compare with ResMLP-S12/24 [37] and ResMLP-36 fine-tuned with the same settings as our S2-MLP-deep. As shown in the table, on the CIFAR100 and Car datasets, our S2-MLP-deep considerably outperforms ResMLP-36. In particular, on the Stanford Car dataset, our S2-MLP-deep achieves a 93.1% top-1 accuracy whereas ResMLP-36 only obtains 91.0%.

method             scale   C10    C100   Car
ViT-B/16 [9]       384     98.1   87.1   −
ViT-L/16 [9]       384     97.9   86.4   −
DeiT-B [38]        224     99.1   90.8   92.1
ResMLP-S12 [37]    224     98.1   87.0   84.6
ResMLP-S24 [37]    224     98.7   89.5   89.5
ResMLP-36*         224     98.7   88.5   91.0
S2-MLP-deep        224     98.8   89.4   93.1

Table 4. The performance of transfer learning on CIFAR10 (C10), CIFAR100 (C100) and Car. ResMLP-36* denotes the performance of fine-tuned ResMLP-36 with the same settings as S2-MLP-deep.
The considerably worse performance of ResMLP-36 on the small-scale Car dataset reveals that the token-mixing MLP is more prone to over-fitting than our spatial-shift operation.

Depthwise convolution. As mentioned in Section 3.3, our spatial-shift operation is a variant of the 3 × 3 depthwise convolution with a fixed kernel. We believe that replacing the spatial-shift operation with a depthwise convolution can still achieve good performance, but the 3 × 3 depthwise convolution brings more parameters and FLOPs compared with our parameter-free and computation-free spatial-shift operation. To validate this, we conduct an experiment in which the spatial-shift operation is replaced with a 3 × 3 depthwise convolution. As shown in Table 5, interestingly, the proposed spatial-shift operation, which is equivalent to a 3 × 3 depthwise convolution with a pre-defined kernel, achieves accuracy comparable to the 3 × 3 depthwise convolution with weights learned from the data.

configuration          Top-1 (%)
depthwise conv         80.5
spatial shift (ours)   80.7

Table 5. The performance of the 3 × 3 depthwise convolution.
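For reference, the replacement can be written as a standard learnable depthwise convolution; the PyTorch sketch below is an assumption for illustration (the paper's implementation uses PaddlePaddle), and it shows the extra 9c weights such a layer introduces per block:

```python
import torch
import torch.nn as nn

c = 384  # hidden size of the deep setting

# Learnable 3x3 depthwise convolution: one 3x3 kernel per channel
# (groups = in_channels); padding=1 keeps the spatial size unchanged.
depthwise = nn.Conv2d(c, c, kernel_size=3, padding=1, groups=c, bias=False)

x = torch.randn(1, c, 14, 14)       # (batch, channels, H, W) feature map
y = depthwise(x)
print(y.shape)                      # torch.Size([1, 384, 14, 14])
print(sum(p.numel() for p in depthwise.parameters()))  # 9 * c = 3456 extra weights
```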
4.2. Ablation study

Due to limited computing resources, the ablation study is conducted on ImageNet100, which is a subset of ImageNet-1K containing the images of 100 randomly selected categories. Meanwhile, due to limited space, the ablation study in this section only includes the wide settings; the study with the deep settings is provided in the supplementary material. We only change one hyper-parameter at a time and keep the others the same as the wide settings in Table 2.

Expansion ratio. Recall that the weights of the third and the fourth fully-connected layers are W_3 ∈ R^{rc×c} and W_4 ∈ R^{c×rc}, so r determines the modeling capability of these two fully-connected layers in each S2-MLP block. Table 6 shows the influence of r. As shown in the table, the top-1 accuracy increases from 86.1% to 87.0% as r increases from 1 to 3, and the number of parameters increases from 29M to 57M accordingly. But the accuracy saturates and even turns worse when r surpasses 3. This might be because ImageNet100 is too small and our model suffers from over-fitting when r is large.

r   Top-1 (%)   Top-5 (%)   Para. (M)   FLOPs (B)
1   86.1        96.7        29          5.7
2   86.4        96.9        43          8.4
3   87.0        96.8        57          11
4   87.1        97.1        71          14
5   86.6        96.8        86          17

Table 6. The influence of the expansion ratio, r.
<div style="position:absolute;top:8068;left:523"><nobr>Table 7. The influence of the hidden size, c.</nobr></div> </span></font> <font size=3 face="Times"><span style="font-size:12px;font-family:Times"> <div style="position:absolute;top:8102;left:463"><nobr>Hidden size. The hidden size (c) in MLPs of S<font style="font-size:8px">2</font>-MLP</nobr></div> <div style="position:absolute;top:8120;left:463"><nobr>blocks also determine the modeling capability of the pro-</nobr></div> <div style="position:absolute;top:8138;left:463"><nobr>posed S<font style="font-size:8px">2</font>-MLP architecture. In Table <a href="#7">7</a>, we show the in-</nobr></div> <div style="position:absolute;top:8156;left:463"><nobr>fluence of c. As shown in the table, the top-1 recognition</nobr></div> <div style="position:absolute;top:8174;left:463"><nobr>accuracy increases from 79.7% to 87.1% as the hidden size</nobr></div> <div style="position:absolute;top:8192;left:463"><nobr>c increases from 192 and 768, and the number of parameters</nobr></div> <div style="position:absolute;top:8210;left:463"><nobr>increases from 4.3M to 71M, and FLOPs increases from</nobr></div> <div style="position:absolute;top:8228;left:463"><nobr>0.9G to 20G. Meanwhile, the recognition accuracy saturates</nobr></div> <div style="position:absolute;top:8246;left:463"><nobr>when c surpasses 768. Taking both accuracy and efficiency</nobr></div> <div style="position:absolute;top:8264;left:463"><nobr>into consideration, we set c = 768, by default.</nobr></div> <div style="position:absolute;top:8287;left:463"><nobr>Shifting directions. By default, we split 768 channels into</nobr></div> <div style="position:absolute;top:8306;left:463"><nobr>four groups and shift them along four directions as Figure <a href="#8">2</a></nobr></div> <div style="position:absolute;top:8324;left:463"><nobr>(a). We also attempt other shifting settings. 
Shifting directions. By default, we split the 768 channels into four groups and shift them along four directions, as shown in Figure 2 (a). We also attempt other shifting settings. Setting (b) splits the channels into eight groups and shifts them along eight directions. Settings (c), (d), (e), and (f) split the channels into two groups and
shift them along two directions. Settings (g), (h), (i), and (j) shift all channels along a single direction. In Table 8, we show the recognition accuracy of our S2-MLP with shifting settings (a) to (j), together with that achieved by S2-MLP without (w/o) shifting. As shown in the table, without shifting, the network performs poorly due to the lack of communication between patches. Comparing (e) with (f), we find that horizontal shifting is more useful than vertical shifting. Comparing (c) with (e)/(f), we observe that shifting in two dimensions (both horizontal and vertical) is more helpful than shifting in a single dimension (horizontal or vertical). Moreover, comparing (a) and (b), we conclude that shifting along four directions is enough. Overall, the default shifting configuration (a), the most natural way of shifting, achieves excellent performance.

Figure 2. Ten different shifting settings. (a) is the default option, which shifts channels along four directions. (b) shifts channels along eight directions. (c), (d), (e), and (f) shift channels in two directions. (g), (h), (i), and (j) shift channels along a single direction.

Settings    (a)    (b)    (c)    (d)    (e)    (f)    (g)    (h)    (i)    (j)    w/o
Top-1 (%)  87.1   87.0   85.0   85.1   79.5   80.5   77.7   77.5   78.3   78.4   56.7
Top-5 (%)  97.1   97.1   96.1   96.2   93.1   93.7   92.7   92.5   93.4   93.4   81.0
Table 8. The influence of shifting directions.
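To make the default setting (a) concrete, here is a minimal sketch of the four-direction spatial shift. It is our own paraphrase of the operation described above, not the released implementation; the function name spatial_shift is ours, and leaving border patches unchanged where a shifted neighbor is missing is one plausible convention rather than a detail confirmed by this section.

```python
import torch


def spatial_shift(x: torch.Tensor) -> torch.Tensor:
    """Four-direction spatial shift, as in setting (a): split the channels into
    four groups and shift each group by one patch along one direction.

    x: (batch, height, width, channels) grid of patch embeddings, with the
    channel dimension divisible by 4. The operation is parameter-free.
    """
    b, h, w, c = x.shape
    g = c // 4
    out = x.clone()  # border rows/columns keep their original values
    out[:, :, 1:, :g] = x[:, :, :w - 1, :g]                     # group 1: shift right
    out[:, :, :w - 1, g:2 * g] = x[:, :, 1:, g:2 * g]           # group 2: shift left
    out[:, 1:, :, 2 * g:3 * g] = x[:, :h - 1, :, 2 * g:3 * g]   # group 3: shift down
    out[:, :h - 1, :, 3 * g:] = x[:, 1:, :, 3 * g:]             # group 4: shift up
    return out


# Example: a 14 x 14 grid of 768-d patch embeddings (224 x 224 input, p = 16).
y = spatial_shift(torch.randn(2, 14, 14, 768))
print(y.shape)  # torch.Size([2, 14, 14, 768])
```

Settings (b) to (j) in Table 8 change only how the channels are grouped and which directions each group uses, not the shift operation itself.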
Input scale. The input image is resized to W × H before being fed into the network. When the patch size p is fixed, a larger input scale generates more patches, which inevitably brings more computational cost; however, a larger scale is beneficial for modeling fine-grained details in the image and generally leads to higher recognition accuracy. Table 9 shows the influence of the input image scale. As shown in the table, when W × H increases from 112 × 112 to 384 × 384, the top-1 recognition accuracy improves from 80.6% to 88.2%; the number of parameters stays unchanged since the network itself does not change, while the FLOPs increase from 3.5B to 31B. When the input scale increases from 224 × 224 to 384 × 384, the gain in recognition accuracy is not significant, but the FLOPs more than double.

  W × H      Top-1 (%)   Top-5 (%)   Para.   FLOPs
112 × 112       80.6        94.2      71M     3.5B
224 × 224       87.1        97.1      71M     14B
384 × 384       88.2        97.6      71M     31B
Table 9. The influence of the input image scale.
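For concreteness, with a fixed patch size p, every S2-MLP block operates on

    N = (W / p) × (H / p)

patches, so with p = 16 the three input scales in Table 9 correspond to 7 × 7 = 49, 14 × 14 = 196, and 24 × 24 = 576 patches, respectively. This is why the FLOPs in Table 9 grow with the input scale while the parameter count stays fixed.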
When the input image scale is fixed, increasing the patch size reduces the number of patches. Larger patches enjoy higher efficiency but are less capable of capturing fine-grained details, so they cannot achieve as high accuracy as their smaller counterparts. As shown in Table 10, increasing p from 16 to 32 reduces the FLOPs from 14B to 3.5B, but it also causes the top-1 recognition accuracy to drop from 87.1% to 81.0%.

  p × p     Top-1 (%)   Top-5 (%)   Para.   FLOPs
 32 × 32       81.0        94.6      73M     3.5B
 16 × 16       87.1        97.1      71M     14B
Table 10. The influence of the patch size.

5. Conclusion

In this paper, we propose a spatial-shift MLP (S2-MLP) architecture. It adopts a pure MLP structure without convolution or self-attention. To achieve communication between spatial locations, we adopt a spatial-shift operation, which is simple, parameter-free, and efficient. On the ImageNet-1K dataset, it achieves considerably higher recognition accuracy than the pioneering work, ViT, with a comparable number of parameters and FLOPs. Meanwhile, it takes a much simpler architecture with fewer parameters and FLOPs than its ViT counterpart. Moreover, we discuss the relations among the spatial-shift operation, the token-mixing MLP in MLP-mixer, and the depthwise convolution, and we find that both the token-mixing MLP and the proposed spatial-shift operation are variants of the depthwise convolution.
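To illustrate the relation stated above, the following sketch (our own illustration, not code from the paper) writes the four-direction spatial shift as a depthwise convolution whose 3 × 3 kernels are fixed one-hot masks. The helper name shift_as_depthwise_conv is ours, and zero padding at the borders is an assumption made for brevity.

```python
import torch
import torch.nn.functional as F


def shift_as_depthwise_conv(x: torch.Tensor) -> torch.Tensor:
    """Express the four-direction spatial shift as a depthwise convolution.

    x: (batch, channels, height, width), with channels divisible by 4.
    Each channel is assigned a fixed 3x3 kernel containing a single 1 that
    picks one neighboring patch, so the whole operation is a frozen,
    parameter-free depthwise convolution (zero-padded at the borders).
    """
    c = x.shape[1]
    g = c // 4
    kernels = torch.zeros(c, 1, 3, 3, device=x.device, dtype=x.dtype)
    kernels[:g, 0, 1, 0] = 1.0           # take the left neighbor  -> shift right
    kernels[g:2 * g, 0, 1, 2] = 1.0      # take the right neighbor -> shift left
    kernels[2 * g:3 * g, 0, 0, 1] = 1.0  # take the upper neighbor -> shift down
    kernels[3 * g:, 0, 2, 1] = 1.0       # take the lower neighbor -> shift up
    return F.conv2d(x, kernels, padding=1, groups=c)


y = shift_as_depthwise_conv(torch.randn(2, 768, 14, 14))
print(y.shape)  # torch.Size([2, 768, 14, 14])
```

Keeping these kernels frozen recovers the parameter-free shift; letting each channel learn its own kernel would recover an ordinary depthwise convolution, which is the sense in which the two operations are variants of one another.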
We hope that these results and discussions will inspire further research toward simpler and more effective vision architectures.

References

[1] Jimmy Lei Ba, Jamie Ryan Kiros, and Geoffrey E. Hinton. Layer normalization. arXiv preprint arXiv:1607.06450, 2016.
[2] Andrew Brown, Pascal Mettes, and Marcel Worring. 4-connected shift residual networks. In Proceedings of the 2019 IEEE/CVF International Conference on Computer Vision (ICCV) Workshops, pages 1990–1997, Seoul, Korea, 2019.
[3] Shoufa Chen, Enze Xie, Chongjian Ge, Ding Liang, and Ping Luo. CycleMLP: A MLP-like architecture for dense prediction. arXiv preprint arXiv:2107.10224, 2021.
[4] Shuo Chen, Tan Yu, and Ping Li. MVT: Multi-view vision transformer for 3D object recognition. In Proceedings of the 32nd British Machine Vision Conference (BMVC), 2021.
[5] François Chollet. Xception: Deep learning with depthwise separable convolutions. In Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 1800–1807, Honolulu, HI, 2017.
[6] Xiangxiang Chu, Zhi Tian, Yuqing Wang, Bo Zhang, Haibing Ren, Xiaolin Wei, Huaxia Xia, and Chunhua Shen. Twins: Revisiting the design of spatial attention in vision transformers. arXiv preprint arXiv:2104.13840, 2021.
[7] Xiangxiang Chu, Zhi Tian, Bo Zhang, Xinlong Wang, Xiaolin Wei, Huaxia Xia, and Chunhua Shen. Conditional positional encodings for vision transformers. arXiv preprint arXiv:2102.10882, 2021.
[8] Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Fei-Fei Li. ImageNet: A large-scale hierarchical image database. In Proceedings of the 2009 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR), pages 248–255, Miami, FL, 2009.
[9] Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, Jakob Uszkoreit, and Neil Houlsby. An image is worth 16x16 words: Transformers for image recognition at scale. In Proceedings of the 9th International Conference on Learning Representations (ICLR), Virtual Event, 2021.
[10] Haoqi Fan, Bo Xiong, Karttikeya Mangalam, Yanghao Li, Zhicheng Yan, Jitendra Malik, and Christoph Feichtenhofer. Multiscale vision transformers. arXiv preprint arXiv:2104.11227, 2021.
[11] Peng Gao, Jiasen Lu, Hongsheng Li, Roozbeh Mottaghi, and Aniruddha Kembhavi. Container: Context aggregation network. arXiv preprint arXiv:2106.01401, 2021.
[12] Meng-Hao Guo, Zheng-Ning Liu, Tai-Jiang Mu, and Shi-Min Hu. Beyond self-attention: External attention using two linear layers for visual tasks. arXiv preprint arXiv:2105.02358, 2021.
[13] Kai Han, An Xiao, Enhua Wu, Jianyuan Guo, Chunjing Xu, and Yunhe Wang. Transformer in transformer. arXiv preprint arXiv:2103.00112, 2021.
[14] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 770–778, Las Vegas, NV, 2016.
[15] Dan Hendrycks and Kevin Gimpel. Gaussian error linear units (GELUs). arXiv preprint arXiv:1606.08415, 2016.
[16] Byeongho Heo, Sangdoo Yun, Dongyoon Han, Sanghyuk Chun, Junsuk Choe, and Seong Joon Oh. Rethinking spatial dimensions of vision transformers. arXiv preprint arXiv:2103.16302, 2021.
[17] Elad Hoffer, Tal Ben-Nun, Itay Hubara, Niv Giladi, Torsten Hoefler, and Daniel Soudry. Augment your batch: Improving generalization through instance repetition. In Proceedings of the 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 8126–8135, Seattle, WA, 2020.
[18] Qibin Hou, Zihang Jiang, Li Yuan, Ming-Ming Cheng, Shuicheng Yan, and Jiashi Feng. Vision permutator: A permutable MLP-like architecture for visual recognition. arXiv preprint arXiv:2106.12368, 2021.
[19] Andrew G. Howard, Menglong Zhu, Bo Chen, Dmitry Kalenichenko, Weijun Wang, Tobias Weyand, Marco Andreetto, and Hartwig Adam. MobileNets: Efficient convolutional neural networks for mobile vision applications. arXiv preprint arXiv:1704.04861, 2017.
[20] Lukasz Kaiser, Aidan N. Gomez, and François Chollet. Depthwise separable convolutions for neural machine translation. In Proceedings of the 6th International Conference on Learning Representations (ICLR), Vancouver, Canada, 2018.
[21] Jonathan Krause, Michael Stark, Jia Deng, and Li Fei-Fei. 3D object representations for fine-grained categorization. In Proceedings of the IEEE International Conference on Computer Vision Workshops, pages 554–561, 2013.
[22] Alex Krizhevsky. Learning multiple layers of features from tiny images. 2009.
[23] Alex Krizhevsky, Ilya Sutskever, and Geoffrey E. Hinton. ImageNet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems (NIPS), pages 1106–1114, Lake Tahoe, NV, 2012.
[24] Gustav Larsson, Michael Maire, and Gregory Shakhnarovich. FractalNet: Ultra-deep neural networks without residuals. In Proceedings of the 5th International Conference on Learning Representations (ICLR), Toulon, France, 2017.
[25] Dongze Lian, Zehao Yu, Xing Sun, and Shenghua Gao. AS-MLP: An axial shifted MLP architecture for vision. arXiv preprint arXiv:2107.08391, 2021.
[26] Ji Lin, Chuang Gan, and Song Han. TSM: Temporal shift module for efficient video understanding. In Proceedings of the 2019 IEEE/CVF International Conference on Computer Vision (ICCV), pages 7082–7092, Seoul, Korea, 2019.
[27] Hanxiao Liu, Zihang Dai, David R. So, and Quoc V. Le. Pay attention to MLPs. arXiv preprint arXiv:2105.08050, 2021.
[28] Ze Liu, Yutong Lin, Yue Cao, Han Hu, Yixuan Wei, Zheng Zhang, Stephen Lin, and Baining Guo. Swin transformer: Hierarchical vision transformer using shifted windows. arXiv preprint arXiv:2103.14030, 2021.
[29] Ilya Loshchilov and Frank Hutter. Decoupled weight decay regularization. In Proceedings of the 7th International Conference on Learning Representations (ICLR), New Orleans, LA, 2019.
[30] Luke Melas-Kyriazi. Do you even need attention? A stack of feed-forward layers does surprisingly well on ImageNet. arXiv preprint arXiv:2105.02723, 2021.
[31] Ilija Radosavovic, Raj Prateek Kosaraju, Ross B. Girshick, Kaiming He, and Piotr Dollár. Designing network design spaces. In Proceedings of the 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 10425–10433, Seattle, WA, 2020.
[32] Yongming Rao, Wenliang Zhao, Zheng Zhu, Jiwen Lu, and Jie Zhou. Global filter networks for image classification. arXiv preprint arXiv:2107.00645, 2021.
[33] Chen Sun, Abhinav Shrivastava, Saurabh Singh, and Abhinav Gupta. Revisiting unreasonable effectiveness of data in deep learning era. In Proceedings of the IEEE International Conference on Computer Vision (ICCV), pages 843–852, Venice, Italy, 2017.
[34] Christian Szegedy, Vincent Vanhoucke, Sergey Ioffe, Jonathon Shlens, and Zbigniew Wojna. Rethinking the inception architecture for computer vision. In Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 2818–2826, Las Vegas, NV, 2016.
[35] Mingxing Tan and Quoc V. Le. EfficientNet: Rethinking model scaling for convolutional neural networks. In Proceedings of the 36th International Conference on Machine Learning (ICML), pages 6105–6114, Long Beach, CA, 2019.
[36] Ilya Tolstikhin, Neil Houlsby, Alexander Kolesnikov, Lucas Beyer, Xiaohua Zhai, Thomas Unterthiner, Jessica Yung, Andreas Steiner, Daniel Keysers, Jakob Uszkoreit, Mario Lucic, and Alexey Dosovitskiy. MLP-Mixer: An all-MLP architecture for vision. arXiv preprint arXiv:2105.01601, 2021.
[37] Hugo Touvron, Piotr Bojanowski, Mathilde Caron, Matthieu Cord, Alaaeldin El-Nouby, Edouard Grave, Armand Joulin, Gabriel Synnaeve, Jakob Verbeek, and Hervé Jégou. ResMLP: Feedforward networks for image classification with data-efficient training. arXiv preprint arXiv:2105.03404, 2021.
[38] Hugo Touvron, Matthieu Cord, Matthijs Douze, Francisco Massa, Alexandre Sablayrolles, and Hervé Jégou. Training data-efficient image transformers & distillation through attention. arXiv preprint arXiv:2012.12877, 2020.
[39] Hugo Touvron, Matthieu Cord, Alexandre Sablayrolles, Gabriel Synnaeve, and Hervé Jégou. Going deeper with image transformers. arXiv preprint arXiv:2103.17239, 2021.
[40] Hugo Touvron, Matthieu Cord, Alexandre Sablayrolles, Gabriel Synnaeve, and Hervé Jégou. Going deeper with image transformers. arXiv preprint arXiv:2103.17239, 2021.
[41] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. Attention is all you need. In Advances in Neural Information Processing Systems (NIPS), pages 5998–6008, Long Beach, CA, 2017.
[42] Wenhai Wang, Enze Xie, Xiang Li, Deng-Ping Fan, Kaitao Song, Ding Liang, Tong Lu, Ping Luo, and Ling Shao. Pyramid vision transformer: A versatile backbone for dense prediction without convolutions. arXiv preprint arXiv:2102.12122, 2021.
[43] Bichen Wu, Alvin Wan, Xiangyu Yue, Peter Jin, Sicheng Zhao, Noah Golmant, Amir Gholaminejad, Joseph Gonzalez, and Kurt Keutzer. Shift: A zero FLOP, zero parameter alternative to spatial convolutions. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 9127–9135, 2018.
[44] Haiping Wu, Bin Xiao, Noel Codella, Mengchen Liu, Xiyang Dai, Lu Yuan, and Lei Zhang. CvT: Introducing convolutions to vision transformers. arXiv preprint arXiv:2103.15808, 2021.
[45] Tan Yu, Xu Li, Yunfeng Cai, Mingming Sun, and Ping Li. Rethinking token-mixing MLP for MLP-based vision backbone. In Proceedings of the 32nd British Machine Vision Conference (BMVC), 2021.
[46] Li Yuan, Yunpeng Chen, Tao Wang, Weihao Yu, Yujun Shi, Zihang Jiang, Francis E. H. Tay, Jiashi Feng, and Shuicheng Yan. Tokens-to-token ViT: Training vision transformers from scratch on ImageNet. arXiv preprint arXiv:2101.11986, 2021.
[47] Pengchuan Zhang, Xiyang Dai, Jianwei Yang, Bin Xiao, Lu Yuan, Lei Zhang, and Jianfeng Gao. Multi-scale vision longformer: A new vision transformer for high-resolution image encoding. arXiv preprint arXiv:2103.15358, 2021.
[48] Zizhao Zhang, Han Zhang, Long Zhao, Ting Chen, and Tomas Pfister. Aggregating nested transformers. arXiv preprint arXiv:2105.12723, 2021.