# Chapter 3: High-level Language-Specific Analysis and Transformation

Creating a dialect that closely represents the semantics of an input language enables analyses, transformations and optimizations in MLIR that require high-level language information and are generally performed on the language AST. For example, `clang` has a fairly [heavy mechanism](https://clang.llvm.org/doxygen/classclang_1_1TreeTransform.html) for performing template instantiation in C++.

We divide compiler transformations into two categories: local and global. In this chapter, we focus on how to leverage the Toy Dialect and its high-level semantics to perform local pattern-match transformations that would be difficult in LLVM. For this, we use MLIR's [Generic DAG Rewriter](/docs/PatternRewriter/).

There are two methods that can be used to implement pattern-match transformations:

1. Imperative, C++ pattern-match and rewrite
2. Declarative, rule-based pattern-match and rewrite using table-driven [Declarative Rewrite Rules](/docs/DeclarativeRewrites/) (DRR). Note that the use of DRR requires that the operations be defined using ODS, as described in [Chapter 2](/docs/Tutorials/Toy/Ch-2/).

## Optimize Transpose using C++ style pattern-match and rewrite

Let's start with a simple pattern and try to eliminate a sequence of two transposes that cancel out: `transpose(transpose(X)) -> X`. Here is the corresponding Toy example:

```toy
def transpose_transpose(x) {
  return transpose(transpose(x));
}
```

Which corresponds to the following IR:

```mlir
toy.func @transpose_transpose(%arg0: tensor<*xf64>) -> tensor<*xf64> {
  %0 = toy.transpose(%arg0 : tensor<*xf64>) to tensor<*xf64>
  %1 = toy.transpose(%0 : tensor<*xf64>) to tensor<*xf64>
  toy.return %1 : tensor<*xf64>
}
```
This is a good example of a transformation that is trivial to match on the Toy IR but that would be quite hard for LLVM to figure out. For example, today Clang can't optimize away the temporary array, and the computation with the naive transpose is expressed with these loops:

```c++
#define N 100
#define M 100

void sink(void *);
void double_transpose(int A[N][M]) {
  int B[M][N];
  for(int i = 0; i < N; ++i) {
    for(int j = 0; j < M; ++j) {
      B[j][i] = A[i][j];
    }
  }
  for(int i = 0; i < N; ++i) {
    for(int j = 0; j < M; ++j) {
      A[i][j] = B[j][i];
    }
  }
  sink(A);
}
```
For a simple C++ approach to rewrite, involving matching a tree-like pattern in the IR and replacing it with a different set of operations, we can plug into the MLIR `Canonicalizer` pass by implementing a `RewritePattern`:

```c++
/// Fold transpose(transpose(x)) -> x
struct SimplifyRedundantTranspose : public mlir::OpRewritePattern<TransposeOp> {
  /// We register this pattern to match every toy.transpose in the IR.
  /// The "benefit" is used by the framework to order the patterns and process
  /// them in order of profitability.
  SimplifyRedundantTranspose(mlir::MLIRContext *context)
      : OpRewritePattern<TransposeOp>(context, /*benefit=*/1) {}

  /// This method attempts to match a pattern and rewrite it. The rewriter
  /// argument is the orchestrator of the sequence of rewrites. It is expected
  /// to interact with it to perform any changes to the IR from here.
  llvm::LogicalResult
  matchAndRewrite(TransposeOp op,
                  mlir::PatternRewriter &rewriter) const override {
    // Look through the input of the current transpose.
    mlir::Value transposeInput = op.getOperand();
    TransposeOp transposeInputOp = transposeInput.getDefiningOp<TransposeOp>();

    // Input defined by another transpose? If not, no match.
    if (!transposeInputOp)
      return failure();

    // Otherwise, we have a redundant transpose. Use the rewriter.
    rewriter.replaceOp(op, {transposeInputOp.getOperand()});
    return success();
  }
};
```

The implementation of this rewriter is in `ToyCombine.cpp`. The [canonicalization pass](/docs/Canonicalization/) applies transformations defined by operations in a greedy, iterative manner. To ensure that the canonicalization pass applies our new transform, we set [hasCanonicalizer = 1](/docs/DefiningDialects/Operations/#hascanonicalizer) and register the pattern with the canonicalization framework.
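The `hasCanonicalizer` bit lives in the op's ODS definition. A minimal sketch of what this looks like for `TransposeOp` (the surrounding fields are elided here; the exact contents of the Toy `Ops.td` may differ):

```tablegen
def TransposeOp : Toy_Op<"transpose"> {
  // ... summary, arguments, results, builders ...

  // Declare a getCanonicalizationPatterns() hook on this op so the
  // canonicalization framework queries it for patterns to apply.
  let hasCanonicalizer = 1;
}
```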
```c++
// Register our patterns for rewrite by the Canonicalization framework.
void TransposeOp::getCanonicalizationPatterns(
    RewritePatternSet &results, MLIRContext *context) {
  results.add<SimplifyRedundantTranspose>(context);
}
```

We also need to update our main file, `toyc.cpp`, to add an optimization pipeline. In MLIR, the optimizations are run through a `PassManager` in a similar way to LLVM:

```c++
  mlir::PassManager pm(module->getName());
  pm.addNestedPass<mlir::toy::FuncOp>(mlir::createCanonicalizerPass());
```
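The configured pipeline then has to be run over the parsed module. A short sketch of how `toyc.cpp` might do this, assuming `module` is the `mlir::OwningOpRef<mlir::ModuleOp>` produced during parsing and that the surrounding driver returns a non-zero exit code on failure:

```c++
  // Apply the canonicalization pipeline to the module; bail out on failure.
  if (mlir::failed(pm.run(*module)))
    return 4;
```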
Finally, we can run `toyc-ch3 test/Examples/Toy/Ch3/transpose_transpose.toy -emit=mlir -opt` and observe our pattern in action:

```mlir
toy.func @transpose_transpose(%arg0: tensor<*xf64>) -> tensor<*xf64> {
  %0 = toy.transpose(%arg0 : tensor<*xf64>) to tensor<*xf64>
  toy.return %arg0 : tensor<*xf64>
}
```

As expected, we now directly return the function argument, bypassing any transpose operation. However, one of the transposes still hasn't been eliminated. That is not ideal! What happened is that our pattern replaced the last transpose with the function input and left behind the now-dead transpose input. The Canonicalizer knows to clean up dead operations; however, MLIR conservatively assumes that operations may have side effects. We can fix this by adding a new trait, `Pure`, to our `TransposeOp`:

```tablegen
def TransposeOp : Toy_Op<"transpose", [Pure]> {...}
```

Let's now retry `toyc-ch3 test/transpose_transpose.toy -emit=mlir -opt`:

```mlir
toy.func @transpose_transpose(%arg0: tensor<*xf64>) -> tensor<*xf64> {
  toy.return %arg0 : tensor<*xf64>
}
```

Perfect! No `transpose` operation is left - the code is optimal.

In the next section, we use DRR for pattern-match optimizations associated with the Reshape op.

## Optimize Reshapes using DRR

Declarative, rule-based pattern-match and rewrite (DRR) is an operation-DAG-based declarative rewriter that provides a table-based syntax for pattern-match and rewrite rules:

```tablegen
class Pattern<
    dag sourcePattern, list<dag> resultPatterns,
    list<dag> additionalConstraints = [],
    dag benefitsAdded = (addBenefit 0)>;
```

A redundant reshape optimization similar to SimplifyRedundantTranspose can be expressed more simply using DRR as follows:
```tablegen
// Reshape(Reshape(x)) = Reshape(x)
def ReshapeReshapeOptPattern : Pat<(ReshapeOp(ReshapeOp $arg)),
                                   (ReshapeOp $arg)>;
```

The automatically generated C++ code corresponding to each of the DRR patterns can be found under `path/to/BUILD/tools/mlir/examples/toy/Ch3/ToyCombine.inc`.
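Those generated patterns are ordinary `RewritePattern` classes, so they can be pulled into `ToyCombine.cpp` alongside the hand-written one. A plausible sketch, assuming the build exposes the generated file under this include name:

```c++
namespace {
/// Pull in the patterns generated from the Declarative Rewrite Rules.
#include "ToyCombine.inc"
} // namespace
```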
DRR also provides a method for adding argument constraints when the transformation is conditional on some properties of the arguments and results. An example is a transformation that eliminates reshapes when they are redundant, i.e. when the input and output shapes are identical:

```tablegen
def TypesAreIdentical : Constraint<CPred<"$0.getType() == $1.getType()">>;
def RedundantReshapeOptPattern : Pat<
  (ReshapeOp:$res $arg), (replaceWithValue $arg),
  [(TypesAreIdentical $res, $arg)]>;
```

Some optimizations may require additional transformations on instruction arguments. This is achieved using NativeCodeCall, which allows for more complex transformations either by calling into a C++ helper function or by using inline C++. An example of such an optimization is FoldConstantReshape, where we optimize Reshape of a constant value by reshaping the constant in place and eliminating the reshape operation:

```tablegen
def ReshapeConstant : NativeCodeCall<"$0.reshape(($1.getType()).cast<ShapedType>())">;
def FoldConstantReshapeOptPattern : Pat<
  (ReshapeOp:$res (ConstantOp $arg)),
  (ConstantOp (ReshapeConstant $arg, $res))>;
```
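As with the transpose pattern, the DRR-generated patterns still need to be registered so the canonicalizer picks them up. A sketch of what that registration in `ToyCombine.cpp` might look like, assuming `ReshapeOp` also sets `hasCanonicalizer = 1` in its ODS definition:

```c++
/// Register our patterns for rewrite by the Canonicalization framework.
void ReshapeOp::getCanonicalizationPatterns(RewritePatternSet &results,
                                            MLIRContext *context) {
  // All three DRR-generated patterns apply to toy.reshape.
  results.add<ReshapeReshapeOptPattern, RedundantReshapeOptPattern,
              FoldConstantReshapeOptPattern>(context);
}
```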
We demonstrate these reshape optimizations using the following `trivial_reshape.toy` program:

```toy
def main() {
  var a<2,1> = [1, 2];
  var b<2,1> = a;
  var c<2,1> = b;
  print(c);
}
```

```mlir
module {
  toy.func @main() {
    %0 = toy.constant dense<[1.000000e+00, 2.000000e+00]> : tensor<2xf64>
    %1 = toy.reshape(%0 : tensor<2xf64>) to tensor<2x1xf64>
    %2 = toy.reshape(%1 : tensor<2x1xf64>) to tensor<2x1xf64>
    %3 = toy.reshape(%2 : tensor<2x1xf64>) to tensor<2x1xf64>
    toy.print %3 : tensor<2x1xf64>
    toy.return
  }
}
```

We can run `toyc-ch3 test/Examples/Toy/Ch3/trivial_reshape.toy -emit=mlir -opt` and observe our patterns in action:

```mlir
module {
  toy.func @main() {
    %0 = toy.constant dense<[[1.000000e+00], [2.000000e+00]]> : tensor<2x1xf64>
    toy.print %0 : tensor<2x1xf64>
    toy.return
  }
}
```

As expected, no reshape operations remain after canonicalization.
Further details on the declarative rewrite method can be found at [Table-driven Declarative Rewrite Rule (DRR)](/docs/DeclarativeRewrites/).

In this chapter, we saw how to use certain core transformations through always-available hooks. In the [next chapter](/docs/Tutorials/Toy/Ch-4/), we will see how to use generic solutions that scale better through Interfaces.
