# Chapter 2: Emitting Basic MLIR

Now that we're familiar with our language and the AST, let's see how MLIR can help to compile Toy.

## Introduction: Multi-Level Intermediate Representation

Other compilers, like LLVM (see the [Kaleidoscope tutorial](https://llvm.org/docs/tutorial/MyFirstLanguageFrontend/index.html)), offer a fixed set of predefined types and (usually *low-level* / RISC-like) instructions. It is up to the frontend for a given language to perform any language-specific type-checking, analysis, or transformation before emitting LLVM IR. For example, Clang will use its AST to perform not only static analysis but also transformations, such as C++ template instantiation through AST cloning and rewriting. Finally, languages with constructs at a higher level than C/C++ may require non-trivial lowering from their AST to generate LLVM IR.

As a consequence, multiple frontends end up reimplementing significant pieces of infrastructure to support these analyses and transformations. MLIR addresses this issue by being designed for extensibility. As such, there are few pre-defined instructions (*operations* in MLIR terminology) or types.

## Interfacing with MLIR

[Language Reference](/docs/LangRef/)

MLIR is designed to be a completely extensible infrastructure; there is no closed set of attributes (think: constant metadata), operations, or types. MLIR supports this extensibility with the concept of [Dialects](/docs/LangRef/#dialects). Dialects provide a grouping mechanism for abstraction under a unique `namespace`.

In MLIR, [`Operations`](/docs/LangRef/#operations) are the core unit of abstraction and computation, similar in many ways to LLVM instructions. Operations can have application-specific semantics and can be used to represent all of the core IR structures in LLVM: instructions, globals (like functions), modules, etc.

Here is the MLIR assembly for the Toy `transpose` operation:

```mlir
%t_tensor = "toy.transpose"(%tensor) {inplace = true} : (tensor<2x3xf64>) -> tensor<3x2xf64> loc("example/file/path":12:1)
```

Let's break down the anatomy of this MLIR operation:

- `%t_tensor`

  - The name given to the result defined by this operation (which includes [a prefixed sigil to avoid collisions](/docs/LangRef/#identifiers-and-keywords)). An operation may define zero or more results (in the context of Toy, we will limit ourselves to single-result operations), which are SSA values. The name is used during parsing but is not persistent (e.g., it is not tracked in the in-memory representation of the SSA value).

- `"toy.transpose"`

  - The name of the operation. It is expected to be a unique string, with the namespace of the dialect prefixed before the "`.`". This can be read as the `transpose` operation in the `toy` dialect.

- `(%tensor)`

  - A list of zero or more input operands (or arguments), which are SSA values defined by other operations or referring to block arguments.

- `{ inplace = true }`

  - A dictionary of zero or more attributes, which are special operands that are always constant. Here we define a boolean attribute named `inplace` that has a constant value of true.

- `(tensor<2x3xf64>) -> tensor<3x2xf64>`

  - This refers to the type of the operation in a functional form, spelling the types of the arguments in parentheses and the type of the return values afterward.

- `loc("example/file/path":12:1)`

  - This is the location in the source code from which this operation originated.

Shown here is the general form of an operation. As described above, the set of operations in MLIR is extensible. Operations are modeled using a small set of concepts, enabling operations to be reasoned about and manipulated generically.
These concepts are:

- A name for the operation.
- A list of SSA operand values.
- A list of [attributes](/docs/LangRef/#attributes).
- A list of [types](/docs/LangRef/#type-system) for result values.
- A [source location](/docs/Diagnostics/#source-locations) for debugging purposes.
- A list of successor [blocks](/docs/LangRef/#blocks) (for branches, mostly).
- A list of [regions](/docs/LangRef/#regions) (for structural operations like functions).

In MLIR, every operation has a mandatory source location associated with it. Contrary to LLVM, where debug info locations are metadata and can be dropped, in MLIR the location is a core requirement, and APIs depend on and manipulate it. Dropping a location is thus an explicit choice that cannot happen by mistake.

To provide an illustration: if a transformation replaces an operation with another, that new operation must still have a location attached. This makes it possible to track where that operation came from.

It's worth noting that the `mlir-opt` tool - a tool for testing compiler passes - does not include locations in the output by default. The `-mlir-print-debuginfo` flag specifies to include locations. (Run `mlir-opt --help` for more options.)

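To make the mandatory-location point concrete, here is a minimal C++ sketch of how a location is constructed and handed to the builder API. This is illustrative only; the function name and the exact `FileLineColLoc::get` overload shown are assumptions, not taken from the tutorial sources:

```c++
#include "mlir/IR/Builders.h"
#include "mlir/IR/Location.h"
#include "mlir/IR/MLIRContext.h"

// Sketch: locations are ordinary IR objects that every op-creation API takes.
void locationSketch() {
  mlir::MLIRContext context;
  mlir::OpBuilder builder(&context);

  // Mirrors the `loc("example/file/path":12:1)` shown earlier.
  mlir::Location loc =
      mlir::FileLineColLoc::get(&context, "example/file/path", 12, 1);

  // Every `builder.create<SomeOp>(loc, ...)` call requires a location as its
  // first argument; when no meaningful one exists, it must be spelled out:
  mlir::Location unknown = builder.getUnknownLoc();
  (void)loc;
  (void)unknown;
}
```
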
### Opaque API

MLIR is designed to allow all IR elements, such as attributes, operations, and types, to be customized. At the same time, IR elements can always be reduced to the above fundamental concepts. This allows MLIR to parse, represent, and [round-trip](/getting_started/Glossary/#round-trip) IR for *any* operation. For example, we could place our Toy operation from above into an `.mlir` file and round-trip through *mlir-opt* without registering any `toy`-related dialect:

```mlir
func.func @toy_func(%tensor: tensor<2x3xf64>) -> tensor<3x2xf64> {
  %t_tensor = "toy.transpose"(%tensor) { inplace = true } : (tensor<2x3xf64>) -> tensor<3x2xf64>
  return %t_tensor : tensor<3x2xf64>
}
```

In the cases of unregistered attributes, operations, and types, MLIR will enforce some structural constraints (e.g. dominance, etc.), but otherwise they are completely opaque. For instance, MLIR has little information about whether an unregistered operation can operate on particular data types, how many operands it can take, or how many results it produces. This flexibility can be useful for bootstrapping purposes, but it is generally advised against in mature systems. Unregistered operations must be treated conservatively by transformations and analyses, and they are much harder to construct and manipulate.

This handling can be observed by crafting what should be invalid IR for Toy and seeing it round-trip without tripping the verifier:

```mlir
func.func @main() {
  %0 = "toy.print"() : () -> tensor<2x3xf64>
}
```

There are multiple problems here: the `toy.print` operation is not a terminator, it should take an operand, and it shouldn't return any values. In the next section, we will register our dialect and operations with MLIR, plug into the verifier, and add nicer APIs to manipulate our operations.

## Defining a Toy Dialect

To effectively interface with MLIR, we will define a new Toy dialect. This dialect will model the structure of the Toy language, as well as provide an easy avenue for high-level analysis and transformation.

```c++
/// This is the definition of the Toy dialect. A dialect inherits from
/// mlir::Dialect and registers custom attributes, operations, and types. It can
/// also override virtual methods to change some general behavior, which will be
/// demonstrated in later chapters of the tutorial.
class ToyDialect : public mlir::Dialect {
public:
  explicit ToyDialect(mlir::MLIRContext *ctx);

  /// Provide a utility accessor to the dialect namespace.
  static llvm::StringRef getDialectNamespace() { return "toy"; }

  /// An initializer called from the constructor of ToyDialect that is used to
  /// register attributes, operations, types, and more within the Toy dialect.
  void initialize();
};
```

This is the C++ definition of a dialect, but MLIR also supports defining dialects declaratively via [tablegen](https://llvm.org/docs/TableGen/ProgRef.html). Using the declarative specification is much cleaner as it removes the need for a large portion of the boilerplate when defining a new dialect. It also enables easy generation of dialect documentation, which can be described directly alongside the dialect. In this declarative format, the toy dialect would be specified as:

```tablegen
// Provide a definition of the 'toy' dialect in the ODS framework so that we
// can define our operations.
def Toy_Dialect : Dialect {
  // The namespace of our dialect, this corresponds 1-1 with the string we
  // provided in `ToyDialect::getDialectNamespace`.
  let name = "toy";

  // A short one-line summary of our dialect.
  let summary = "A high-level dialect for analyzing and optimizing the "
                "Toy language";

  // A much longer description of our dialect.
  let description = [{
    The Toy language is a tensor-based language that allows you to define
    functions, perform some math computation, and print results. This dialect
    provides a representation of the language that is amenable to analysis and
    optimization.
  }];

  // The C++ namespace that the dialect class definition resides in.
  let cppNamespace = "toy";
}
```

To see what this generates, we can run the `mlir-tblgen` command with the `gen-dialect-decls` action like so:

```shell
${build_root}/bin/mlir-tblgen -gen-dialect-decls ${mlir_src_root}/examples/toy/Ch2/include/toy/Ops.td -I ${mlir_src_root}/include/
```

After the dialect has been defined, it can now be loaded into an MLIRContext:

```c++
  context.loadDialect<ToyDialect>();
```

By default, an `MLIRContext` only loads the [Builtin Dialect](/docs/Dialects/Builtin/), which provides a few core IR components, meaning that other dialects, such as our `Toy` dialect, must be explicitly loaded.

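As a rough sketch of how this fits into a standalone tool (the header path and the surrounding setup are assumptions for illustration, not taken verbatim from the tutorial sources):

```c++
#include <cassert>

#include "mlir/IR/MLIRContext.h"
#include "toy/Dialect.h" // Assumed header declaring ToyDialect.

int main() {
  mlir::MLIRContext context;

  // Only the builtin dialect is available at this point; loading Toy makes its
  // operations, attributes, and types usable from this context.
  context.loadDialect<ToyDialect>();
  assert(context.getLoadedDialect<ToyDialect>() && "Toy dialect not loaded");
  return 0;
}
```
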
## Defining Toy Operations

Now that we have a `Toy` dialect, we can start defining the operations. This will allow for providing semantic information that the rest of the system can hook into. As an example, let's walk through the creation of a `toy.constant` operation. This operation will represent a constant value in the Toy language.

```mlir
%4 = "toy.constant"() {value = dense<1.0> : tensor<2x3xf64>} : () -> tensor<2x3xf64>
```

This operation takes zero operands, a [dense elements](/docs/Dialects/Builtin/#denseintorfpelementsattr) attribute named `value` to represent the constant value, and returns a single result of [RankedTensorType](/docs/Dialects/Builtin/#rankedtensortype). An operation class inherits from the [CRTP](https://en.wikipedia.org/wiki/Curiously_recurring_template_pattern) `mlir::Op` class which also takes some optional [*traits*](/docs/Traits/) to customize its behavior. `Traits` are a mechanism with which we can inject additional behavior into an Operation, such as additional accessors, verification, and more. Let's look below at a possible definition for the constant operation that we have described above:

```c++
class ConstantOp : public mlir::Op<
                     /// `mlir::Op` is a CRTP class, meaning that we provide the
                     /// derived class as a template parameter.
                     ConstantOp,
                     /// The ConstantOp takes zero input operands.
                     mlir::OpTrait::ZeroOperands,
                     /// The ConstantOp returns a single result.
                     mlir::OpTrait::OneResult,
                     /// We also provide a utility `getType` accessor that
                     /// returns the TensorType of the single result.
                     mlir::OpTrait::OneTypedResult<TensorType>::Impl> {

 public:
  /// Inherit the constructors from the base Op class.
  using Op::Op;

  /// Provide the unique name for this operation. MLIR will use this to register
  /// the operation and uniquely identify it throughout the system. The name
  /// provided here must be prefixed by the parent dialect namespace followed
  /// by a `.`.
  static llvm::StringRef getOperationName() { return "toy.constant"; }

  /// Return the value of the constant by fetching it from the attribute.
  mlir::DenseElementsAttr getValue();

  /// Operations may provide additional verification beyond what the attached
  /// traits provide. Here we will ensure that the specific invariants of the
  /// constant operation are upheld, for example the result type must be
  /// of TensorType and matches the type of the constant `value`.
  LogicalResult verifyInvariants();

  /// Provide an interface to build this operation from a set of input values.
  /// This interface is used by the `builder` classes to allow for easily
  /// generating instances of this operation:
  ///   mlir::OpBuilder::create<ConstantOp>(...)
  /// This method populates the given `state` that MLIR uses to create
  /// operations. This state is a collection of all of the discrete elements
  /// that an operation may contain.
  /// Build a constant with the given return type and `value` attribute.
  static void build(mlir::OpBuilder &builder, mlir::OperationState &state,
                    mlir::Type result, mlir::DenseElementsAttr value);
  /// Build a constant and reuse the type from the given 'value'.
  static void build(mlir::OpBuilder &builder, mlir::OperationState &state,
                    mlir::DenseElementsAttr value);
  /// Build a constant by broadcasting the given 'value'.
  static void build(mlir::OpBuilder &builder, mlir::OperationState &state,
                    double value);
};
```

and we can register this operation in the `ToyDialect` initializer:

```c++
void ToyDialect::initialize() {
  addOperations<ConstantOp>();
}
```

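With the dialect loaded and the operation registered, the declared `build` overloads can be exercised through an `OpBuilder`. The following is a minimal sketch of such usage; the helper function, the module setup, and the choice of a splat value are assumptions for illustration rather than tutorial code:

```c++
#include "mlir/IR/Builders.h"
#include "mlir/IR/BuiltinAttributes.h"
#include "mlir/IR/BuiltinOps.h"
#include "mlir/IR/BuiltinTypes.h"
#include "mlir/IR/MLIRContext.h"

// Build a single toy.constant into a fresh module and print the result.
void buildConstantSketch(mlir::MLIRContext &context) {
  mlir::OpBuilder builder(&context);

  // Create an empty module and insert new operations at the end of its body.
  mlir::ModuleOp module = mlir::ModuleOp::create(builder.getUnknownLoc());
  builder.setInsertionPointToEnd(module.getBody());

  // Use the (Type, DenseElementsAttr) overload declared on ConstantOp above
  // to build a 2x3 constant splatted with 1.0.
  auto type = mlir::RankedTensorType::get({2, 3}, builder.getF64Type());
  auto value = mlir::DenseElementsAttr::get(type, 1.0);
  builder.create<ConstantOp>(builder.getUnknownLoc(), type, value);

  module->dump();
}
```

Nothing Toy-specific beyond the class above is required here; the builder machinery is generic MLIR infrastructure.
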
### Op vs Operation: Using MLIR Operations

Now that we have defined an operation, we will want to access and transform it. In MLIR, there are two main classes related to operations: `Operation` and `Op`. The `Operation` class is used to generically model all operations. It is 'opaque', in the sense that it does not describe the properties of particular operations or types of operations. Instead, the `Operation` class provides a general API into an operation instance. On the other hand, each specific type of operation is represented by an `Op` derived class. For instance `ConstantOp` represents an operation with zero inputs and one output, which is always set to the same value. `Op` derived classes act as smart pointer wrappers around an `Operation*`, providing operation-specific accessor methods and type-safe properties of the operation. This means that when we define our Toy operations, we are simply defining a clean, semantically useful interface for building and interfacing with the `Operation` class. This is why our `ConstantOp` defines no class fields; all of the data for this operation is stored in the referenced `Operation`. A side effect of this design is that we always pass around `Op` derived classes "by-value", instead of by reference or pointer (*passing by value* is a common idiom in MLIR and applies similarly to attributes, types, etc). Given a generic `Operation*` instance, we can always get a specific `Op` instance using LLVM's casting infrastructure:

```c++
void processConstantOp(mlir::Operation *operation) {
  ConstantOp op = llvm::dyn_cast<ConstantOp>(operation);

  // This operation is not an instance of `ConstantOp`.
  if (!op)
    return;

  // Get the internal operation instance wrapped by the smart pointer.
  mlir::Operation *internalOperation = op.getOperation();
  assert(internalOperation == operation &&
         "these operation instances are the same");
}
```

### Using the Operation Definition Specification (ODS) Framework

In addition to specializing the `mlir::Op` C++ template, MLIR also supports defining operations in a declarative manner. This is achieved via the [Operation Definition Specification](/docs/DefiningDialects/Operations/) framework. Facts regarding an operation are specified concisely into a TableGen record, which will be expanded into an equivalent `mlir::Op` C++ template specialization at compile time. Using the ODS framework is the preferred way of defining operations in MLIR given the simplicity, conciseness, and general stability in the face of C++ API changes.

Let's see how to define the ODS equivalent of our ConstantOp:

Operations in ODS are defined by inheriting from the `Op` class.
To simplify our operation definitions, we will define a base class for operations in the Toy dialect.

```tablegen
// Base class for toy dialect operations. This operation inherits from the base
// `Op` class in OpBase.td, and provides:
//   * The parent dialect of the operation.
//   * The mnemonic for the operation, or the name without the dialect prefix.
//   * A list of traits for the operation.
class Toy_Op<string mnemonic, list<Trait> traits = []> :
    Op<Toy_Dialect, mnemonic, traits>;
```

With all of the preliminary pieces defined, we can begin to define the constant operation.

We define a toy operation by inheriting from our base `Toy_Op` class above. Here we provide the mnemonic and a list of traits for the operation. The [mnemonic](/docs/DefiningDialects/Operations/#operation-name) here matches the one given in `ConstantOp::getOperationName` without the dialect prefix, `toy.`. Missing here from our C++ definition are the `ZeroOperands` and `OneResult` traits; these will be automatically inferred based upon the `arguments` and `results` fields we define later.

```tablegen
def ConstantOp : Toy_Op<"constant"> {
}
```

At this point you might want to know what the C++ code generated by TableGen looks like. Simply run the `mlir-tblgen` command with the `gen-op-decls` or the `gen-op-defs` action like so:

```shell
${build_root}/bin/mlir-tblgen -gen-op-defs ${mlir_src_root}/examples/toy/Ch2/include/toy/Ops.td -I ${mlir_src_root}/include/
```

Depending on the selected action, this will print either the `ConstantOp` class declaration or its implementation.
Comparing this output to the hand-crafted implementation is incredibly useful when getting started with TableGen.

#### Defining Arguments and Results

With the shell of the operation defined, we can now provide the [inputs](/docs/DefiningDialects/Operations/#operation-arguments) and [outputs](/docs/DefiningDialects/Operations/#operation-results) to our operation. The inputs, or arguments, to an operation may be attributes or types for SSA operand values. The results correspond to a set of types for the values produced by the operation:

```tablegen
def ConstantOp : Toy_Op<"constant"> {
  // The constant operation takes an attribute as the only input.
  // `F64ElementsAttr` corresponds to a 64-bit floating-point ElementsAttr.
  let arguments = (ins F64ElementsAttr:$value);

  // The constant operation returns a single value of TensorType.
  // F64Tensor corresponds to a 64-bit floating-point TensorType.
  let results = (outs F64Tensor);
}
```

By providing a name to the arguments or results, e.g. `$value`, ODS will automatically generate a matching accessor: `DenseElementsAttr ConstantOp::value()`.

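As a quick illustration of the generated accessor, it can be used like any ordinary method. This is a sketch, not tutorial code; note also that newer versions of ODS generate prefixed accessors such as `getValue()` instead:

```c++
#include "llvm/Support/raw_ostream.h"

// Print the elements held by a toy.constant through its generated accessor.
void printConstantElements(ConstantOp op) {
  mlir::DenseElementsAttr attr = op.value();
  for (double element : attr.getValues<double>())
    llvm::outs() << element << " ";
  llvm::outs() << "\n";
}
```
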
#### Adding Documentation

The next step after defining the operation is to document it. Operations may provide [`summary` and `description`](/docs/DefiningDialects/Operations/#operation-documentation) fields to describe the semantics of the operation. This information is useful for users of the dialect and can even be used to auto-generate Markdown documents.

```tablegen
def ConstantOp : Toy_Op<"constant"> {
  // Provide a summary and description for this operation. This can be used to
  // auto-generate documentation of the operations within our dialect.
  let summary = "constant operation";
  let description = [{
    Constant operation turns a literal into an SSA value. The data is attached
    to the operation as an attribute. For example:

      %0 = "toy.constant"()
         { value = dense<[[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]]> : tensor<2x3xf64> }
        : () -> tensor<2x3xf64>
  }];

  // The constant operation takes an attribute as the only input.
  // `F64ElementsAttr` corresponds to a 64-bit floating-point ElementsAttr.
  let arguments = (ins F64ElementsAttr:$value);

  // The constant operation returns a single value of TensorType.
  // F64Tensor corresponds to a 64-bit floating-point TensorType.
  let results = (outs F64Tensor);
}
```

#### Verifying Operation Semantics

At this point we've already covered a majority of the original C++ operation definition. The next piece to define is the verifier. Luckily, much like the named accessor, the ODS framework will automatically generate a lot of the necessary verification logic based upon the constraints we have given. This means that we don't need to verify the structure of the return type, or even the input attribute `value`. In many cases, additional verification is not even necessary for ODS operations.
To add additional verification logic, an operation can set the [`hasVerifier`](/docs/DefiningDialects/Operations/#custom-verifier-code) field. Doing so generates a `verify()` method declaration on the C++ operation class; the body of that method is then written as ordinary C++ in the source file and is run as part of `ConstantOp::verify`. It can assume that all of the other structural invariants of the operation have already been verified:

```tablegen
def ConstantOp : Toy_Op<"constant"> {
  // Provide a summary and description for this operation. This can be used to
  // auto-generate documentation of the operations within our dialect.
  let summary = "constant operation";
  let description = [{
    Constant operation turns a literal into an SSA value. The data is attached
    to the operation as an attribute. For example:

      %0 = "toy.constant"()
         { value = dense<[[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]]> : tensor<2x3xf64> }
        : () -> tensor<2x3xf64>
  }];

  // The constant operation takes an attribute as the only input.
  // `F64ElementsAttr` corresponds to a 64-bit floating-point ElementsAttr.
  let arguments = (ins F64ElementsAttr:$value);

  // The constant operation returns a single value of TensorType.
  // F64Tensor corresponds to a 64-bit floating-point TensorType.
  let results = (outs F64Tensor);

  // Add additional verification logic to the constant operation. Setting this bit
  // to `1` will generate a `::llvm::LogicalResult verify()` declaration on the
  // operation class that is called after ODS constructs have been verified, for
  // example the types of arguments and results. We implement additional verification
  // in the definition of this `verify` method in the C++ source file.
  let hasVerifier = 1;
}
```

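For reference, a simplified sketch of what that C++ `verify` method could look like is shown below. The exact checks and accessor names in the Ch2 sources may differ; this version only compares the result shape against the shape of the attached attribute:

```c++
/// Verify an invariant that ODS cannot express for us: the shape of the
/// result must agree with the shape of the attached `value` attribute.
llvm::LogicalResult ConstantOp::verify() {
  // If the return type is unranked, there is nothing further to check here.
  auto resultType =
      llvm::dyn_cast<mlir::RankedTensorType>(getResult().getType());
  if (!resultType)
    return mlir::success();

  auto attrType = llvm::cast<mlir::RankedTensorType>(value().getType());
  if (attrType.getShape() != resultType.getShape())
    return emitOpError("return type shape must match the shape of the "
                       "attached value attribute");
  return mlir::success();
}
```
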
#### Attaching `build` Methods

The final components missing from our original C++ example are the `build` methods. ODS can generate some simple build methods automatically, and in this case it will generate our first build method for us. For the rest, we define the [`builders`](/docs/DefiningDialects/Operations/#custom-builder-methods) field. This field takes a list of `OpBuilder` objects that take a string corresponding to a list of C++ parameters, as well as an optional code block that can be used to specify the implementation inline.

```tablegen
def ConstantOp : Toy_Op<"constant"> {
  ...

  // Add custom build methods for the constant operation. These methods populate
  // the `state` that MLIR uses to create operations, i.e. these are used when
  // using `builder.create<ConstantOp>(...)`.
  let builders = [
    // Build a constant with a given constant tensor value.
    OpBuilder<(ins "DenseElementsAttr":$value), [{
      // Call into an autogenerated `build` method.
      build(builder, result, value.getType(), value);
    }]>,

    // Build a constant with a given constant floating-point value. This builder
    // creates a declaration for `ConstantOp::build` with the given parameters.
    OpBuilder<(ins "double":$value)>
  ];
}
```

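The second builder only declares `ConstantOp::build`; its definition lives in the C++ source file. A sketch of what that out-of-line definition could look like, mirroring the "broadcast the given value" comment from the hand-written class (the actual Ch2 sources may differ in detail):

```c++
/// Build a constant from a scalar by splatting it into a zero-dimensional
/// f64 tensor, then delegating to the autogenerated (Type, Attr) builder.
void ConstantOp::build(mlir::OpBuilder &builder, mlir::OperationState &state,
                       double value) {
  auto dataType = mlir::RankedTensorType::get({}, builder.getF64Type());
  auto dataAttribute = mlir::DenseElementsAttr::get(dataType, value);
  ConstantOp::build(builder, state, dataType, dataAttribute);
}
```
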
</span></span></span><span class=line><span class=cl><span class=s> build(builder, result, value.getType(), value); </span></span></span><span class=line><span class=cl><span class=s> }]</span><span class=p>&gt;,</span> </span></span><span class=line><span class=cl> </span></span><span class=line><span class=cl> <span class=c>// Build a constant with a given constant floating-point value. This builder </span></span></span><span class=line><span class=cl><span class=c></span> <span class=c>// creates a declaration for `ConstantOp::build` with the given parameters. </span></span></span><span class=line><span class=cl><span class=c></span> <span class=nv>OpBuilder</span><span class=p>&lt;(</span><span class=nv>ins</span> <span class=s>&#34;double&#34;</span><span class=p>:</span><span class=nv>$value</span><span class=p>)&gt;</span> </span></span><span class=line><span class=cl> <span class=p>];</span> </span></span><span class=line><span class=cl><span class=p>}</span> </span></span></code></pre></div><h4 id=specifying-a-custom-assembly-format>Specifying a Custom Assembly Format&nbsp;<a class=headline-hash href=#specifying-a-custom-assembly-format>¶</a></h4><p>At this point we can generate our &ldquo;Toy IR&rdquo;. For example, the following:</p><pre tabindex=0><code class=language-toy data-lang=toy># User defined generic function that operates on unknown shaped arguments. def multiply_transpose(a, b) { return transpose(a) * transpose(b); } def main() { var a&lt;2, 3&gt; = [[1, 2, 3], [4, 5, 6]]; var b&lt;2, 3&gt; = [1, 2, 3, 4, 5, 6]; var c = multiply_transpose(a, b); var d = multiply_transpose(b, a); print(d); } </code></pre><p>Results in the following IR:</p><div class=highlight><pre tabindex=0 class=chroma><code class=language-mlir data-lang=mlir><span class=line><span class=cl>module <span class=p>{</span> </span></span><span class=line><span class=cl> <span class=s>&#34;toy.func&#34;</span><span class=p>()</span> <span class=p>({</span> </span></span><span class=line><span class=cl> <span class=nl>^bb0</span><span class=p>(</span><span class=nv>%arg0</span><span class=p>:</span> <span class=kt>tensor</span><span class=p>&lt;*</span>xf64<span class=p>&gt;</span> <span class=kt>loc</span><span class=p>(</span><span class=s>&#34;test/Examples/Toy/Ch2/codegen.toy&#34;</span><span class=p>:</span><span class=m>4</span><span class=p>:</span><span class=m>1</span><span class=p>),</span> <span class=nv>%arg1</span><span class=p>:</span> <span class=kt>tensor</span><span class=p>&lt;*</span>xf64<span class=p>&gt;</span> <span class=kt>loc</span><span class=p>(</span><span class=s>&#34;test/Examples/Toy/Ch2/codegen.toy&#34;</span><span class=p>:</span><span class=m>4</span><span class=p>:</span><span class=m>1</span><span class=p>)):</span> </span></span><span class=line><span class=cl> <span class=nv>%0</span> <span class=p>=</span> <span class=s>&#34;toy.transpose&#34;</span><span class=p>(</span><span class=nv>%arg0</span><span class=p>)</span> <span class=p>:</span> <span class=p>(</span><span class=kt>tensor</span><span class=p>&lt;*</span>xf64<span class=p>&gt;)</span> <span class=p>-&gt;</span> <span class=kt>tensor</span><span class=p>&lt;*</span>xf64<span class=p>&gt;</span> <span class=kt>loc</span><span class=p>(</span><span class=s>&#34;test/Examples/Toy/Ch2/codegen.toy&#34;</span><span class=p>:</span><span class=m>5</span><span class=p>:</span><span class=m>10</span><span class=p>)</span> </span></span><span class=line><span class=cl> <span class=nv>%1</span> <span class=p>=</span> 
#### Specifying a Custom Assembly Format

At this point we can generate our "Toy IR". For example, the following:

```toy
# User defined generic function that operates on unknown shaped arguments.
def multiply_transpose(a, b) {
  return transpose(a) * transpose(b);
}

def main() {
  var a<2, 3> = [[1, 2, 3], [4, 5, 6]];
  var b<2, 3> = [1, 2, 3, 4, 5, 6];
  var c = multiply_transpose(a, b);
  var d = multiply_transpose(b, a);
  print(d);
}
```

Results in the following IR:

```mlir
module {
  "toy.func"() ({
  ^bb0(%arg0: tensor<*xf64> loc("test/Examples/Toy/Ch2/codegen.toy":4:1), %arg1: tensor<*xf64> loc("test/Examples/Toy/Ch2/codegen.toy":4:1)):
    %0 = "toy.transpose"(%arg0) : (tensor<*xf64>) -> tensor<*xf64> loc("test/Examples/Toy/Ch2/codegen.toy":5:10)
    %1 = "toy.transpose"(%arg1) : (tensor<*xf64>) -> tensor<*xf64> loc("test/Examples/Toy/Ch2/codegen.toy":5:25)
    %2 = "toy.mul"(%0, %1) : (tensor<*xf64>, tensor<*xf64>) -> tensor<*xf64> loc("test/Examples/Toy/Ch2/codegen.toy":5:25)
    "toy.return"(%2) : (tensor<*xf64>) -> () loc("test/Examples/Toy/Ch2/codegen.toy":5:3)
  }) {sym_name = "multiply_transpose", type = (tensor<*xf64>, tensor<*xf64>) -> tensor<*xf64>} : () -> () loc("test/Examples/Toy/Ch2/codegen.toy":4:1)
  "toy.func"() ({
    %0 = "toy.constant"() {value = dense<[[1.000000e+00, 2.000000e+00, 3.000000e+00], [4.000000e+00, 5.000000e+00, 6.000000e+00]]> : tensor<2x3xf64>} : () -> tensor<2x3xf64> loc("test/Examples/Toy/Ch2/codegen.toy":9:17)
    %1 = "toy.reshape"(%0) : (tensor<2x3xf64>) -> tensor<2x3xf64> loc("test/Examples/Toy/Ch2/codegen.toy":9:3)
    %2 = "toy.constant"() {value = dense<[1.000000e+00, 2.000000e+00, 3.000000e+00, 4.000000e+00, 5.000000e+00, 6.000000e+00]> : tensor<6xf64>} : () -> tensor<6xf64> loc("test/Examples/Toy/Ch2/codegen.toy":10:17)
    %3 = "toy.reshape"(%2) : (tensor<6xf64>) -> tensor<2x3xf64> loc("test/Examples/Toy/Ch2/codegen.toy":10:3)
    %4 = "toy.generic_call"(%1, %3) {callee = @multiply_transpose} : (tensor<2x3xf64>, tensor<2x3xf64>) -> tensor<*xf64> loc("test/Examples/Toy/Ch2/codegen.toy":11:11)
    %5 = "toy.generic_call"(%3, %1) {callee = @multiply_transpose} : (tensor<2x3xf64>, tensor<2x3xf64>) -> tensor<*xf64> loc("test/Examples/Toy/Ch2/codegen.toy":12:11)
    "toy.print"(%5) : (tensor<*xf64>) -> () loc("test/Examples/Toy/Ch2/codegen.toy":13:3)
    "toy.return"() : () -> () loc("test/Examples/Toy/Ch2/codegen.toy":8:1)
  }) {sym_name = "main", type = () -> ()} : () -> () loc("test/Examples/Toy/Ch2/codegen.toy":8:1)
} loc(unknown)
```
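Each operation in this dump was created by the MLIRGen visitor through an `mlir::OpBuilder`. As a rough, hypothetical illustration of that connection (the helper name and include paths are assumptions, not tutorial code), the `toy.transpose` lines could be produced by something like:

```c++
// Hypothetical fragment, not taken from MLIRGen.cpp. Assumes the Toy dialect
// header layout used by the tutorial examples.
#include "toy/Dialect.h"          // declares mlir::toy::TransposeOp
#include "mlir/IR/Builders.h"
#include "mlir/IR/BuiltinTypes.h"

static mlir::Value emitTranspose(mlir::OpBuilder &builder, mlir::Location loc,
                                 mlir::Value operand) {
  // Result type is the unranked tensor<*xf64> used throughout this chapter.
  auto resultType = mlir::UnrankedTensorType::get(builder.getF64Type());
  return builder.create<mlir::toy::TransposeOp>(loc, resultType, operand);
}
```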
One thing to notice here is that all of our Toy operations are printed using the generic assembly format. This format is the one shown when breaking down `toy.transpose` at the beginning of this chapter. MLIR allows for operations to define their own custom assembly format, either [declaratively](/docs/DefiningDialects/Operations/#declarative-assembly-format) or imperatively via C++. Defining a custom assembly format allows for tailoring the generated IR into something a bit more readable by removing a lot of the fluff that is required by the generic format. Let's walk through an example of an operation format that we would like to simplify.

##### `toy.print`

The current form of `toy.print` is a little verbose. There are a lot of additional characters that we would like to strip away. Let's begin by thinking of what a good format of `toy.print` would be, and see how we can implement it. Looking at the basics of `toy.print` we get:

```mlir
toy.print %5 : tensor<*xf64> loc(...)
```

Here we have stripped much of the format down to the bare essentials, and it has become much more readable. To provide a custom assembly format, an operation can either override the `hasCustomAssemblyFormat` field for a C++ format, or the `assemblyFormat` field for the declarative format. Let's look at the C++ variant first, as this is what the declarative format maps to internally.

```tablegen
/// Consider a stripped definition of `toy.print` here.
def PrintOp : Toy_Op<"print"> {
  let arguments = (ins F64Tensor:$input);

  // Divert the printer and parser to `parse` and `print` methods on our
  // operation, to be implemented in the .cpp file. More details on these
  // methods are shown below.
  let hasCustomAssemblyFormat = 1;
}
```
A C++ implementation for the printer and parser is shown below:

```c++
/// The 'OpAsmPrinter' class is a stream that allows for formatting
/// strings, attributes, operands, types, etc.
void PrintOp::print(mlir::OpAsmPrinter &printer) {
  // The op name ("toy.print") is printed by the framework; we only print the
  // operand, the optional attribute dictionary, and the operand type.
  printer << " " << getInput();
  printer.printOptionalAttrDict((*this)->getAttrs());
  printer << " : " << getInput().getType();
}

/// The 'OpAsmParser' class provides a collection of methods for parsing
/// various punctuation, as well as attributes, operands, types, etc. Each of
/// these methods returns a `ParseResult`. This class is a wrapper around
/// `LogicalResult` that can be converted to a boolean `true` value on failure,
/// or `false` on success. This allows for easily chaining together a set of
/// parser rules. These rules are used to populate an `mlir::OperationState`
/// similarly to the `build` methods described above.
mlir::ParseResult PrintOp::parse(mlir::OpAsmParser &parser,
                                 mlir::OperationState &result) {
  // Parse the input operand, the attribute dictionary, and the type of the
  // input.
  mlir::OpAsmParser::UnresolvedOperand inputOperand;
  mlir::Type inputType;
  if (parser.parseOperand(inputOperand) ||
      parser.parseOptionalAttrDict(result.attributes) || parser.parseColon() ||
      parser.parseType(inputType))
    return mlir::failure();

  // Resolve the input operand to the type we parsed in.
  if (parser.resolveOperand(inputOperand, inputType, result.operands))
    return mlir::failure();

  return mlir::success();
}
```
With the C++ implementation defined, let's see how this can be mapped to the [declarative format](/docs/DefiningDialects/Operations/#declarative-assembly-format). The declarative format is largely composed of three different components:

*   Directives
    *   A type of builtin function, with an optional set of arguments.
*   Literals
    *   A keyword or punctuation surrounded by backticks (`` ` ``).
*   Variables
    *   An entity that has been registered on the operation itself, i.e. an argument (attribute or operand), result, successor, etc. In the `PrintOp` example above, a variable would be `$input`.

A direct mapping of our C++ format looks something like:

```tablegen
/// Consider a stripped definition of `toy.print` here.
def PrintOp : Toy_Op<"print"> {
  let arguments = (ins F64Tensor:$input);

  // In the following format we have two directives, `attr-dict` and `type`.
  // These correspond to the attribute dictionary and the type of a given
  // variable, respectively.
  let assemblyFormat = "$input attr-dict `:` type($input)";
}
```

The [declarative format](/docs/DefiningDialects/Operations/#declarative-assembly-format) has many more interesting features, so be sure to check it out before implementing a custom format in C++.
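To see how the same machinery scales beyond `toy.print`, here is a hedged sketch of a declarative format that would produce the `toy.transpose(%arg0 : tensor<*xf64>) to tensor<*xf64>` syntax appearing in the listing below; the tutorial's actual `Ops.td` may phrase it slightly differently:

```tablegen
// Sketch only: a declarative format matching the
// `toy.transpose(<operand> : <type>) to <type>` form used below.
def TransposeOp : Toy_Op<"transpose"> {
  let arguments = (ins F64Tensor:$input);
  let results = (outs F64Tensor);

  // Literals appear in backticks; `type($input)` and `type(results)` are
  // directives that print/parse the operand and result types.
  let assemblyFormat = [{
    `(` $input `:` type($input) `)` attr-dict `to` type(results)
  }];
}
```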
After beautifying the format of a few of our operations, we now get a much more readable:

```mlir
module {
  toy.func @multiply_transpose(%arg0: tensor<*xf64>, %arg1: tensor<*xf64>) -> tensor<*xf64> {
    %0 = toy.transpose(%arg0 : tensor<*xf64>) to tensor<*xf64> loc("test/Examples/Toy/Ch2/codegen.toy":5:10)
    %1 = toy.transpose(%arg1 : tensor<*xf64>) to tensor<*xf64> loc("test/Examples/Toy/Ch2/codegen.toy":5:25)
    %2 = toy.mul %0, %1 : tensor<*xf64> loc("test/Examples/Toy/Ch2/codegen.toy":5:25)
    toy.return %2 : tensor<*xf64> loc("test/Examples/Toy/Ch2/codegen.toy":5:3)
  } loc("test/Examples/Toy/Ch2/codegen.toy":4:1)
  toy.func @main() {
    %0 = toy.constant dense<[[1.000000e+00, 2.000000e+00, 3.000000e+00], [4.000000e+00, 5.000000e+00, 6.000000e+00]]> : tensor<2x3xf64> loc("test/Examples/Toy/Ch2/codegen.toy":9:17)
    %1 = toy.reshape(%0 : tensor<2x3xf64>) to tensor<2x3xf64> loc("test/Examples/Toy/Ch2/codegen.toy":9:3)
    %2 = toy.constant dense<[1.000000e+00, 2.000000e+00, 3.000000e+00, 4.000000e+00, 5.000000e+00, 6.000000e+00]> : tensor<6xf64> loc("test/Examples/Toy/Ch2/codegen.toy":10:17)
    %3 = toy.reshape(%2 : tensor<6xf64>) to tensor<2x3xf64> loc("test/Examples/Toy/Ch2/codegen.toy":10:3)
    %4 = toy.generic_call @multiply_transpose(%1, %3) : (tensor<2x3xf64>, tensor<2x3xf64>) -> tensor<*xf64> loc("test/Examples/Toy/Ch2/codegen.toy":11:11)
    %5 = toy.generic_call @multiply_transpose(%3, %1) : (tensor<2x3xf64>, tensor<2x3xf64>) -> tensor<*xf64> loc("test/Examples/Toy/Ch2/codegen.toy":12:11)
    toy.print %5 : tensor<*xf64> loc("test/Examples/Toy/Ch2/codegen.toy":13:3)
    toy.return loc("test/Examples/Toy/Ch2/codegen.toy":8:1)
  } loc("test/Examples/Toy/Ch2/codegen.toy":8:1)
} loc(unknown)
```

Above we introduce several of the concepts for defining operations in the ODS framework, but there are many more that we haven't had a chance to cover: regions, variadic operands, etc. Check out the [full specification](/docs/DefiningDialects/Operations/) for more details.

## Complete Toy Example

We can now generate our "Toy IR". You can build `toyc-ch2` and try it yourself on the above example: `toyc-ch2 test/Examples/Toy/Ch2/codegen.toy -emit=mlir -mlir-print-debuginfo`. We can also check the round trip: `toyc-ch2 test/Examples/Toy/Ch2/codegen.toy -emit=mlir -mlir-print-debuginfo 2> codegen.mlir` followed by `toyc-ch2 codegen.mlir -emit=mlir`. You should also use `mlir-tblgen` on the final definition file and study the generated C++ code.
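For instance, invocations along these lines dump the generated op declarations and definitions (the paths are placeholders that depend on your llvm-project checkout and where your Toy `Ops.td` lives):

```sh
# Point -I at the MLIR include directory so OpBase.td and friends resolve;
# adjust both paths to your own layout.
mlir-tblgen -gen-op-decls toy/Ops.td -I /path/to/llvm-project/mlir/include
mlir-tblgen -gen-op-defs  toy/Ops.td -I /path/to/llvm-project/mlir/include
```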
At this point, MLIR knows about our Toy dialect and operations. In the [next chapter](/docs/Tutorials/Toy/Ch-3/), we will leverage our new dialect to implement some high-level language-specific analyses and transformations for the Toy language.