# 'sparse_tensor' Dialect

The `SparseTensor` dialect supports all the attributes, types, operations, and passes that are required to make sparse tensor types first-class citizens within the MLIR compiler infrastructure. The dialect forms a bridge between high-level operations on sparse tensor types and lower-level operations on the actual sparse storage schemes consisting of positions, coordinates, and values. Lower-level support may consist of fully generated code or may be provided by means of a small sparse runtime support library.
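For intuition, here is a hedged illustration (not normative tool output) of that storage scheme: under a CSR-style encoding, where level 0 is dense and level 1 is compressed, a small 3x4 matrix with three stored entries decomposes into exactly these three arrays:

```
|1.1, 0.0, 0.0, 0.0|      positions[1]   = [0, 1, 3, 3]   // row i owns entries pos[i] .. pos[i+1]-1
|0.0, 0.0, 2.2, 3.3|  =>  coordinates[1] = [0, 2, 3]      // column coordinate of each stored entry
|0.0, 0.0, 0.0, 0.0|      values         = [1.1, 2.2, 3.3]
```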
The concept of **treating sparsity as a property, not a tedious implementation detail**, by letting a **sparsifier** generate sparse code automatically, was pioneered for linear algebra by [Bik96] in MT1 (see https://www.aartbik.com/sparse.php) and formalized for tensor algebra by [Kjolstad17, Kjolstad20] in the Sparse Tensor Algebra Compiler (TACO) project (see http://tensor-compiler.org). Please note that we prefer the term "sparsifier" over the also commonly used "sparse compiler" to make it clear that the sparsifier pass is not a separate compiler, but an integral part of any compiler pipeline that is built with the MLIR compiler infrastructure.

The MLIR implementation [Biketal22] closely follows the "sparse iteration theory" that forms the foundation of TACO. A rewriting rule is applied to each tensor expression in the Linalg dialect (MLIR's tensor index notation), where the sparsity of tensors is indicated using per-level level-types (e.g., dense, compressed, singleton) together with a specification of the order on the levels (see [Chou18] for an in-depth discussion and possible extensions of these level-types). Subsequently, a topologically sorted iteration graph, reflecting the required order on coordinates with respect to the levels of each tensor, is constructed to ensure that all tensors are visited in natural level-coordinate order. Next, iteration lattices are constructed for the tensor expression for every index in topological order. Each iteration lattice point consists of a conjunction of tensor coordinates together with a tensor (sub)expression that needs to be evaluated for that conjunction. Within the lattice, iteration points are ordered according to the way coordinates are exhausted. As such, these iteration lattices drive actual sparse code generation, which consists of a relatively straightforward one-to-one mapping from iteration lattices to combinations of for-loops, while-loops, and if-statements. Sparse tensor outputs that materialize uninitialized are handled with direct insertions if all parallel loops are outermost, or with insertions that indirectly go through a 1-dimensional access pattern expansion (a.k.a. workspace) where feasible [Gustavson72, Bik96, Kjolstad19].
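As a rough sketch of what this means in practice (the names `#CSR`, `#trait`, and `@scale` below are illustrative, not taken from this reference): all the sparsifier consumes is a plain Linalg kernel whose tensor types carry a sparse encoding; the loops, tests, and storage accesses are then generated from the iteration graph and lattices described above.

```mlir
// CSR: level 0 dense, level 1 compressed.
#CSR = #sparse_tensor.encoding<{
  map = (i, j) -> (i : dense, j : compressed)
}>

#trait = {
  indexing_maps = [
    affine_map<(i, j) -> (i, j)>,  // A (in)
    affine_map<(i, j) -> (i, j)>   // X (out)
  ],
  iterator_types = ["parallel", "parallel"]
}

// X(i,j) = A(i,j) * 2.0 with a sparse A; the sparsifier turns this
// expression into loops over the stored entries of A only.
func.func @scale(%a: tensor<?x?xf64, #CSR>,
                 %x: tensor<?x?xf64>) -> tensor<?x?xf64> {
  %c = arith.constant 2.0 : f64
  %0 = linalg.generic #trait
    ins(%a: tensor<?x?xf64, #CSR>)
    outs(%x: tensor<?x?xf64>) {
    ^bb0(%va: f64, %vx: f64):
      %m = arith.mulf %va, %c : f64
      linalg.yield %m : f64
  } -> tensor<?x?xf64>
  return %0 : tensor<?x?xf64>
}
```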
- [Bik96] Aart J.C. Bik. Compiler Support for Sparse Matrix Computations. PhD thesis, Leiden University, May 1996.
- [Biketal22] Aart J.C. Bik, Penporn Koanantakool, Tatiana Shpeisman, Nicolas Vasilache, Bixia Zheng, and Fredrik Kjolstad. Compiler Support for Sparse Tensor Computations in MLIR. ACM Transactions on Architecture and Code Optimization, June 2022. See: https://dl.acm.org/doi/10.1145/3544559
- [Chou18] Stephen Chou, Fredrik Berg Kjolstad, and Saman Amarasinghe. Format Abstraction for Sparse Tensor Algebra Compilers. Proceedings of the ACM on Programming Languages, October 2018.
- [Chou20] Stephen Chou, Fredrik Berg Kjolstad, and Saman Amarasinghe. Automatic Generation of Efficient Sparse Tensor Format Conversion Routines. Proceedings of the 41st ACM SIGPLAN Conference on Programming Language Design and Implementation, June 2020.
- [Gustavson72] Fred G. Gustavson. Some basic techniques for solving sparse systems of linear equations. In Sparse Matrices and Their Applications, pages 41-52. Plenum Press, New York, 1972.
- [Kjolstad17] Fredrik Berg Kjolstad, Shoaib Ashraf Kamil, Stephen Chou, David Lugato, and Saman Amarasinghe. The Tensor Algebra Compiler. Proceedings of the ACM on Programming Languages, October 2017.
- [Kjolstad19] Fredrik Berg Kjolstad, Peter Ahrens, Shoaib Ashraf Kamil, and Saman Amarasinghe. Tensor Algebra Compilation with Workspaces. Proceedings of the IEEE/ACM International Symposium on Code Generation and Optimization, 2019.
- [Kjolstad20] Fredrik Berg Kjolstad. Sparse Tensor Algebra Compilation. PhD thesis, MIT, February 2020.

## Operations

[source](https://github.com/llvm/llvm-project/blob/main/mlir/include/mlir/Dialect/SparseTensor/IR/SparseTensorOps.td)
### `sparse_tensor.assemble` (sparse_tensor::AssembleOp)

*Returns a sparse tensor assembled from the given levels and values*

Syntax:

```
operation ::= `sparse_tensor.assemble` ` ` `(` $levels `)` `,` $values attr-dict
              `:` `(` type($levels) `)` `,` type($values) `to` type($result)
```

Assembles the per-level position and coordinate arrays together with the values arrays into a sparse tensor. The order and types of the provided levels must be consistent with the actual storage layout of the returned sparse tensor, described below.

- `levels: [tensor<? x iType>, ...]` supplies the sparse tensor position and coordinate arrays of the sparse tensor for the corresponding level, as specified by `sparse_tensor::StorageLayout`.
- `values : tensor<? x V>` supplies the values array for the stored elements in the sparse tensor.

This operation can be used to assemble a sparse tensor from an external source; e.g., by passing numpy arrays from Python. It is the user's responsibility to provide input that can be correctly interpreted by the sparsifier, which does not perform any sanity test to verify data integrity.

Example:

```mlir
%pos    = arith.constant dense<[0, 3]> : tensor<2xindex>
%index  = arith.constant dense<[[0,0], [1,2], [1,3]]> : tensor<3x2xindex>
%values = arith.constant dense<[ 1.1, 2.2, 3.3 ]> : tensor<3xf64>
%s = sparse_tensor.assemble (%pos, %index), %values
   : (tensor<2xindex>, tensor<3x2xindex>), tensor<3xf64> to tensor<3x4xf64, #COO>
// yields COO format |1.1, 0.0, 0.0, 0.0|
//     of 3x4 matrix |0.0, 0.0, 2.2, 3.3|
//                   |0.0, 0.0, 0.0, 0.0|
```

Traits: `AlwaysSpeculatableImplTrait`

Interfaces: `ConditionallySpeculatable`, `NoMemoryEffect (MemoryEffectOpInterface)`

Effects: `MemoryEffects::Effect{}`

#### Operands:

| Operand | Description |
| :-----: | ----------- |
| `levels` | variadic of ranked tensor of signless integer or index values |
| `values` | ranked tensor of any type values |

#### Results:

| Result | Description |
| :----: | ----------- |
| `result` | sparse tensor of any type values |
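To illustrate the `StorageLayout` ordering further, here is a hedged sketch (the `#CSR` encoding and the constants are illustrative, not taken from this reference) that assembles the same 3x4 matrix in CSR form, where the compressed level 1 contributes its positions array followed by its coordinates array:

```mlir
%pos    = arith.constant dense<[0, 1, 3, 3]>    : tensor<4xindex>  // row i spans pos[i]..pos[i+1]-1
%crd    = arith.constant dense<[0, 2, 3]>       : tensor<3xindex>  // column coordinates
%values = arith.constant dense<[1.1, 2.2, 3.3]> : tensor<3xf64>
%csr = sparse_tensor.assemble (%pos, %crd), %values
     : (tensor<4xindex>, tensor<3xindex>), tensor<3xf64> to tensor<3x4xf64, #CSR>
```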
### `sparse_tensor.binary` (sparse_tensor::BinaryOp)

*Binary set operation utilized within linalg.generic*

Syntax:

```
operation ::= `sparse_tensor.binary` $x `,` $y `:` attr-dict type($x) `,` type($y) `to` type($output) `\n`
              `overlap` `=` $overlapRegion `\n`
              `left` `=` (`identity` $left_identity^):($leftRegion)? `\n`
              `right` `=` (`identity` $right_identity^):($rightRegion)?
```

Defines a computation within a `linalg.generic` operation that takes two operands and executes one of the regions depending on whether both operands or either operand is nonzero (i.e. stored explicitly in the sparse storage format).

Three regions are defined for the operation and must appear in this order:

- overlap (elements present in both sparse tensors)
- left (elements only present in the left sparse tensor)
- right (elements only present in the right sparse tensor)

Each region contains a single block describing the computation and result. Every non-empty block must end with a `sparse_tensor.yield`, and the return type must match the type of `output`. The primary (overlap) region's block has two arguments, while the left and right regions' blocks have only one argument each.

A region may also be declared empty (i.e. `left={}`), indicating that the region does not contribute to the output. For example, setting both `left={}` and `right={}` is equivalent to the intersection of the two inputs, as only the overlap region will contribute values to the output.

As a convenience, there is also a special token `identity` which can be used in place of the left or right region. This token indicates that the return value is the input value (i.e. func(%x) => return %x). As a practical example, setting `left=identity` and `right=identity` would be equivalent to a union operation where non-overlapping values in the inputs are copied to the output unchanged.

Due to the possibility of empty regions, i.e. the lack of a value for certain cases, the result of this operation may only feed directly into the output of the `linalg.generic` operation or into a custom reduction `sparse_tensor.reduce` operation that follows in the same region.
Example of isEqual applied to intersecting elements only:

```mlir
%C = tensor.empty(...)
%0 = linalg.generic #trait
  ins(%A: tensor<?xf64, #SparseVector>,
      %B: tensor<?xf64, #SparseVector>)
  outs(%C: tensor<?xi8, #SparseVector>) {
  ^bb0(%a: f64, %b: f64, %c: i8) :
    %result = sparse_tensor.binary %a, %b : f64, f64 to i8
      overlap={
        ^bb0(%arg0: f64, %arg1: f64):
          %cmp = arith.cmpf "oeq", %arg0, %arg1 : f64
          %ret_i8 = arith.extui %cmp : i1 to i8
          sparse_tensor.yield %ret_i8 : i8
      }
      left={}
      right={}
    linalg.yield %result : i8
} -> tensor<?xi8, #SparseVector>
```
Example of A+B in upper triangle, A-B in lower triangle:

```mlir
%C = tensor.empty(...)
%1 = linalg.generic #trait
  ins(%A: tensor<?x?xf64, #CSR>, %B: tensor<?x?xf64, #CSR>)
  outs(%C: tensor<?x?xf64, #CSR>) {
  ^bb0(%a: f64, %b: f64, %c: f64) :
    %row = linalg.index 0 : index
    %col = linalg.index 1 : index
    %result = sparse_tensor.binary %a, %b : f64, f64 to f64
      overlap={
        ^bb0(%x: f64, %y: f64):
          %cmp = arith.cmpi "uge", %col, %row : index
          %upperTriangleResult = arith.addf %x, %y : f64
          %lowerTriangleResult = arith.subf %x, %y : f64
          %ret = arith.select %cmp, %upperTriangleResult, %lowerTriangleResult : f64
          sparse_tensor.yield %ret : f64
      }
      left=identity
      right={
        ^bb0(%y: f64):
          %cmp = arith.cmpi "uge", %col, %row : index
          %lowerTriangleResult = arith.negf %y : f64
          %ret = arith.select %cmp, %y, %lowerTriangleResult : f64
          sparse_tensor.yield %ret : f64
      }
    linalg.yield %result : f64
} -> tensor<?x?xf64, #CSR>
```
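The union case mentioned above has no dedicated example here; as a hedged sketch (placed inside a `linalg.generic` body exactly like the examples on this page), a sum-union of two f64 operands could look like:

```mlir
%result = sparse_tensor.binary %a, %b : f64, f64 to f64
  overlap={
    ^bb0(%x: f64, %y: f64):
      %sum = arith.addf %x, %y : f64   // both stored: combine
      sparse_tensor.yield %sum : f64
  }
  left=identity    // only A stored: copy through unchanged
  right=identity   // only B stored: copy through unchanged
```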
Example of set difference. Returns a copy of A where its sparse structure is *not* overlapped by B. The element type of B can be different than that of A because we never use its values, only its sparse structure:

```mlir
%C = tensor.empty(...)
%2 = linalg.generic #trait
  ins(%A: tensor<?x?xf64, #CSR>, %B: tensor<?x?xi32, #CSR>)
  outs(%C: tensor<?x?xf64, #CSR>) {
  ^bb0(%a: f64, %b: i32, %c: f64) :
    %result = sparse_tensor.binary %a, %b : f64, i32 to f64
      overlap={}
      left=identity
      right={}
    linalg.yield %result : f64
} -> tensor<?x?xf64, #CSR>
```

Traits: `AlwaysSpeculatableImplTrait`

Interfaces: `ConditionallySpeculatable`, `NoMemoryEffect (MemoryEffectOpInterface)`

Effects: `MemoryEffects::Effect{}`

#### Attributes:

| Attribute | MLIR Type | Description |
| --------- | --------- | ----------- |
| `left_identity` | ::mlir::UnitAttr | unit attribute |
| `right_identity` | ::mlir::UnitAttr | unit attribute |
#### Operands:

| Operand | Description |
| :-----: | ----------- |
| `x` | any type |
| `y` | any type |

#### Results:

| Result | Description |
| :----: | ----------- |
| `output` | any type |

### `sparse_tensor.coiterate` (sparse_tensor::CoIterateOp)

*Co-iterates over a set of sparse iteration spaces*

The `sparse_tensor.coiterate` operation represents a loop (nest) over a set of iteration spaces. The operation can have multiple regions, with each of them defining a case to compute a result at the current iteration. The case condition is defined solely based on the pattern of specified iterators. For example:

```mlir
%ret = sparse_tensor.coiterate (%sp1, %sp2) at(%coord) iter_args(%arg = %init)
     : (!sparse_tensor.iter_space<#CSR, lvls = 0>,
        !sparse_tensor.iter_space<#COO, lvls = 0>)
     -> index
case %it1, _ {
  // %coord is specified in space %sp1 but *NOT* specified in space %sp2.
}
case %it1, %it2 {
  // %coord is specified in *BOTH* spaces %sp1 and %sp2.
}
```
`sparse_tensor.coiterate` can also operate on loop-carried variables. It returns the final value for each loop-carried variable after loop termination. The initial values of the variables are passed as additional SSA operands, after the iteration-space operands and the used coordinate values. Each case region has variadic block arguments for the specified (used) coordinates, followed by one argument for each loop-carried variable, representing the value of the variable at the current iteration, followed by a list of arguments for the iterators. Each region must contain exactly one block that terminates with `sparse_tensor.yield`.

The results of a `sparse_tensor.coiterate` hold the final values after the last iteration. If the `sparse_tensor.coiterate` defines any values, a yield must be explicitly present in every region defined in the operation. The number and types of the `sparse_tensor.coiterate` results must match the initial values in the `iter_args` binding and the yield operands.

A `sparse_tensor.coiterate` example that does elementwise addition between two sparse vectors:

```mlir
%ret = sparse_tensor.coiterate (%sp1, %sp2) at(%coord) iter_args(%arg = %init)
     : (!sparse_tensor.iter_space<#CSR, lvls = 0>,
        !sparse_tensor.iter_space<#CSR, lvls = 0>)
     -> tensor<?xindex, #CSR>
case %it1, _ {
  // v = v1 + 0 = v1
  %v1 = sparse_tensor.extract_value %t1 at %it1 : index
  %yield = sparse_tensor.insert %v1 into %arg[%coord]
  sparse_tensor.yield %yield
}
case _, %it2 {
  // v = v2 + 0 = v2
  %v2 = sparse_tensor.extract_value %t2 at %it2 : index
  %yield = sparse_tensor.insert %v2 into %arg[%coord]
  sparse_tensor.yield %yield
}
case %it1, %it2 {
  // v = v1 + v2
  %v1 = sparse_tensor.extract_value %t1 at %it1 : index
  %v2 = sparse_tensor.extract_value %t2 at %it2 : index
  %v = arith.addi %v1, %v2 : index
  %yield = sparse_tensor.insert %v into %arg[%coord]
  sparse_tensor.yield %yield
}
```
Traits: `AttrSizedOperandSegments`, `RecursiveMemoryEffects`, `SingleBlockImplicitTerminator<sparse_tensor::YieldOp>`, `SingleBlock`

#### Attributes:

| Attribute | MLIR Type | Description |
| --------- | --------- | ----------- |
| `crdUsedLvls` | ::mlir::IntegerAttr | LevelSet attribute |
| `cases` | ::mlir::ArrayAttr | I64BitSet array attribute |

#### Operands:

| Operand | Description |
| :-----: | ----------- |
| `iterSpaces` | variadic of sparse iteration space |
| `initArgs` | variadic of any type |

#### Results:

| Result | Description |
| :----: | ----------- |
| `results` | variadic of any type |
### `sparse_tensor.compress` (sparse_tensor::CompressOp)

*Compresses an access pattern for insertion*

Syntax:

```
operation ::= `sparse_tensor.compress` $values `,` $filled `,` $added `,` $count
              `into` $tensor `[` $lvlCoords `]` attr-dict
              `:` type($values) `,` type($filled) `,` type($added) `,` type($tensor)
```

Finishes a single access pattern expansion by moving inserted elements into the sparse storage scheme of the given tensor with the given level-coordinates. The arity of `lvlCoords` is one less than the level-rank of the tensor, with the coordinate of the innermost level defined through the `added` array. The `values` and `filled` arrays are reset in a *sparse* fashion by only iterating over set elements through an indirection using the `added` array, so that the operations are kept proportional to the number of nonzeros. See the `sparse_tensor.expand` operation for more details.

Note that this operation is "impure" in the sense that even though the result is modeled through an SSA value, the insertion is eventually done "in place", and referencing the old SSA value is undefined behavior.

Example:

```mlir
%result = sparse_tensor.compress %values, %filled, %added, %count into %tensor[%i]
  : memref<?xf64>, memref<?xi1>, memref<?xindex>, tensor<4x4xf64, #CSR>
```

Interfaces: `InferTypeOpInterface`

#### Operands:

| Operand | Description |
| :-----: | ----------- |
| `values` | strided memref of any type values of rank 1 |
| `filled` | 1D memref of 1-bit signless integer values |
| `added` | 1D memref of index values |
| `count` | index |
| `tensor` | sparse tensor of any type values |
| `lvlCoords` | variadic of index |

#### Results:

| Result | Description |
| :----: | ----------- |
| `result` | sparse tensor of any type values |
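To make the pairing with `sparse_tensor.expand` concrete, here is a hedged sketch of one insertion cycle; the `sparse_tensor.expand` form shown is assumed from that operation's own documentation, and the scatter step (plus the updated count `%count1` it would produce) is elided:

```mlir
// Acquire the workspace buffers for one access pattern (e.g., one row).
%values, %filled, %added, %count =
    sparse_tensor.expand %tensor
  : tensor<4x4xf64, #CSR> to memref<?xf64>, memref<?xi1>, memref<?xindex>
// ... scatter new elements into %values/%filled, record their coordinates in
//     %added, and thread an updated element count %count1 through the loop ...
// Flush the expanded pattern back into sparse storage for row %i.
%result = sparse_tensor.compress %values, %filled, %added, %count1 into %tensor[%i]
  : memref<?xf64>, memref<?xi1>, memref<?xindex>, tensor<4x4xf64, #CSR>
```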
### `sparse_tensor.concatenate` (sparse_tensor::ConcatenateOp)

*Concatenates a list of tensors into a single tensor.*

Syntax:

```
operation ::= `sparse_tensor.concatenate` $inputs attr-dict `:` type($inputs) `to` type($result)
```

Concatenates a list of input tensors into a single output tensor; all inputs and the output must have the same dimension-rank. The concatenation happens on the specified `dimension` (0 <= dimension < dimRank). The size of the result in that dimension is the sum of all the input sizes for that dimension, while all the other dimensions must have the same size in the input and output tensors.

Only statically-sized input tensors are accepted, while the output tensor can be dynamically-sized.

Example:

```mlir
%0 = sparse_tensor.concatenate %1, %2 { dimension = 0 : index }
   : tensor<64x64xf64, #CSR>, tensor<64x64xf64, #CSR> to tensor<128x64xf64, #CSR>
```

Traits: `AlwaysSpeculatableImplTrait`

Interfaces: `ConditionallySpeculatable`, `NoMemoryEffect (MemoryEffectOpInterface)`, `StageWithSortSparseOpInterface`

Effects: `MemoryEffects::Effect{}`

#### Attributes:

| Attribute | MLIR Type | Description |
| --------- | --------- | ----------- |
| `dimension` | ::mlir::IntegerAttr | dimension attribute |

#### Operands:

| Operand | Description |
| :-----: | ----------- |
| `inputs` | variadic of ranked tensor of any type values |

#### Results:

| Result | Description |
| :----: | ----------- |
| `result` | ranked tensor of any type values |

### `sparse_tensor.convert` (sparse_tensor::ConvertOp)

*Converts between different tensor types*

Syntax:

```
operation ::= `sparse_tensor.convert` $source attr-dict `:` type($source) `to` type($dest)
```
Converts one sparse or dense tensor type to another tensor type. The rank of the source and destination types must match exactly, and the dimension sizes must either match exactly or relax from a static to a dynamic size. The sparse encodings of the two types can obviously be completely different. The name `convert` was preferred over `cast`, since the operation may incur a non-trivial cost.

When converting between two different sparse tensor types, only explicitly stored values are moved from one underlying sparse storage format to the other. When converting from an unannotated dense tensor type to a sparse tensor type, an explicit test for nonzero values is used. When converting to an unannotated dense tensor type, implicit zeroes in the sparse storage format are made explicit. Note that the conversions can have non-trivial costs associated with them, since they may involve elaborate data structure transformations. Also, conversions from sparse tensor types into dense tensor types may be infeasible in terms of storage requirements.

Trivial dense-to-dense converts will be removed by canonicalization, while trivial sparse-to-sparse converts will be removed by the sparse codegen. This is because we use trivial sparse-to-sparse converts to tell bufferization that the sparse codegen will expand the tensor buffer into sparse tensor storage.

Examples:

```mlir
%0 = sparse_tensor.convert %a : tensor<32x32xf32> to tensor<32x32xf32, #CSR>
%1 = sparse_tensor.convert %a : tensor<32x32xf32> to tensor<?x?xf32, #CSR>
%2 = sparse_tensor.convert %b : tensor<8x8xi32, #CSC> to tensor<8x8xi32, #CSR>
%3 = sparse_tensor.convert %c : tensor<4x8xf64, #CSR> to tensor<4x?xf64, #CSC>

// The following conversion is not allowed (since it would require a
// runtime assertion that the source's dimension size is actually 100).
%4 = sparse_tensor.convert %d : tensor<?xf64> to tensor<100xf64, #SV>
```
Traits: `AlwaysSpeculatableImplTrait`

Interfaces: `ConditionallySpeculatable`, `NoMemoryEffect (MemoryEffectOpInterface)`, `StageWithSortSparseOpInterface`

Effects: `MemoryEffects::Effect{}`

#### Operands:

| Operand | Description |
| :-----: | ----------- |
| `source` | ranked tensor of any type values |

#### Results:

| Result | Description |
| :----: | ----------- |
| `dest` | ranked tensor of any type values |

### `sparse_tensor.coordinates` (sparse_tensor::ToCoordinatesOp)

*Extracts the `level`-th coordinates array of the `tensor`*

Syntax:

```
operation ::= `sparse_tensor.coordinates` $tensor attr-dict `:` type($tensor) `to` type($result)
```

Returns the coordinates array of the tensor's storage at the given level. This is similar to the `bufferization.to_memref` operation in the sense that it provides a bridge between a tensor world view and a bufferized world view. Unlike the `bufferization.to_memref` operation, however, this sparse operation actually lowers into code that extracts the coordinates array from the sparse storage itself (either by calling a support library or through direct code).

Writing into the result of this operation is undefined behavior.
Unlike the <code>bufferization.to_memref</code> operation, however, this sparse operation actually lowers into code that extracts the coordinates array from the sparse storage itself (either by calling a support library or through direct code).</p><p>Writing into the result of this operation is undefined behavior.</p><p>Example:</p><div class=highlight><pre tabindex=0 class=chroma><code class=language-mlir data-lang=mlir><span class=line><span class=cl><span class=nv>%1</span> <span class=p>=</span> sparse_tensor<span class=p>.</span>coordinates <span class=nv>%0</span> <span class=p>{</span> <span class=nl>level =</span> <span class=m>1</span> <span class=p>:</span> <span class=k>index</span> <span class=p>}</span> </span></span><span class=line><span class=cl> <span class=p>:</span> <span class=kt>tensor</span><span class=p><</span><span class=m>64x64x</span><span class=k>f64</span><span class=p>,</span> <span class=nv>#CSR</span><span class=p>></span> to <span class=kt>memref</span><span class=p><</span><span class=m>?x</span><span class=k>index</span><span class=p>></span> </span></span></code></pre></div><p>Traits: <code>AlwaysSpeculatableImplTrait</code></p><p>Interfaces: <code>ConditionallySpeculatable</code>, <code>InferTypeOpInterface</code>, <code>NoMemoryEffect (MemoryEffectOpInterface)</code></p><p>Effects: <code>MemoryEffects::Effect{}</code></p><h4 id=attributes-3>Attributes: <a class=headline-hash href=#attributes-3>¶</a></h4><table><tr><th>Attribute</th><th>MLIR Type</th><th>Description</th></tr><tr><td><code>level</code></td><td>::mlir::IntegerAttr</td><td>level attribute</td></tr></table><h4 id=operands-6>Operands: <a class=headline-hash href=#operands-6>¶</a></h4><table><thead><tr><th style=text-align:center>Operand</th><th>Description</th></tr></thead><tbody><tr><td style=text-align:center><code>tensor</code></td><td>sparse tensor of any type values</td></tr></tbody></table><h4 id=results-6>Results: <a class=headline-hash href=#results-6>¶</a></h4><table><thead><tr><th style=text-align:center>Result</th><th>Description</th></tr></thead><tbody><tr><td style=text-align:center><code>result</code></td><td>non-0-ranked.memref of any type values</td></tr></tbody></table><h3 id=sparse_tensorcoordinates_buffer-sparse_tensortocoordinatesbufferop><code>sparse_tensor.coordinates_buffer</code> (sparse_tensor::ToCoordinatesBufferOp) <a class=headline-hash href=#sparse_tensorcoordinates_buffer-sparse_tensortocoordinatesbufferop>¶</a></h3><p><em>Extracts the linear coordinates array from a tensor</em></p><p>Syntax:</p><pre tabindex=0><code>operation ::= `sparse_tensor.coordinates_buffer` $tensor attr-dict `:` type($tensor) `to` type($result) </code></pre><p>Returns the linear coordinates array for a sparse tensor with a trailing COO region with at least two levels. It is an error if the tensor doesn’t contain such a COO region. This is similar to the <code>bufferization.to_memref</code> operation in the sense that it provides a bridge between a tensor world view and a bufferized world view. Unlike the <code>bufferization.to_memref</code> operation, however, this operation actually lowers into code that extracts the linear coordinates array from the sparse storage scheme that stores the coordinates for the COO region as an array of structures. 
For example, a 2D COO sparse tensor with two non-zero elements at coordinates (1, 3) and (4, 6) is stored in a single linear buffer as (1, 3, 4, 6), instead of in two buffers as (1, 4) and (3, 6).</p><p>Writing into the result of this operation is undefined behavior.</p><p>Example:</p><div class=highlight><pre tabindex=0 class=chroma><code class=language-mlir data-lang=mlir><span class=line><span class=cl><span class=nv>%1</span> <span class=p>=</span> sparse_tensor<span class=p>.</span>coordinates_buffer <span class=nv>%0</span> </span></span><span class=line><span class=cl> <span class=p>:</span> <span class=kt>tensor</span><span class=p><</span><span class=m>64x64x</span><span class=k>f64</span><span class=p>,</span> <span class=nv>#COO</span><span class=p>></span> to <span class=kt>memref</span><span class=p><</span><span class=m>?x</span><span class=k>index</span><span class=p>></span> </span></span></code></pre></div><p>Traits: <code>AlwaysSpeculatableImplTrait</code></p><p>Interfaces: <code>ConditionallySpeculatable</code>, <code>InferTypeOpInterface</code>, <code>NoMemoryEffect (MemoryEffectOpInterface)</code></p><p>Effects: <code>MemoryEffects::Effect{}</code></p><h4 id=operands-7>Operands: <a class=headline-hash href=#operands-7>¶</a></h4><table><thead><tr><th style=text-align:center>Operand</th><th>Description</th></tr></thead><tbody><tr><td style=text-align:center><code>tensor</code></td><td>sparse tensor of any type values</td></tr></tbody></table><h4 id=results-7>Results: <a class=headline-hash href=#results-7>¶</a></h4><table><thead><tr><th style=text-align:center>Result</th><th>Description</th></tr></thead><tbody><tr><td style=text-align:center><code>result</code></td><td>non-0-ranked.memref of any type values</td></tr></tbody></table><h3 id=sparse_tensorcrd_translate-sparse_tensorcrdtranslateop><code>sparse_tensor.crd_translate</code> (sparse_tensor::CrdTranslateOp) <a class=headline-hash href=#sparse_tensorcrd_translate-sparse_tensorcrdtranslateop>¶</a></h3><p><em>Performs coordinate translation between level and dimension coordinate space.</em></p><p>Syntax:</p><pre tabindex=0><code>operation ::= `sparse_tensor.crd_translate` $direction `[` $in_crds `]` `as` $encoder attr-dict `:` type($out_crds) </code></pre><p>Performs coordinate translation between level and dimension coordinate space according to the affine maps defined by $encoder.</p><p>Example:</p><div class=highlight><pre tabindex=0 class=chroma><code class=language-mlir data-lang=mlir><span class=line><span class=cl><span class=nv>%l0</span><span class=p>,</span> <span class=nv>%l1</span><span class=p>,</span> <span class=nv>%l2</span><span class=p>,</span> <span class=nv>%l3</span> <span class=p>=</span> sparse_tensor<span class=p>.</span>crd_translate dim_to_lvl <span class=p>[</span><span class=nv>%d0</span><span class=p>,</span> <span class=nv>%d1</span><span class=p>]</span> as <span class=nv>#BSR</span> </span></span><span class=line><span class=cl> <span class=p>:</span> <span class=k>index</span><span class=p>,</span> <span class=k>index</span><span class=p>,</span> <span class=k>index</span><span class=p>,</span> <span class=k>index</span> </span></span></code></pre></div>
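<p>The inverse <code>lvl_to_dim</code> direction maps level-coordinates back to dimension-coordinates; for the same 2x3-blocked <code>#BSR</code> encoding, the four level-coordinates translate back to two dimension-coordinates (a sketch following the syntax above, with illustrative SSA names):</p><div class=highlight><pre tabindex=0 class=chroma><code class=language-mlir data-lang=mlir>%d0, %d1 = sparse_tensor.crd_translate lvl_to_dim [%l0, %l1, %l2, %l3] as #BSR
           : index, index
</code></pre></div>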
<p>Traits: <code>AlwaysSpeculatableImplTrait</code></p><p>Interfaces: <code>ConditionallySpeculatable</code>, <code>NoMemoryEffect (MemoryEffectOpInterface)</code></p><p>Effects: <code>MemoryEffects::Effect{}</code></p><h4 id=attributes-4>Attributes: <a class=headline-hash href=#attributes-4>¶</a></h4><table><tr><th>Attribute</th><th>MLIR Type</th><th>Description</th></tr><tr><td><code>direction</code></td><td>::mlir::sparse_tensor::CrdTransDirectionKindAttr</td><td><details><summary>sparse tensor coordinate translation direction</summary><p>Enum cases:</p><ul><li>dim_to_lvl (<code>dim2lvl</code>)</li><li>lvl_to_dim (<code>lvl2dim</code>)</li></ul></details></td></tr><tr><td><code>encoder</code></td><td>::mlir::sparse_tensor::SparseTensorEncodingAttr</td><td><details><summary></summary><pre><code>An attribute to encode information on sparsity properties of tensors, inspired by the TACO formalization of sparse tensors. This encoding is eventually used by a <strong>sparsifier</strong> pass to generate sparse code fully automatically from a sparsity-agnostic representation of the computation, i.e., an implicit sparse representation is converted to an explicit sparse representation where co-iterating loops operate on sparse storage formats rather than on tensors with a sparsity encoding. Compiler passes that run before this sparsifier pass need to be aware of the semantics of tensor types with such a sparsity encoding. <p>In this encoding, we use <strong>dimension</strong> to refer to the axes of the semantic tensor, and <strong>level</strong> to refer to the axes of the actual storage format, i.e., the operational representation of the sparse tensor in memory. The number of dimensions is usually the same as the number of levels (as in the CSR storage format). However, the encoding can also map dimensions to higher-order levels (for example, to encode a block-sparse BSR storage format) or to lower-order levels (for example, to linearize dimensions as a single level in the storage).</p> <p>The encoding contains a map that provides the following:</p> <ul> <li>An ordered sequence of dimension specifications, each of which defines: <ul> <li>the dimension-size (implicit from the tensor’s dimension-shape)</li> <li>a <strong>dimension-expression</strong></li> </ul> </li> <li>An ordered sequence of level specifications, each of which includes a required <strong>level-type</strong>, which defines how the level should be stored. Each level-type consists of: <ul> <li>a <strong>level-expression</strong>, which defines what is stored</li> <li>a <strong>level-format</strong></li> <li>a collection of <strong>level-properties</strong> that apply to the level-format</li> </ul> </li> </ul> <p>Each level-expression is an affine expression over dimension-variables. Thus, the level-expressions collectively define an affine map from dimension-coordinates to level-coordinates. The dimension-expressions collectively define the inverse map, which only needs to be provided for elaborate cases where it cannot be inferred automatically.</p> <p>Each dimension may also have an optional <code>SparseTensorDimSliceAttr</code>. 
Within the sparse storage format, we refer to indices that are stored explicitly as <strong>coordinates</strong> and offsets into the storage format as <strong>positions</strong>.</p> <p>The supported level-formats are the following:</p> <ul> <li><strong>dense</strong> : all entries along this level are stored and linearized.</li> <li><strong>batch</strong> : all entries along this level are stored but not linearized.</li> <li><strong>compressed</strong> : only nonzeros along this level are stored.</li> <li><strong>loose_compressed</strong> : as compressed, but allows for free space between regions.</li> <li><strong>singleton</strong> : a variant of the compressed format, where coordinates have no siblings.</li> <li><strong>structured[n, m]</strong> : the compression uses an n:m encoding (viz. n out of m consecutive elements are nonzero).</li> </ul> <p>For a compressed level, each position interval is represented in a compact way with a lower bound <code>pos(i)</code> and an upper bound <code>pos(i+1) - 1</code>, which implies that successive intervals must appear in order without any "holes" in between them. The loose compressed format relaxes these constraints by representing each position interval with a lower bound <code>lo(i)</code> and an upper bound <code>hi(i)</code>, which allows intervals to appear in arbitrary order and with elbow room between them.</p> <p>By default, each level-type has the property of being unique (no duplicate coordinates at that level) and ordered (coordinates appear sorted at that level). For singleton levels, the coordinates are fused with their parent in an AoS (array of structures) scheme. The following properties can be added to a level-format to change this default behavior:</p> <ul> <li><strong>nonunique</strong> : duplicate coordinates may appear at the level</li> <li><strong>nonordered</strong> : coordinates may appear in arbitrary order</li> <li><strong>soa</strong> : only applicable to singleton levels, stores the singleton level in an SoA (structure of arrays) scheme.</li> </ul> <p>In addition to the map, the following fields are optional:</p> <ul> <li> <p>The required bitwidth for position storage (integral offsets into the sparse storage scheme). A narrow width reduces the memory footprint of overhead storage, as long as the width suffices to define the total required range (viz. the maximum number of stored entries over all indirection levels). The choices are <code>8</code>, <code>16</code>, <code>32</code>, <code>64</code>, or, the default, <code>0</code> to indicate the native bitwidth.</p> </li> <li> <p>The required bitwidth for coordinate storage (the coordinates of stored entries). A narrow width reduces the memory footprint of overhead storage, as long as the width suffices to define the total required range (viz. the maximum value of each tensor coordinate over all levels). The choices are <code>8</code>, <code>16</code>, <code>32</code>, <code>64</code>, or, the default, <code>0</code> to indicate the native bitwidth.</p> </li> <li> <p>The explicit value for the sparse tensor. If explicitVal is set, then all the non-zero values in the tensor have the same explicit value. The default value Attribute() indicates that it is not set. This is useful for binary-valued sparse tensors whose values can either be an implicit value (0 by default) or an explicit value (such as 1). In this approach, we don’t store explicit/implicit values, and metadata (such as position and coordinate arrays) alone fully defines the original tensor. 
This yields additional savings in both storage and computation time, since we skip operating on implicit values and can constant-fold the explicit values where they are used.</p> </li> <li> <p>The implicit value for the sparse tensor. If implicitVal is set, then the "zero" value in the tensor is equal to the implicit value. For now, we only support <code>0</code> as the implicit value, but this could be extended in the future. The default value Attribute() indicates that the implicit value is <code>0</code> (same type as the tensor element type).</p> </li> </ul> <p>Examples:</p> <div class=highlight><pre tabindex=0 class=chroma><code class=language-mlir data-lang=mlir>// Sparse vector.
#SparseVector = #sparse_tensor.encoding<{
  map = (i) -> (i : compressed)
}>
... tensor<?xf32, #SparseVector> ...

// Sorted coordinate scheme (arranged in AoS format by default).
#SortedCOO = #sparse_tensor.encoding<{
  map = (i, j) -> (i : compressed(nonunique), j : singleton)
}>
// coordinates = {x_crd, y_crd}[nnz]
... tensor<?x?xf64, #SortedCOO> ...

// Sorted coordinate scheme (arranged in SoA format).
#SortedCOO = #sparse_tensor.encoding<{
  map = (i, j) -> (i : compressed(nonunique), j : singleton(soa))
}>
// coordinates = {x_crd[nnz], y_crd[nnz]}
... tensor<?x?xf64, #SortedCOO> ...

// Batched sorted coordinate scheme, with high encoding.
#BCOO = #sparse_tensor.encoding<{
  map = (i, j, k) -> (i : dense, j : compressed(nonunique, high), k : singleton)
}>
... tensor<10x10xf32, #BCOO> ...

// Compressed sparse row.
#CSR = #sparse_tensor.encoding<{
  map = (i, j) -> (i : dense, j : compressed)
}>
... tensor<100x100xbf16, #CSR> ...

// Doubly compressed sparse column storage with specific bitwidths.
#DCSC = #sparse_tensor.encoding<{
  map = (i, j) -> (j : compressed, i : compressed),
  posWidth = 32,
  crdWidth = 8
}>
... tensor<8x8xf64, #DCSC> ...

// Doubly compressed sparse column storage with specific
// explicit and implicit values.
#DCSC = #sparse_tensor.encoding<{
  map = (i, j) -> (j : compressed, i : compressed),
  explicitVal = 1 : i64,
  implicitVal = 0 : i64
}>
... tensor<8x8xi64, #DCSC> ...

// Block sparse row storage (2x3 blocks).
#BSR = #sparse_tensor.encoding<{
  map = ( i, j ) ->
    ( i floordiv 2 : dense,
      j floordiv 3 : compressed,
      i mod 2      : dense,
      j mod 3      : dense
    )
}>
... tensor<20x30xf32, #BSR> ...

// Same block sparse row storage (2x3 blocks) but this time
// also with a redundant reverse mapping, which can be inferred.
#BSR_explicit = #sparse_tensor.encoding<{
  map = { ib, jb, ii, jj }
        ( i = ib * 2 + ii,
          j = jb * 3 + jj ) ->
        ( ib = i floordiv 2 : dense,
          jb = j floordiv 3 : compressed,
          ii = i mod 2      : dense,
          jj = j mod 3      : dense )
}>
... tensor<20x30xf32, #BSR_explicit> ...

// ELL format.
// In the simple format for a matrix, one array stores values and another
// array stores column indices. The arrays have the same number of rows
// as the original matrix, but only have as many columns as
// the maximum number of nonzeros on a row of the original matrix.
// There are many variants for ELL, such as the jagged diagonal scheme.
// To implement ELL, the map provides a notion of "counting a
// dimension", where every stored element with the same coordinate
// is mapped to a new slice. For instance, ELL storage of a 2-d
// tensor can be defined with the mapping (i, j) -> (#i, i, j)
// using the notation of [Chou20]. Lacking the # symbol in MLIR's
// affine mapping, we use a free symbol c to define such counting,
// together with a constant that denotes the number of resulting
// slices. For example, the mapping [c](i, j) -> (c * 3 * i, i, j)
// with the level-types ["dense", "dense", "compressed"] denotes ELL
// storage with three jagged diagonals that count the dimension i.
#ELL = #sparse_tensor.encoding<{
  map = [c](i, j) -> (c * 3 * i : dense, i : dense, j : compressed)
}>
... tensor<?x?xf64, #ELL> ...

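// Illustrative additions (not part of the original example list), shown
// to exercise the remaining level-formats described above.

// 2:4 structured sparsity: at most 2 stored elements out of every 4
// consecutive elements along a row.
#NV_24 = #sparse_tensor.encoding<{
  map = ( i, j ) -> ( i            : dense,
                      j floordiv 4 : dense,
                      j mod 4      : structured[2, 4] )
}>
... tensor<?x?xf16, #NV_24> ...

// Loose compressed rows: each interval carries explicit lo/hi position
// bounds, leaving free space between regions (the name is illustrative).
#LooseCSR = #sparse_tensor.encoding<{
  map = (i, j) -> (i : dense, j : loose_compressed)
}>
... tensor<?x?xf64, #LooseCSR> ...
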
// CSR slice (offset = 0, size = 4, stride = 1 on the first dimension;
// offset = 0, size = 8, and a dynamic stride on the second dimension).
#CSR_SLICE = #sparse_tensor.encoding<{
  map = (i : #sparse_tensor<slice(0, 4, 1)>,
         j : #sparse_tensor<slice(0, 8, ?)>) ->
        (i : dense, j : compressed)
}>
... tensor<?x?xf64, #CSR_SLICE> ...
</code></pre></div></code></pre></details></td></tr></table><h4 id=operands-8>Operands: <a class=headline-hash href=#operands-8>¶</a></h4><table><thead><tr><th style=text-align:center>Operand</th><th>Description</th></tr></thead><tbody><tr><td style=text-align:center><code>in_crds</code></td><td>variadic of index</td></tr></tbody></table><h4 id=results-8>Results: <a class=headline-hash href=#results-8>¶</a></h4><table><thead><tr><th style=text-align:center>Result</th><th>Description</th></tr></thead><tbody><tr><td style=text-align:center><code>out_crds</code></td><td>variadic of index</td></tr></tbody></table><h3 id=sparse_tensordisassemble-sparse_tensordisassembleop><code>sparse_tensor.disassemble</code> (sparse_tensor::DisassembleOp) <a class=headline-hash href=#sparse_tensordisassemble-sparse_tensordisassembleop>¶</a></h3><p><em>Copies the levels and values of the given sparse tensor</em></p><p>Syntax:</p><pre tabindex=0><code>operation ::= `sparse_tensor.disassemble` $tensor attr-dict `:` type($tensor)`out_lvls` `(` $out_levels `:` type($out_levels) `)` `out_vals` `(` $out_values `:` type($out_values) `)` `->``(` type($ret_levels) `)` `,` type($ret_values) `,` `(` type($lvl_lens) `)` `,` type($val_len) </code></pre><p>The disassemble operation is the inverse of <code>sparse_tensor::assemble</code>. It copies the per-level position and coordinate arrays, together with the values array, of the given sparse tensor into the user-supplied buffers, along with the actual length of the memory used in each returned buffer.</p><p>This operation can be used for returning a disassembled MLIR sparse tensor; e.g., copying the sparse tensor contents into pre-allocated NumPy arrays when passing the data back to Python. 
It is the user’s responsibility to allocate large enough buffers of the appropriate types to hold the sparse tensor contents. The sparsifier simply copies all fields of the sparse tensor into the user-supplied buffers without any sanity test to verify data integrity.</p><p>Example:</p><div class=highlight><pre tabindex=0 class=chroma><code class=language-mlir data-lang=mlir><span class=line><span class=cl><span class=c>// input COO format |1.1, 0.0, 0.0, 0.0| </span></span></span><span class=line><span class=cl><span class=c>// of 3x4 matrix |0.0, 0.0, 2.2, 3.3| </span></span></span><span class=line><span class=cl><span class=c>// |0.0, 0.0, 0.0, 0.0| </span></span></span><span class=line><span class=cl><span class=c></span><span class=nv>%p</span><span class=p>,</span> <span class=nv>%c</span><span class=p>,</span> <span class=nv>%v</span><span class=p>,</span> <span class=nv>%p_len</span><span class=p>,</span> <span class=nv>%c_len</span><span class=p>,</span> <span class=nv>%v_len</span> <span class=p>=</span> </span></span><span class=line><span class=cl> sparse_tensor<span class=p>.</span>disassemble <span class=nv>%s</span> <span class=p>:</span> <span class=kt>tensor</span><span class=p><</span><span class=m>3x4x</span><span class=k>f64</span><span class=p>,</span> <span class=nv>#COO</span><span class=p>></span> </span></span><span class=line><span class=cl> out_lvls<span class=p>(</span><span class=nv>%op</span><span class=p>,</span> <span class=nv>%oi</span> <span class=p>:</span> <span class=kt>tensor</span><span class=p><</span><span class=m>2x</span><span class=k>index</span><span class=p>>,</span> <span class=kt>tensor</span><span class=p><</span><span class=m>3x2x</span><span class=k>index</span><span class=p>>)</span> </span></span><span class=line><span class=cl> out_vals<span class=p>(</span><span class=nv>%od</span> <span class=p>:</span> <span class=kt>tensor</span><span class=p><</span><span class=m>3x</span><span class=k>f64</span><span class=p>>)</span> <span class=p>-></span> </span></span><span class=line><span class=cl> <span class=p>(</span><span class=kt>tensor</span><span class=p><</span><span class=m>2x</span><span class=k>index</span><span class=p>>,</span> <span class=kt>tensor</span><span class=p><</span><span class=m>3x2x</span><span class=k>index</span><span class=p>>),</span> <span class=kt>tensor</span><span class=p><</span><span class=m>3x</span><span class=k>f64</span><span class=p>>,</span> <span class=p>(</span><span class=k>index</span><span class=p>,</span> <span class=k>index</span><span class=p>),</span> <span class=k>index</span> </span></span><span class=line><span class=cl><span class=c>// %p = arith.constant dense<[ 0, 3 ]> : tensor<2xindex> </span></span></span><span class=line><span class=cl><span class=c>// %c = arith.constant dense<[[0,0], [1,2], [1,3]]> : tensor<3x2xindex> </span></span></span><span class=line><span class=cl><span class=c>// %v = arith.constant dense<[ 1.1, 2.2, 3.3 ]> : tensor<3xf64> </span></span></span><span class=line><span class=cl><span class=c>// %p_len = 2 </span></span></span><span class=line><span class=cl><span class=c>// %c_len = 6 (3x2) </span></span></span><span class=line><span class=cl><span class=c>// %v_len = 3 </span></span></span></code></pre></div><p>Traits: <code>AlwaysSpeculatableImplTrait</code>, <code>SameVariadicResultSize</code></p><p>Interfaces: <code>ConditionallySpeculatable</code>, <code>NoMemoryEffect (MemoryEffectOpInterface)</code></p><p>Effects: 
<code>MemoryEffects::Effect{}</code></p><h4 id=operands-9>Operands: <a class=headline-hash href=#operands-9>¶</a></h4><table><thead><tr><th style=text-align:center>Operand</th><th>Description</th></tr></thead><tbody><tr><td style=text-align:center><code>tensor</code></td><td>sparse tensor of any type values</td></tr><tr><td style=text-align:center><code>out_levels</code></td><td>variadic of ranked tensor of signless integer or index values</td></tr><tr><td style=text-align:center><code>out_values</code></td><td>ranked tensor of any type values</td></tr></tbody></table><h4 id=results-9>Results: <a class=headline-hash href=#results-9>¶</a></h4><table><thead><tr><th style=text-align:center>Result</th><th>Description</th></tr></thead><tbody><tr><td style=text-align:center><code>ret_levels</code></td><td>variadic of ranked tensor of signless integer or index values</td></tr><tr><td style=text-align:center><code>ret_values</code></td><td>ranked tensor of any type values</td></tr><tr><td style=text-align:center><code>lvl_lens</code></td><td>variadic of scalar like</td></tr><tr><td style=text-align:center><code>val_len</code></td><td>scalar like</td></tr></tbody></table><h3 id=sparse_tensorexpand-sparse_tensorexpandop><code>sparse_tensor.expand</code> (sparse_tensor::ExpandOp) <a class=headline-hash href=#sparse_tensorexpand-sparse_tensorexpandop>¶</a></h3><p><em>Expands an access pattern for insertion</em></p><p>Syntax:</p><pre tabindex=0><code>operation ::= `sparse_tensor.expand` $tensor attr-dict `:` type($tensor) `to` type($values) `,` type($filled) `,` type($added) </code></pre><p>Performs an access pattern expansion for the innermost levels of the given tensor. This operation is useful to implement kernels in which a sparse tensor appears as output. This technique is known under several different names and has several alternative implementations; for example, phase counter [Gustavson72], expanded or switch array [Pissanetzky84], in phase scan [Duff90], access pattern expansion [Bik96], and workspaces [Kjolstad19].</p><p>The <code>values</code> and <code>filled</code> arrays must have lengths equal to the level-size of the innermost level (i.e., as if the innermost level were <em>dense</em>). The <code>added</code> array and <code>count</code> are used to store new level-coordinates when a false value is encountered in the <code>filled</code> array. All arrays should be allocated before the loop (possibly even shared between loops in a future optimization) so that their <em>dense</em> initialization can be amortized over many iterations. Setting and resetting the dense arrays in the loop nest itself is kept <em>sparse</em> by only iterating over set elements through an indirection using the added array, so that the operations are kept proportional to the number of nonzeros.</p><p>Note that this operation is “impure” in the sense that even though the results are modeled through SSA values, the operation relies on a proper side-effecting context that sets and resets the expanded arrays.</p><p>Example:</p><div class=highlight><pre tabindex=0 class=chroma><code class=language-mlir data-lang=mlir><span class=line><span class=cl><span class=nv>%values</span><span class=p>,</span> <span class=nv>%filled</span><span class=p>,</span> <span class=nv>%added</span><span class=p>,</span> <span class=nv>%count</span> <span class=p>=</span> sparse_tensor<span class=p>.</span>expand <span class=nv>%tensor</span> </span></span><span class=line><span class=cl> <span class=p>:</span> <span class=kt>tensor</span><span class=p><</span><span class=m>4x4x</span><span class=k>f64</span><span class=p>,</span> <span class=nv>#CSR</span><span class=p>></span> to <span class=kt>memref</span><span class=p><</span><span class=m>?x</span><span class=k>f64</span><span class=p>>,</span> <span class=kt>memref</span><span class=p><</span><span class=m>?x</span><span class=k>i1</span><span class=p>>,</span> <span class=kt>memref</span><span class=p><</span><span class=m>?x</span><span class=k>index</span><span class=p>></span> </span></span></code></pre></div>
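<p>To make the required side-effecting context concrete, the following sketch pairs <code>expand</code> with <code>sparse_tensor.compress</code>, which moves the workspace contents back into the sparse tensor at the given level-coordinates; the kernel structure and SSA names here are illustrative, not part of the op definition:</p><div class=highlight><pre tabindex=0 class=chroma><code class=language-mlir data-lang=mlir>// Expand once before the loop nest that computes row %i of the output.
%values, %filled, %added, %count = sparse_tensor.expand %tensor
  : tensor<4x4xf64, #CSR> to memref<?xf64>, memref<?xi1>, memref<?xindex>
// ... the loop nest scatters results into %values/%filled and records
// new coordinates in %added, updating %count ...
// Compress resets the workspace and inserts the gathered nonzeros.
%result = sparse_tensor.compress %values, %filled, %added, %count into %tensor[%i]
  : memref<?xf64>, memref<?xi1>, memref<?xindex>, tensor<4x4xf64, #CSR>
</code></pre></div>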
<h4 id=operands-10>Operands: <a class=headline-hash href=#operands-10>¶</a></h4><table><thead><tr><th style=text-align:center>Operand</th><th>Description</th></tr></thead><tbody><tr><td style=text-align:center><code>tensor</code></td><td>sparse tensor of any type values</td></tr></tbody></table><h4 id=results-10>Results: <a class=headline-hash href=#results-10>¶</a></h4><table><thead><tr><th style=text-align:center>Result</th><th>Description</th></tr></thead><tbody><tr><td style=text-align:center><code>values</code></td><td>strided memref of any type values of rank 1</td></tr><tr><td style=text-align:center><code>filled</code></td><td>1D memref of 1-bit signless integer values</td></tr><tr><td style=text-align:center><code>added</code></td><td>1D memref of index values</td></tr><tr><td style=text-align:center><code>count</code></td><td>index</td></tr></tbody></table><h3 id=sparse_tensorextract_iteration_space-sparse_tensorextractiterspaceop><code>sparse_tensor.extract_iteration_space</code> (sparse_tensor::ExtractIterSpaceOp) <a class=headline-hash href=#sparse_tensorextract_iteration_space-sparse_tensorextractiterspaceop>¶</a></h3><p><em>Extracts an iteration space from a sparse tensor between certain levels</em></p><p>Syntax:</p><pre tabindex=0><code>operation ::= `sparse_tensor.extract_iteration_space` $tensor (`at` $parentIter^)? `lvls` `=` custom<LevelRange>($loLvl, $hiLvl) attr-dict `:` type($tensor) (`,` type($parentIter)^)? `->` qualified(type($extractedSpace)) </code></pre><p>Extracts a <code>!sparse_tensor.iter_space</code> from a sparse tensor between certain (consecutive) levels. For sparse levels, this is usually done by loading a position range from the underlying sparse tensor storage. E.g., for a compressed level, the iteration space is extracted as the range [pos[i], pos[i+1]), supposing the parent iterator points at <code>i</code>.</p><p><code>tensor</code>: the input sparse tensor that defines the iteration space. 
<code>parentIter</code>: the iterator for the previous level, at which the iteration space for the current levels will be extracted. <code>loLvl</code>, <code>hiLvl</code>: the level range [loLvl, hiLvl) in the input tensor that the returned iteration space covers. <code>hiLvl - loLvl</code> defines the dimension of the iteration space.</p><p>The type of the returned value must be <code>!sparse_tensor.iter_space<#INPUT_ENCODING, lvls = $loLvl to $hiLvl></code>. The returned iteration space can then be iterated over by <code>sparse_tensor.iterate</code> operations to visit every stored element (usually nonzeros) in the input sparse tensor.</p><p>Example:</p><div class=highlight><pre tabindex=0 class=chroma><code class=language-mlir data-lang=mlir><span class=line><span class=cl><span class=c>// Extracts a 1-D iteration space from a COO tensor at level 1. </span></span></span><span class=line><span class=cl><span class=c></span><span class=nv>%space</span> <span class=p>=</span> sparse_tensor<span class=p>.</span>extract_iteration_space <span class=nv>%sp</span> at <span class=nv>%it1</span> <span class=nl>lvls =</span> <span class=m>1</span> </span></span><span class=line><span class=cl> <span class=p>:</span> <span class=kt>tensor</span><span class=p><</span><span class=m>4x8x</span><span class=k>f32</span><span class=p>,</span> <span class=nv>#COO</span><span class=p>>,</span> <span class=p>!</span>sparse_tensor<span class=p>.</span>iterator<span class=p><</span><span class=nv>#COO</span><span class=p>,</span> <span class=nl>lvls =</span> <span class=m>0</span><span class=p>></span> </span></span><span class=line><span class=cl> <span class=p>->!</span>sparse_tensor<span class=p>.</span>iter_space<span class=p><</span><span class=nv>#COO</span><span class=p>,</span> <span class=nl>lvls =</span> <span class=m>1</span><span class=p>></span> </span></span></code></pre></div><p>Traits: <code>AlwaysSpeculatableImplTrait</code></p><p>Interfaces: <code>ConditionallySpeculatable</code>, <code>InferTypeOpInterface</code>, <code>NoMemoryEffect (MemoryEffectOpInterface)</code></p><p>Effects: <code>MemoryEffects::Effect{}</code></p><h4 id=attributes-5>Attributes: <a class=headline-hash href=#attributes-5>¶</a></h4><table><tr><th>Attribute</th><th>MLIR Type</th><th>Description</th></tr><tr><td><code>loLvl</code></td><td>::mlir::IntegerAttr</td><td>level attribute</td></tr><tr><td><code>hiLvl</code></td><td>::mlir::IntegerAttr</td><td>level attribute</td></tr></table><h4 id=operands-11>Operands: <a class=headline-hash href=#operands-11>¶</a></h4><table><thead><tr><th style=text-align:center>Operand</th><th>Description</th></tr></thead><tbody><tr><td style=text-align:center><code>tensor</code></td><td>sparse tensor of any type values</td></tr><tr><td style=text-align:center><code>parentIter</code></td><td>sparse iterator</td></tr></tbody></table><h4 id=results-11>Results: <a class=headline-hash href=#results-11>¶</a></h4><table><thead><tr><th style=text-align:center>Result</th><th>Description</th></tr></thead><tbody><tr><td style=text-align:center><code>extractedSpace</code></td><td>sparse iteration space</td></tr></tbody></table><h3 id=sparse_tensorextract_value-sparse_tensorextractvalop><code>sparse_tensor.extract_value</code> (sparse_tensor::ExtractValOp) <a class=headline-hash href=#sparse_tensorextract_value-sparse_tensorextractvalop>¶</a></h3><p><em>Extracts a value from a sparse tensor using an iterator.</em></p><p>Syntax:</p><pre tabindex=0><code>operation ::= 
`sparse_tensor.extract_value` $tensor `at` $iterator attr-dict `:` type($tensor)`,` qualified(type($iterator)) </code></pre><p>The <code>sparse_tensor.extract_value</code> operation extracts the value pointed to by a sparse iterator from a sparse tensor.</p><p>Example:</p><div class=highlight><pre tabindex=0 class=chroma><code class=language-mlir data-lang=mlir><span class=line><span class=cl><span class=nv>%val</span> <span class=p>=</span> sparse_tensor<span class=p>.</span>extract_value <span class=nv>%sp</span> at <span class=nv>%it</span> </span></span><span class=line><span class=cl> <span class=p>:</span> <span class=kt>tensor</span><span class=p><</span><span class=m>?x?x</span><span class=k>f32</span><span class=p>,</span> <span class=nv>#CSR</span><span class=p>>,</span> <span class=p>!</span>sparse_tensor<span class=p>.</span>iterator<span class=p><</span><span class=nv>#CSR</span><span class=p>,</span> <span class=nl>lvl =</span> <span class=m>1</span><span class=p>></span> </span></span></code></pre></div><p>Traits: <code>AlwaysSpeculatableImplTrait</code></p><p>Interfaces: <code>ConditionallySpeculatable</code>, <code>InferTypeOpInterface</code>, <code>NoMemoryEffect (MemoryEffectOpInterface)</code></p><p>Effects: <code>MemoryEffects::Effect{}</code></p><h4 id=operands-12>Operands: <a class=headline-hash href=#operands-12>¶</a></h4><table><thead><tr><th style=text-align:center>Operand</th><th>Description</th></tr></thead><tbody><tr><td style=text-align:center><code>tensor</code></td><td>sparse tensor of any type values</td></tr><tr><td style=text-align:center><code>iterator</code></td><td>sparse iterator</td></tr></tbody></table><h4 id=results-12>Results: <a class=headline-hash href=#results-12>¶</a></h4><table><thead><tr><th style=text-align:center>Result</th><th>Description</th></tr></thead><tbody><tr><td style=text-align:center><code>result</code></td><td>any type</td></tr></tbody></table><h3 id=sparse_tensorforeach-sparse_tensorforeachop><code>sparse_tensor.foreach</code> (sparse_tensor::ForeachOp) <a class=headline-hash href=#sparse_tensorforeach-sparse_tensorforeachop>¶</a></h3><p><em>Iterates over elements in a tensor</em></p><p>Syntax:</p><pre tabindex=0><code>operation ::= `sparse_tensor.foreach` `in` $tensor (`init``(`$initArgs^`)`)? attr-dict `:` type($tensor) (`,` type($initArgs)^)? (`->` type($results)^)? `do` $region </code></pre><p>Iterates over stored elements in a tensor (which are typically, but not always, non-zero for sparse tensors) and executes the block.</p><p><code>tensor</code>: the input tensor to iterate over. <code>initArgs</code>: the initial loop arguments to carry and update during each iteration. <code>order</code>: an optional permutation affine map that specifies the order in which the dimensions are visited (e.g., row first or column first). This is only applicable when the input tensor is a non-annotated dense tensor.</p><p>For an input tensor with dim-rank <code>n</code>, the block must take <code>n + 1</code> arguments (plus additional loop-carried variables as described below). The first <code>n</code> arguments provide the dimension-coordinates of the element being visited, and must all have <code>index</code> type. The <code>(n+1)</code>-th argument provides the element’s value, and must have the tensor’s element type.</p><p><code>sparse_tensor.foreach</code> can also operate on loop-carried variables and return the final values after loop termination. 
The initial values of the variables are passed as additional SSA operands to the “sparse_tensor.foreach” following the n + 1 SSA values mentioned above (n coordinates and 1 value).</p><p>The region must terminate with a “sparse_tensor.yield” that passes the current values of all loop-carried variables to the next iteration, or to the result at the last iteration. The number and static types of loop-carried variables may not change with iterations.</p><p>For example:</p><div class=highlight><pre tabindex=0 class=chroma><code class=language-mlir data-lang=mlir><span class=line><span class=cl><span class=nv>%c0</span> <span class=p>=</span> arith<span class=p>.</span><span class=kt>constant</span> <span class=m>0</span> <span class=p>:</span> <span class=k>i32</span> </span></span><span class=line><span class=cl><span class=nv>%ret</span> <span class=p>=</span> sparse_tensor<span class=p>.</span>foreach in <span class=nv>%0</span> init<span class=p>(</span><span class=nv>%c0</span><span class=p>):</span> <span class=kt>tensor</span><span class=p><</span><span class=m>?x?x</span><span class=k>i32</span><span class=p>,</span> <span class=nv>#DCSR</span><span class=p>>,</span> <span class=k>i32</span> <span class=p>-></span> <span class=k>i32</span> do <span class=p>{</span> </span></span><span class=line><span class=cl> <span class=nl>^bb0</span><span class=p>(</span><span class=nv>%arg1</span><span class=p>:</span> <span class=k>index</span><span class=p>,</span> <span class=nv>%arg2</span><span class=p>:</span> <span class=k>index</span><span class=p>,</span> <span class=nv>%arg3</span><span class=p>:</span> <span class=k>i32</span><span class=p>,</span> <span class=nv>%iter</span><span class=p>:</span> <span class=k>i32</span><span class=p>):</span> </span></span><span class=line><span class=cl> <span class=nv>%sum</span> <span class=p>=</span> arith<span class=p>.</span>addi <span class=nv>%iter</span><span class=p>,</span> <span class=nv>%arg3</span> <span class=p>:</span> <span class=k>i32</span> </span></span><span class=line><span class=cl> sparse_tensor<span class=p>.</span>yield <span class=nv>%sum</span> <span class=p>:</span> <span class=k>i32</span> </span></span><span class=line><span class=cl><span class=p>}</span> </span></span></code></pre></div><p>It is important to note that the generated loop iterates over elements in their storage order. 
However, regardless of the storage scheme used by the tensor, the block is always given the dimension-coordinates.</p><p>For example:</p><div class=highlight><pre tabindex=0 class=chroma><code class=language-mlir data-lang=mlir><span class=line><span class=cl><span class=nv>#COL_MAJOR</span> <span class=p>=</span> <span class=nv>#sparse_tensor.encoding</span><span class=p><{</span> </span></span><span class=line><span class=cl> <span class=nl>map =</span> <span class=p>(</span>d0<span class=p>,</span> d1<span class=p>)</span> <span class=p>-></span> <span class=p>(</span>d1 <span class=p>:</span> compressed<span class=p>,</span> d0 <span class=p>:</span> compressed<span class=p>)</span> </span></span><span class=line><span class=cl><span class=p>}></span> </span></span><span class=line><span class=cl> </span></span><span class=line><span class=cl><span class=c>// foreach on a column-major sparse tensor </span></span></span><span class=line><span class=cl><span class=c></span>sparse_tensor<span class=p>.</span>foreach in <span class=nv>%0</span> <span class=p>:</span> <span class=kt>tensor</span><span class=p><</span><span class=m>2x3x</span><span class=k>f64</span><span class=p>,</span> <span class=nv>#COL_MAJOR</span><span class=p>></span> do <span class=p>{</span> </span></span><span class=line><span class=cl> <span class=nl>^bb0</span><span class=p>(</span><span class=nv>%row</span><span class=p>:</span> <span class=k>index</span><span class=p>,</span> <span class=nv>%col</span><span class=p>:</span> <span class=k>index</span><span class=p>,</span> <span class=nv>%arg3</span><span class=p>:</span> <span class=k>f64</span><span class=p>):</span> </span></span><span class=line><span class=cl> <span class=c>// [%row, %col] -> [0, 0], [1, 0], [2, 0], [0, 1], [1, 1], [2, 1] </span></span></span><span class=line><span class=cl><span class=c></span><span class=p>}</span> </span></span><span class=line><span class=cl> </span></span><span class=line><span class=cl><span class=nv>#ROW_MAJOR</span> <span class=p>=</span> <span class=nv>#sparse_tensor.encoding</span><span class=p><{</span> </span></span><span class=line><span class=cl> <span class=nl>map =</span> <span class=p>(</span>d0<span class=p>,</span> d1<span class=p>)</span> <span class=p>-></span> <span class=p>(</span>d0 <span class=p>:</span> compressed<span class=p>,</span> d1 <span class=p>:</span> compressed<span class=p>)</span> </span></span><span class=line><span class=cl><span class=p>}></span> </span></span><span class=line><span class=cl> </span></span><span class=line><span class=cl><span class=c>// foreach on a row-major sparse tensor </span></span></span><span class=line><span class=cl><span class=c></span>sparse_tensor<span class=p>.</span>foreach in <span class=nv>%0</span> <span class=p>:</span> <span class=kt>tensor</span><span class=p><</span><span class=m>2x3x</span><span class=k>f64</span><span class=p>,</span> <span class=nv>#ROW_MAJOR</span><span class=p>></span> do <span class=p>{</span> </span></span><span class=line><span class=cl> <span class=nl>^bb0</span><span class=p>(</span><span class=nv>%row</span><span class=p>:</span> <span class=k>index</span><span class=p>,</span> <span class=nv>%col</span><span class=p>:</span> <span class=k>index</span><span class=p>,</span> <span class=nv>%arg3</span><span class=p>:</span> <span class=k>f64</span><span class=p>):</span> </span></span><span class=line><span class=cl> <span class=c>// [%row, %col] -> [0, 0], [0, 1], [1, 0], [1, 1], [2, 0], [2, 1] 
</span></span></span><span class=line><span class=cl><span class=c></span><span class=p>}</span> </span></span><span class=line><span class=cl> </span></span><span class=line><span class=cl><span class=c>// foreach on a row-major dense tensor but visit column first </span></span></span><span class=line><span class=cl><span class=c></span>sparse_tensor<span class=p>.</span>foreach in <span class=nv>%0</span> <span class=p>{</span><span class=nl>order=</span>affine_map<span class=p><(</span>i<span class=p>,</span>j<span class=p>)->(</span>j<span class=p>,</span>i<span class=p>)>}:</span> <span class=kt>tensor</span><span class=p><</span><span class=m>2x3x</span><span class=k>f64</span><span class=p>></span> do <span class=p>{</span> </span></span><span class=line><span class=cl> <span class=nl>^bb0</span><span class=p>(</span><span class=nv>%row</span><span class=p>:</span> <span class=k>index</span><span class=p>,</span> <span class=nv>%col</span><span class=p>:</span> <span class=k>index</span><span class=p>,</span> <span class=nv>%arg3</span><span class=p>:</span> <span class=k>f64</span><span class=p>):</span> </span></span><span class=line><span class=cl> <span class=c>// [%row, %col] -> [0, 0], [1, 0], [2, 0], [0, 1], [1, 1], [2, 1] </span></span></span><span class=line><span class=cl><span class=c></span><span class=p>}</span> </span></span></code></pre></div><p>Traits: <code>SingleBlockImplicitTerminator<YieldOp></code>, <code>SingleBlock</code></p><h4 id=attributes-6>Attributes: <a class=headline-hash href=#attributes-6>¶</a></h4><table><tr><th>Attribute</th><th>MLIR Type</th><th>Description</th></tr><tr><td><code>order</code></td><td>::mlir::AffineMapAttr</td><td>AffineMap attribute</td></tr></table><h4 id=operands-13>Operands: <a class=headline-hash href=#operands-13>¶</a></h4><table><thead><tr><th style=text-align:center>Operand</th><th>Description</th></tr></thead><tbody><tr><td style=text-align:center><code>tensor</code></td><td>ranked tensor of any type values</td></tr><tr><td style=text-align:center><code>initArgs</code></td><td>variadic of any type</td></tr></tbody></table><h4 id=results-13>Results: <a class=headline-hash href=#results-13>¶</a></h4><table><thead><tr><th style=text-align:center>Result</th><th>Description</th></tr></thead><tbody><tr><td style=text-align:center><code>results</code></td><td>variadic of any type</td></tr></tbody></table><h3 id=sparse_tensorhas_runtime_library-sparse_tensorhasruntimelibraryop><code>sparse_tensor.has_runtime_library</code> (sparse_tensor::HasRuntimeLibraryOp) <a class=headline-hash href=#sparse_tensorhas_runtime_library-sparse_tensorhasruntimelibraryop>¶</a></h3><p><em>Indicates whether running in runtime/codegen mode</em></p><p>Syntax:</p><pre tabindex=0><code>operation ::= `sparse_tensor.has_runtime_library` attr-dict </code></pre><p>Returns a boolean value that indicates whether the sparsifier runs in runtime library mode or not. For testing only! 
This operation is useful for writing test cases that require different code depending on runtime/codegen mode.</p><p>Example:</p><div class=highlight><pre tabindex=0 class=chroma><code class=language-mlir data-lang=mlir><span class=line><span class=cl><span class=nv>%has_runtime</span> <span class=p>=</span> sparse_tensor<span class=p>.</span>has_runtime_library </span></span><span class=line><span class=cl>scf<span class=p>.</span>if <span class=nv>%has_runtime</span> <span class=p>{</span> </span></span><span class=line><span class=cl> <span class=p>...</span> </span></span><span class=line><span class=cl><span class=p>}</span> </span></span></code></pre></div><p>Interfaces: <code>InferTypeOpInterface</code></p><h4 id=results-14>Results: <a class=headline-hash href=#results-14>¶</a></h4><table><thead><tr><th style=text-align:center>Result</th><th>Description</th></tr></thead><tbody><tr><td style=text-align:center><code>result</code></td><td>1-bit signless integer</td></tr></tbody></table><h3 id=sparse_tensoriterate-sparse_tensoriterateop><code>sparse_tensor.iterate</code> (sparse_tensor::IterateOp) <a class=headline-hash href=#sparse_tensoriterate-sparse_tensoriterateop>¶</a></h3><p><em>Iterates over a sparse iteration space</em></p><p>The <code>sparse_tensor.iterate</code> operation represents a loop (nest) over the provided iteration space extracted from a specific sparse tensor. The operation defines an SSA value for a sparse iterator that points to the current stored element in the sparse tensor and SSA values for coordinates of the stored element. The coordinates are always converted to <code>index</code> type regardless of the underlying sparse tensor storage. When coordinates are not used, the SSA values can be skipped with <code>_</code> symbols, which usually leads to simpler generated code after sparsification. For example:</p><div class=highlight><pre tabindex=0 class=chroma><code class=language-mlir data-lang=mlir><span class=line><span class=cl><span class=c>// The coordinate for level 0 is not used when iterating over a 2-D </span></span></span><span class=line><span class=cl><span class=c>// iteration space. </span></span></span><span class=line><span class=cl><span class=c></span>sparse_tensor<span class=p>.</span>iterate <span class=nv>%iterator</span> in <span class=nv>%space</span> at<span class=p>(</span>_<span class=p>,</span> <span class=nv>%crd_1</span><span class=p>)</span> </span></span><span class=line><span class=cl> <span class=p>:</span> <span class=p>!</span>sparse_tensor<span class=p>.</span>iter_space<span class=p><</span><span class=nv>#CSR</span><span class=p>,</span> <span class=nl>lvls =</span> <span class=m>0</span> to <span class=m>2</span><span class=p>></span> </span></span></code></pre></div><p><code>sparse_tensor.iterate</code> can also operate on loop-carried variables. It returns the final values after loop termination. The initial values of the variables are passed as additional SSA operands, following the iterator and the used coordinate SSA values mentioned above. The operation region has an argument for the iterator, variadic arguments for the specified (used) coordinates, followed by one argument for each loop-carried variable that represents the value of the variable at the current iteration. The body region must contain exactly one block that terminates with <code>sparse_tensor.yield</code>.</p><p>The results of a <code>sparse_tensor.iterate</code> hold the final values after the last iteration. If the <code>sparse_tensor.iterate</code> defines any values, a yield must be explicitly present. The number and types of the <code>sparse_tensor.iterate</code> results must match the initial values in the iter_args binding and the yield operands.</p>
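<p>For example, the following is a minimal sketch of a loop-carried variable that counts the stored entries of a sparse vector (the <code>#SV</code> encoding, the surrounding SSA values, and the exact placement of the <code>iter_args</code> clause are illustrative assumptions rather than normative syntax):</p><div class=highlight><pre tabindex=0 class=chroma><code class=language-mlir data-lang=mlir>// Hypothetical sketch: thread a counter through the loop to count
// the stored entries of a sparse vector %sv : tensor<8xf64, #SV>.
%c0 = arith.constant 0 : index
%c1 = arith.constant 1 : index
%space = sparse_tensor.extract_iteration_space %sv lvls = 0
    : tensor<8xf64, #SV> -> !sparse_tensor.iter_space<#SV, lvls = 0 to 1>
%count = sparse_tensor.iterate %it in %space iter_args(%n = %c0)
    : !sparse_tensor.iter_space<#SV, lvls = 0 to 1> -> index {
  // One stored entry is visited per iteration.
  %n1 = arith.addi %n, %c1 : index
  sparse_tensor.yield %n1 : index
}
</code></pre></div>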
<p>A nested <code>sparse_tensor.iterate</code> example that prints all the coordinates stored in the sparse input:</p><div class=highlight><pre tabindex=0 class=chroma><code class=language-mlir data-lang=mlir><span class=line><span class=cl><span class=kt>func</span><span class=p>.</span><span class=kt>func</span> <span class=nf>@nested_iterate</span><span class=p>(</span><span class=nv>%sp</span> <span class=p>:</span> <span class=kt>tensor</span><span class=p><</span><span class=m>4x8x</span><span class=k>f32</span><span class=p>,</span> <span class=nv>#COO</span><span class=p>>)</span> <span class=p>{</span> </span></span><span class=line><span class=cl> <span class=c>// Iterates over the first level of %sp </span></span></span><span class=line><span class=cl><span class=c></span> <span class=nv>%l1</span> <span class=p>=</span> sparse_tensor<span class=p>.</span>extract_iteration_space <span class=nv>%sp</span> <span class=nl>lvls =</span> <span class=m>0</span> </span></span><span class=line><span class=cl> <span class=p>:</span> <span class=kt>tensor</span><span class=p><</span><span class=m>4x8x</span><span class=k>f32</span><span class=p>,</span> <span class=nv>#COO</span><span class=p>></span> <span class=p>-></span> <span class=p>!</span>sparse_tensor<span class=p>.</span>iter_space<span class=p><</span><span class=nv>#COO</span><span class=p>,</span> <span class=nl>lvls =</span> <span class=m>0</span> to <span class=m>1</span><span class=p>></span> </span></span><span class=line><span class=cl> sparse_tensor<span class=p>.</span>iterate <span class=nv>%it1</span> in <span class=nv>%l1</span> at <span class=p>(</span><span class=nv>%coord0</span><span class=p>)</span> </span></span><span class=line><span class=cl> <span class=p>:</span> <span class=p>!</span>sparse_tensor<span class=p>.</span>iter_space<span class=p><</span><span class=nv>#COO</span><span class=p>,</span> <span class=nl>lvls =</span> <span class=m>0</span> to <span class=m>1</span><span class=p>></span> <span class=p>{</span> </span></span><span class=line><span class=cl> <span class=c>// Iterates over the second level of %sp </span></span></span><span class=line><span class=cl><span class=c></span> <span class=nv>%l2</span> <span class=p>=</span> sparse_tensor<span class=p>.</span>extract_iteration_space <span class=nv>%sp</span> at <span class=nv>%it1</span> <span class=nl>lvls =</span> <span class=m>1</span> </span></span><span class=line><span class=cl> <span class=p>:</span> <span class=kt>tensor</span><span class=p><</span><span class=m>4x8x</span><span class=k>f32</span><span class=p>,</span> <span class=nv>#COO</span><span class=p>>,</span> <span class=p>!</span>sparse_tensor<span class=p>.</span>iterator<span class=p><</span><span class=nv>#COO</span><span class=p>,</span> <span class=nl>lvls =</span> <span class=m>0</span> to <span class=m>1</span><span class=p>></span> </span></span><span class=line><span class=cl> <span class=p>-></span> <span class=p>!</span>sparse_tensor<span class=p>.</span>iter_space<span class=p><</span><span class=nv>#COO</span><span class=p>,</span> <span class=nl>lvls =</span> <span class=m>1</span> to <span class=m>2</span><span class=p>></span> </span></span><span class=line><span
class=cl> sparse_tensor<span class=p>.</span>iterate <span class=nv>%it2</span> in <span class=nv>%l2</span> at <span class=p>(</span><span class=nv>%coord1</span><span class=p>)</span> </span></span><span class=line><span class=cl> <span class=p>:</span> <span class=p>!</span>sparse_tensor<span class=p>.</span>iter_space<span class=p><</span><span class=nv>#COO</span><span class=p>,</span> <span class=nl>lvls =</span> <span class=m>1</span> to <span class=m>2</span><span class=p>></span> <span class=p>{</span> </span></span><span class=line><span class=cl> <span class=kt>vector</span><span class=p>.</span>print <span class=nv>%coord0</span> <span class=p>:</span> <span class=k>index</span> </span></span><span class=line><span class=cl> <span class=kt>vector</span><span class=p>.</span>print <span class=nv>%coord1</span> <span class=p>:</span> <span class=k>index</span> </span></span><span class=line><span class=cl> <span class=p>}</span> </span></span><span class=line><span class=cl> <span class=p>}</span> </span></span><span class=line><span class=cl><span class=p>}</span> </span></span></code></pre></div><p>Traits: <code>RecursiveMemoryEffects</code>, <code>RecursivelySpeculatableImplTrait</code>, <code>SingleBlockImplicitTerminator<sparse_tensor::YieldOp></code>, <code>SingleBlock</code></p><p>Interfaces: <code>ConditionallySpeculatable</code>, <code>LoopLikeOpInterface</code>, <code>RegionBranchOpInterface</code></p><h4 id=attributes-7>Attributes: <a class=headline-hash href=#attributes-7>¶</a></h4><table><tr><th>Attribute</th><th>MLIR Type</th><th>Description</th></tr><tr><td><code>crdUsedLvls</code></td><td>::mlir::IntegerAttr</td><td>LevelSet attribute</td></tr></table><h4 id=operands-14>Operands: <a class=headline-hash href=#operands-14>¶</a></h4><table><thead><tr><th style=text-align:center>Operand</th><th>Description</th></tr></thead><tbody><tr><td style=text-align:center><code>iterSpace</code></td><td>sparse iteration space</td></tr><tr><td style=text-align:center><code>initArgs</code></td><td>variadic of any type</td></tr></tbody></table><h4 id=results-15>Results: <a class=headline-hash href=#results-15>¶</a></h4><table><thead><tr><th style=text-align:center>Result</th><th>Description</th></tr></thead><tbody><tr><td style=text-align:center><code>results</code></td><td>variadic of any type</td></tr></tbody></table><h3 id=sparse_tensorload-sparse_tensorloadop><code>sparse_tensor.load</code> (sparse_tensor::LoadOp) <a class=headline-hash href=#sparse_tensorload-sparse_tensorloadop>¶</a></h3><p><em>Rematerializes tensor from underlying sparse storage format</em></p><p>Syntax:</p><pre tabindex=0><code>operation ::= `sparse_tensor.load` $tensor (`hasInserts` $hasInserts^)? attr-dict `:` type($tensor) </code></pre><p>Rematerializes a tensor from the underlying sparse storage format of the given tensor. This is similar to the <code>bufferization.to_tensor</code> operation in the sense that it provides a bridge between a bufferized world view and a tensor world view. Unlike the <code>bufferization.to_tensor</code> operation, however, this sparse operation is used only temporarily to maintain a correctly typed intermediate representation during progressive bufferization.</p><p>The <code>hasInserts</code> attribute denotes whether insertions to the underlying sparse storage format may have occurred, in which case the underlying sparse storage format needs to be finalized. 
Otherwise, the operation simply folds away.</p><p>Note that this operation is “impure” in the sense that even though the result is modeled through an SSA value, the operation relies on a proper context of materializing and inserting the tensor value.</p><p>Examples:</p><div class=highlight><pre tabindex=0 class=chroma><code class=language-mlir data-lang=mlir><span class=line><span class=cl><span class=nv>%result</span> <span class=p>=</span> sparse_tensor<span class=p>.</span>load <span class=nv>%tensor</span> <span class=p>:</span> <span class=kt>tensor</span><span class=p><</span><span class=m>8x</span><span class=k>f64</span><span class=p>,</span> <span class=nv>#SV</span><span class=p>></span> </span></span><span class=line><span class=cl> </span></span><span class=line><span class=cl><span class=nv>%1</span> <span class=p>=</span> sparse_tensor<span class=p>.</span>load <span class=nv>%0</span> hasInserts <span class=p>:</span> <span class=kt>tensor</span><span class=p><</span><span class=m>16x32x</span><span class=k>f32</span><span class=p>,</span> <span class=nv>#CSR</span><span class=p>></span> </span></span></code></pre></div><p>Traits: <code>SameOperandsAndResultType</code></p><p>Interfaces: <code>InferTypeOpInterface</code></p><h4 id=attributes-8>Attributes: <a class=headline-hash href=#attributes-8>¶</a></h4><table><tr><th>Attribute</th><th>MLIR Type</th><th>Description</th></tr><tr><td><code>hasInserts</code></td><td>::mlir::UnitAttr</td><td>unit attribute</td></tr></table><h4 id=operands-15>Operands: <a class=headline-hash href=#operands-15>¶</a></h4><table><thead><tr><th style=text-align:center>Operand</th><th>Description</th></tr></thead><tbody><tr><td style=text-align:center><code>tensor</code></td><td>sparse tensor of any type values</td></tr></tbody></table><h4 id=results-16>Results: <a class=headline-hash href=#results-16>¶</a></h4><table><thead><tr><th style=text-align:center>Result</th><th>Description</th></tr></thead><tbody><tr><td style=text-align:center><code>result</code></td><td>tensor of any type values</td></tr></tbody></table><h3 id=sparse_tensorlvl-sparse_tensorlvlop><code>sparse_tensor.lvl</code> (sparse_tensor::LvlOp) <a class=headline-hash href=#sparse_tensorlvl-sparse_tensorlvlop>¶</a></h3><p><em>Level index operation</em></p><p>Syntax:</p><pre tabindex=0><code>operation ::= `sparse_tensor.lvl` attr-dict $source `,` $index `:` type($source) </code></pre><p>The <code>sparse_tensor.lvl</code> operation behaves similarly to the <code>tensor.dim</code> operation. It takes a sparse tensor and a level operand of type <code>index</code> and returns the size of the requested level of the given sparse tensor. If the sparse tensor has an identity dimension-to-level mapping, it returns the same result as <code>tensor.dim</code>. 
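</p><p>For instance, under an identity dimension-to-level mapping, such as the hypothetical <code>#CSR</code> encoding sketched below, <code>sparse_tensor.lvl</code> and <code>tensor.dim</code> agree at every index:</p><div class=highlight><pre tabindex=0 class=chroma><code class=language-mlir data-lang=mlir>// Hypothetical identity-mapped encoding: level l coincides with dimension l.
#CSR = #sparse_tensor.encoding<{
  map = (d0, d1) -> (d0 : dense, d1 : compressed)
}>

%c1 = arith.constant 1 : index
%d = tensor.dim %A, %c1 : tensor<4x8xf64, #CSR>        // returns 8
%l = sparse_tensor.lvl %A, %c1 : tensor<4x8xf64, #CSR> // returns 8 as well
</code></pre></div><p>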
If the level index is out of bounds, the behavior is undefined.</p><p>Example:</p><div class=highlight><pre tabindex=0 class=chroma><code class=language-mlir data-lang=mlir><span class=line><span class=cl><span class=nv>#BSR</span> <span class=p>=</span> <span class=nv>#sparse_tensor.encoding</span><span class=p><{</span> </span></span><span class=line><span class=cl> <span class=nl>map =</span> <span class=p>(</span> i<span class=p>,</span> j <span class=p>)</span> <span class=p>-></span> </span></span><span class=line><span class=cl> <span class=p>(</span> i floordiv <span class=m>2</span> <span class=p>:</span> dense<span class=p>,</span> </span></span><span class=line><span class=cl> j floordiv <span class=m>3</span> <span class=p>:</span> compressed<span class=p>,</span> </span></span><span class=line><span class=cl> i mod <span class=m>2</span> <span class=p>:</span> dense<span class=p>,</span> </span></span><span class=line><span class=cl> j mod <span class=m>3</span> <span class=p>:</span> dense </span></span><span class=line><span class=cl> <span class=p>)</span> </span></span><span class=line><span class=cl><span class=p>}></span> </span></span><span class=line><span class=cl> </span></span><span class=line><span class=cl><span class=c>// Always returns 2 (4 floordiv 2), can be constant folded: </span></span></span><span class=line><span class=cl><span class=c></span><span class=nv>%c0</span> <span class=p>=</span> arith<span class=p>.</span><span class=kt>constant</span> <span class=m>0</span> <span class=p>:</span> <span class=k>index</span> </span></span><span class=line><span class=cl><span class=nv>%x</span> <span class=p>=</span> sparse_tensor<span class=p>.</span>lvl <span class=nv>%A</span><span class=p>,</span> <span class=nv>%c0</span> <span class=p>:</span> <span class=kt>tensor</span><span class=p><</span><span class=m>4x?x</span><span class=k>f32</span><span class=p>,</span> <span class=nv>#BSR</span><span class=p>></span> </span></span><span class=line><span class=cl> </span></span><span class=line><span class=cl><span class=c>// Returns the dynamic size of level 1 of %A, computed by j floordiv 3. 
</span></span></span><span class=line><span class=cl><span class=c></span><span class=nv>%c1</span> <span class=p>=</span> arith<span class=p>.</span><span class=kt>constant</span> <span class=m>1</span> <span class=p>:</span> <span class=k>index</span> </span></span><span class=line><span class=cl><span class=nv>%y</span> <span class=p>=</span> sparse_tensor<span class=p>.</span>lvl <span class=nv>%A</span><span class=p>,</span> <span class=nv>%c1</span> <span class=p>:</span> <span class=kt>tensor</span><span class=p><</span><span class=m>4x?x</span><span class=k>f32</span><span class=p>,</span> <span class=nv>#BSR</span><span class=p>></span> </span></span><span class=line><span class=cl> </span></span><span class=line><span class=cl><span class=c>// Always returns 3 (since j mod 3 < 3), can be constant folded: </span></span></span><span class=line><span class=cl><span class=c></span><span class=nv>%c3</span> <span class=p>=</span> arith<span class=p>.</span><span class=kt>constant</span> <span class=m>3</span> <span class=p>:</span> <span class=k>index</span> </span></span><span class=line><span class=cl><span class=nv>%z</span> <span class=p>=</span> sparse_tensor<span class=p>.</span>lvl <span class=nv>%A</span><span class=p>,</span> <span class=nv>%c3</span> <span class=p>:</span> <span class=kt>tensor</span><span class=p><</span><span class=m>4x?x</span><span class=k>f32</span><span class=p>,</span> <span class=nv>#BSR</span><span class=p>></span> </span></span></code></pre></div><p>Interfaces: <code>ConditionallySpeculatable</code>, <code>InferTypeOpInterface</code>, <code>NoMemoryEffect (MemoryEffectOpInterface)</code></p><p>Effects: <code>MemoryEffects::Effect{}</code></p><h4 id=operands-16>Operands: <a class=headline-hash href=#operands-16>¶</a></h4><table><thead><tr><th style=text-align:center>Operand</th><th>Description</th></tr></thead><tbody><tr><td style=text-align:center><code>source</code></td><td>sparse tensor of any type values</td></tr><tr><td style=text-align:center><code>index</code></td><td>index</td></tr></tbody></table><h4 id=results-17>Results: <a class=headline-hash href=#results-17>¶</a></h4><table><thead><tr><th style=text-align:center>Result</th><th>Description</th></tr></thead><tbody><tr><td style=text-align:center><code>result</code></td><td>index</td></tr></tbody></table><h3 id=sparse_tensornew-sparse_tensornewop><code>sparse_tensor.new</code> (sparse_tensor::NewOp) <a class=headline-hash href=#sparse_tensornew-sparse_tensornewop>¶</a></h3><p><em>Materializes a new sparse tensor from given source</em></p><p>Syntax:</p><pre tabindex=0><code>operation ::= `sparse_tensor.new` $source attr-dict `:` type($source) `to` type($result) </code></pre><p>Materializes a sparse tensor with contents taken from an opaque pointer provided by <code>source</code>. For targets that have access to a file system, for example, this pointer may be a filename (or file) of a sparse tensor in a particular external storage format. The form of the operation is kept deliberately very general to allow for alternative implementations in the future, such as pointers to buffers or runnable initialization code. The operation is provided as an anchor that materializes a properly typed sparse tensor with initial contents into a computation.</p><p>Reading in a symmetric matrix will result in just the lower/upper triangular part of the matrix (so that only relevant information is stored). 
Proper symmetry support for operating on symmetric matrices is still TBD.</p><p>Example:</p><div class=highlight><pre tabindex=0 class=chroma><code class=language-mlir data-lang=mlir><span class=line><span class=cl>sparse_tensor<span class=p>.</span>new <span class=nv>%source</span> <span class=p>:</span> <span class=p>!</span>Source to <span class=kt>tensor</span><span class=p><</span><span class=m>1024x1024x</span><span class=k>f64</span><span class=p>,</span> <span class=nv>#CSR</span><span class=p>></span> </span></span></code></pre></div><p>Traits: <code>AlwaysSpeculatableImplTrait</code></p><p>Interfaces: <code>ConditionallySpeculatable</code>, <code>NoMemoryEffect (MemoryEffectOpInterface)</code></p><p>Effects: <code>MemoryEffects::Effect{}</code></p><h4 id=operands-17>Operands: <a class=headline-hash href=#operands-17>¶</a></h4><table><thead><tr><th style=text-align:center>Operand</th><th>Description</th></tr></thead><tbody><tr><td style=text-align:center><code>source</code></td><td>any type</td></tr></tbody></table><h4 id=results-18>Results: <a class=headline-hash href=#results-18>¶</a></h4><table><thead><tr><th style=text-align:center>Result</th><th>Description</th></tr></thead><tbody><tr><td style=text-align:center><code>result</code></td><td>sparse tensor of any type values</td></tr></tbody></table><h3 id=sparse_tensornumber_of_entries-sparse_tensornumberofentriesop><code>sparse_tensor.number_of_entries</code> (sparse_tensor::NumberOfEntriesOp) <a class=headline-hash href=#sparse_tensornumber_of_entries-sparse_tensornumberofentriesop>¶</a></h3><p><em>Returns the number of entries that are stored in the tensor.</em></p><p>Syntax:</p><pre tabindex=0><code>operation ::= `sparse_tensor.number_of_entries` $tensor attr-dict `:` type($tensor) </code></pre><p>Returns the number of entries that are stored in the given sparse tensor. 
Note that this is typically the number of nonzero elements in the tensor, but since explicit zeros may appear in the storage formats, the more accurate term “number of entries” is used (an explicitly stored zero value still counts as an entry).</p><p>Example:</p><div class=highlight><pre tabindex=0 class=chroma><code class=language-mlir data-lang=mlir><span class=line><span class=cl><span class=nv>%noe</span> <span class=p>=</span> sparse_tensor<span class=p>.</span>number_of_entries <span class=nv>%tensor</span> <span class=p>:</span> <span class=kt>tensor</span><span class=p><</span><span class=m>64x64x</span><span class=k>f64</span><span class=p>,</span> <span class=nv>#CSR</span><span class=p>></span> </span></span></code></pre></div><p>Traits: <code>AlwaysSpeculatableImplTrait</code></p><p>Interfaces: <code>ConditionallySpeculatable</code>, <code>InferTypeOpInterface</code>, <code>NoMemoryEffect (MemoryEffectOpInterface)</code></p><p>Effects: <code>MemoryEffects::Effect{}</code></p><h4 id=operands-18>Operands: <a class=headline-hash href=#operands-18>¶</a></h4><table><thead><tr><th style=text-align:center>Operand</th><th>Description</th></tr></thead><tbody><tr><td style=text-align:center><code>tensor</code></td><td>sparse tensor of any type values</td></tr></tbody></table><h4 id=results-19>Results: <a class=headline-hash href=#results-19>¶</a></h4><table><thead><tr><th style=text-align:center>Result</th><th>Description</th></tr></thead><tbody><tr><td style=text-align:center><code>result</code></td><td>index</td></tr></tbody></table><h3 id=sparse_tensorout-sparse_tensoroutop><code>sparse_tensor.out</code> (sparse_tensor::OutOp) <a class=headline-hash href=#sparse_tensorout-sparse_tensoroutop>¶</a></h3><p><em>Outputs a sparse tensor to the given destination</em></p><p>Syntax:</p><pre tabindex=0><code>operation ::= `sparse_tensor.out` $tensor `,` $dest attr-dict `:` type($tensor) `,` type($dest) </code></pre><p>Outputs the contents of a sparse tensor to the destination defined by an opaque pointer provided by <code>dest</code>. For targets that have access to a file system, for example, this pointer may specify a filename (or file) for output. 
The form of the operation is kept deliberately very general to allow for alternative implementations in the future, such as sending the contents to a buffer defined by a pointer.</p><p>Note that this operation is “impure” in the sense that its behavior is solely defined by side-effects and not SSA values.</p><p>Example:</p><div class=highlight><pre tabindex=0 class=chroma><code class=language-mlir data-lang=mlir><span class=line><span class=cl>sparse_tensor<span class=p>.</span>out <span class=nv>%t</span><span class=p>,</span> <span class=nv>%dest</span> <span class=p>:</span> <span class=kt>tensor</span><span class=p><</span><span class=m>1024x1024x</span><span class=k>f64</span><span class=p>,</span> <span class=nv>#CSR</span><span class=p>>,</span> <span class=p>!</span>Dest </span></span></code></pre></div><h4 id=operands-19>Operands: <a class=headline-hash href=#operands-19>¶</a></h4><table><thead><tr><th style=text-align:center>Operand</th><th>Description</th></tr></thead><tbody><tr><td style=text-align:center><code>tensor</code></td><td>sparse tensor of any type values</td></tr><tr><td style=text-align:center><code>dest</code></td><td>any type</td></tr></tbody></table><h3 id=sparse_tensorpositions-sparse_tensortopositionsop><code>sparse_tensor.positions</code> (sparse_tensor::ToPositionsOp) <a class=headline-hash href=#sparse_tensorpositions-sparse_tensortopositionsop>¶</a></h3><p><em>Extracts the <code>level</code>-th positions array of the <code>tensor</code></em></p><p>Syntax:</p><pre tabindex=0><code>operation ::= `sparse_tensor.positions` $tensor attr-dict `:` type($tensor) `to` type($result) </code></pre><p>Returns the positions array of the tensor’s storage at the given level. This is similar to the <code>bufferization.to_memref</code> operation in the sense that it provides a bridge between a tensor world view and a bufferized world view. 
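</p><p>As an illustration, for a hypothetical 3x4 CSR matrix whose three rows hold 2, 0, and 1 stored entries, respectively, the level-1 positions array would be <code>[0, 2, 2, 3]</code>, since <code>positions[i]</code> and <code>positions[i+1]</code> delimit the entries of row <code>i</code>:</p><div class=highlight><pre tabindex=0 class=chroma><code class=language-mlir data-lang=mlir>// Hypothetical illustration: extract the level-1 positions of a small
// CSR matrix; %p then holds [0, 2, 2, 3] for the example above.
%p = sparse_tensor.positions %A { level = 1 : index }
   : tensor<3x4xf64, #CSR> to memref<?xindex>
</code></pre></div><p>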
Unlike the <code>bufferization.to_memref</code> operation, however, this sparse operation actually lowers into code that extracts the positions array from the sparse storage itself (either by calling a support library or through direct code).</p><p>Writing into the result of this operation is undefined behavior.</p><p>Example:</p><div class=highlight><pre tabindex=0 class=chroma><code class=language-mlir data-lang=mlir><span class=line><span class=cl><span class=nv>%1</span> <span class=p>=</span> sparse_tensor<span class=p>.</span>positions <span class=nv>%0</span> <span class=p>{</span> <span class=nl>level =</span> <span class=m>1</span> <span class=p>:</span> <span class=k>index</span> <span class=p>}</span> </span></span><span class=line><span class=cl> <span class=p>:</span> <span class=kt>tensor</span><span class=p><</span><span class=m>64x64x</span><span class=k>f64</span><span class=p>,</span> <span class=nv>#CSR</span><span class=p>></span> to <span class=kt>memref</span><span class=p><</span><span class=m>?x</span><span class=k>index</span><span class=p>></span> </span></span></code></pre></div><p>Traits: <code>AlwaysSpeculatableImplTrait</code></p><p>Interfaces: <code>ConditionallySpeculatable</code>, <code>InferTypeOpInterface</code>, <code>NoMemoryEffect (MemoryEffectOpInterface)</code></p><p>Effects: <code>MemoryEffects::Effect{}</code></p><h4 id=attributes-9>Attributes: <a class=headline-hash href=#attributes-9>¶</a></h4><table><tr><th>Attribute</th><th>MLIR Type</th><th>Description</th></tr><tr><td><code>level</code></td><td>::mlir::IntegerAttr</td><td>level attribute</td></tr></table><h4 id=operands-20>Operands: <a class=headline-hash href=#operands-20>¶</a></h4><table><thead><tr><th style=text-align:center>Operand</th><th>Description</th></tr></thead><tbody><tr><td style=text-align:center><code>tensor</code></td><td>sparse tensor of any type values</td></tr></tbody></table><h4 id=results-20>Results: <a class=headline-hash href=#results-20>¶</a></h4><table><thead><tr><th style=text-align:center>Result</th><th>Description</th></tr></thead><tbody><tr><td style=text-align:center><code>result</code></td><td>non-0-ranked.memref of any type values</td></tr></tbody></table><h3 id=sparse_tensorprint-sparse_tensorprintop><code>sparse_tensor.print</code> (sparse_tensor::PrintOp) <a class=headline-hash href=#sparse_tensorprint-sparse_tensorprintop>¶</a></h3><p><em>Prints a sparse tensor (for testing and debugging)</em></p><p>Syntax:</p><pre tabindex=0><code>operation ::= `sparse_tensor.print` $tensor attr-dict `:` type($tensor) </code></pre><p>Prints the individual components of a sparse tensor (the positions, coordinates, and values) to stdout for testing and debugging purposes. 
This operation lowers to just a few primitives in a lightweight runtime support library, which simplifies supporting this operation on new platforms.</p><p>Example:</p><div class=highlight><pre tabindex=0 class=chroma><code class=language-mlir data-lang=mlir><span class=line><span class=cl>sparse_tensor<span class=p>.</span>print <span class=nv>%tensor</span> <span class=p>:</span> <span class=kt>tensor</span><span class=p><</span><span class=m>1024x1024x</span><span class=k>f64</span><span class=p>,</span> <span class=nv>#CSR</span><span class=p>></span> </span></span></code></pre></div><h4 id=operands-21>Operands: <a class=headline-hash href=#operands-21>¶</a></h4><table><thead><tr><th style=text-align:center>Operand</th><th>Description</th></tr></thead><tbody><tr><td style=text-align:center><code>tensor</code></td><td>sparse tensor of any type values</td></tr></tbody></table><h3 id=sparse_tensorpush_back-sparse_tensorpushbackop><code>sparse_tensor.push_back</code> (sparse_tensor::PushBackOp) <a class=headline-hash href=#sparse_tensorpush_back-sparse_tensorpushbackop>¶</a></h3><p><em>Pushes a value to the back of a given buffer</em></p><p>Syntax:</p><pre tabindex=0><code>operation ::= `sparse_tensor.push_back` (`inbounds` $inbounds^)? $curSize `,` $inBuffer `,` $value (`,` $n^ )? attr-dict `:` type($curSize) `,` type($inBuffer) `,` type($value) (`,` type($n)^ )? </code></pre><p>Pushes <code>value</code> to the end of the given sparse tensor storage buffer <code>inBuffer</code> as indicated by the value of <code>curSize</code> and returns the new size of the buffer in <code>newSize</code> (<code>newSize = curSize + n</code>). The capacity of the buffer is recorded in the memref type of <code>inBuffer</code>. If the current buffer is full, then <code>inBuffer.realloc</code> is called before pushing the data to the buffer. This is similar to the push_back of std::vector.</p><p>The optional input <code>n</code> specifies the number of times to repeatedly push the value to the back of the tensor. When <code>n</code> is a compile-time constant, its value can’t be less than 1. If <code>n</code> is a runtime value that is less than 1, the behavior is undefined. Although using input <code>n</code> is semantically equivalent to calling push_back n times, it gives the compiler more chances to optimize the memory reallocation and the filling of the memory with the same value.</p><p>The <code>inbounds</code> attribute tells the compiler that the insertion won’t go beyond the current storage buffer. This allows the compiler to omit the code for the capacity check and reallocation. 
The typical usage will be for “dynamic” sparse tensors for which a capacity can be set beforehand.</p><p>Note that this operation is “impure” in the sense that even though the result is modeled through an SSA value, referencing the memref through the old SSA value after this operation is undefined behavior.</p><p>Example:</p><div class=highlight><pre tabindex=0 class=chroma><code class=language-mlir data-lang=mlir><span class=line><span class=cl><span class=c>// %newSize = %curSize + 1 </span></span></span><span class=line><span class=cl><span class=c></span><span class=nv>%buf</span><span class=p>,</span> <span class=nv>%newSize</span> <span class=p>=</span> sparse_tensor<span class=p>.</span>push_back <span class=nv>%curSize</span><span class=p>,</span> <span class=nv>%buffer</span><span class=p>,</span> <span class=nv>%val</span> </span></span><span class=line><span class=cl> <span class=p>:</span> <span class=k>index</span><span class=p>,</span> <span class=kt>memref</span><span class=p><</span><span class=m>?x</span><span class=k>f64</span><span class=p>>,</span> <span class=k>f64</span> </span></span></code></pre></div><div class=highlight><pre tabindex=0 class=chroma><code class=language-mlir data-lang=mlir><span class=line><span class=cl><span class=nv>%buf</span><span class=p>,</span> <span class=nv>%newSize</span> <span class=p>=</span> sparse_tensor<span class=p>.</span>push_back inbounds <span class=nv>%curSize</span><span class=p>,</span> <span class=nv>%buffer</span><span class=p>,</span> <span class=nv>%val</span> </span></span><span class=line><span class=cl> <span class=p>:</span> <span class=k>index</span><span class=p>,</span> <span class=kt>memref</span><span class=p><</span><span class=m>?x</span><span class=k>f64</span><span class=p>>,</span> <span class=k>f64</span> </span></span></code></pre></div><div class=highlight><pre tabindex=0 class=chroma><code class=language-mlir data-lang=mlir><span class=line><span class=cl><span class=c>// Pushes %val %n times; %newSize = %curSize + %n </span></span></span><span class=line><span class=cl><span class=c></span><span class=nv>%buf</span><span class=p>,</span> <span class=nv>%newSize</span> <span class=p>=</span> sparse_tensor<span class=p>.</span>push_back inbounds <span class=nv>%curSize</span><span class=p>,</span> <span class=nv>%buffer</span><span class=p>,</span> <span class=nv>%val</span><span class=p>,</span> <span class=nv>%n</span> </span></span><span class=line><span class=cl> <span class=p>:</span> <span class=k>index</span><span class=p>,</span> <span class=kt>memref</span><span class=p><</span><span class=m>?x</span><span class=k>f64</span><span class=p>>,</span> <span class=k>f64</span><span class=p>,</span> <span class=k>index</span> </span></span></code></pre></div><p>Interfaces: <code>InferTypeOpInterface</code></p><h4 id=attributes-10>Attributes: <a class=headline-hash href=#attributes-10>¶</a></h4><table><tr><th>Attribute</th><th>MLIR Type</th><th>Description</th></tr><tr><td><code>inbounds</code></td><td>::mlir::UnitAttr</td><td>unit attribute</td></tr></table><h4 id=operands-22>Operands: <a class=headline-hash href=#operands-22>¶</a></h4><table><thead><tr><th style=text-align:center>Operand</th><th>Description</th></tr></thead><tbody><tr><td style=text-align:center><code>curSize</code></td><td>index</td></tr><tr><td style=text-align:center><code>inBuffer</code></td><td>1D memref of any type values</td></tr><tr><td style=text-align:center><code>value</code></td><td>any type</td></tr><tr><td style=text-align:center><code>n</code></td><td>index</td></tr></tbody></table><h4 id=results-21>Results: <a class=headline-hash href=#results-21>¶</a></h4><table><thead><tr><th style=text-align:center>Result</th><th>Description</th></tr></thead><tbody><tr><td style=text-align:center><code>outBuffer</code></td><td>1D memref of any type 
values</td></tr><tr><td style=text-align:center><code>newSize</code></td><td>index</td></tr></tbody></table><h3 id=sparse_tensorreduce-sparse_tensorreduceop><code>sparse_tensor.reduce</code> (sparse_tensor::ReduceOp) <a class=headline-hash href=#sparse_tensorreduce-sparse_tensorreduceop>¶</a></h3><p><em>Custom reduction operation utilized within linalg.generic</em></p><p>Syntax:</p><pre tabindex=0><code>operation ::= `sparse_tensor.reduce` $x `,` $y `,` $identity attr-dict `:` type($output) $region </code></pre><p>Defines a computation within a <code>linalg.generic</code> operation that takes two operands and an identity value and reduces all stored values down to a single result based on the computation in the region.</p><p>The region must contain exactly one block taking two arguments. The block must end with a sparse_tensor.yield and the output must match the input argument types.</p><p>Note that this operation is only required for custom reductions beyond the standard reduction operations (add, sub, or, xor) that can be sparsified by merely reducing the stored values. More elaborate reduction operations (mul, and, min, max, etc.) would need to account for implicit zeros as well. They can still be handled using this custom reduction operation. The <code>linalg.generic</code> <code>iterator_types</code> attribute defines which indices are being reduced. When the associated operands are used in an operation, a reduction will occur. The use of this explicit <code>reduce</code> operation is not required in most cases.</p><p>Example of Matrix->Vector reduction using min(product(x_i), 100), i.e., the running product clamped at 100:</p><div class=highlight><pre tabindex=0 class=chroma><code class=language-mlir data-lang=mlir><span class=line><span class=cl><span class=nv>%cf1</span> <span class=p>=</span> arith<span class=p>.</span><span class=kt>constant</span> <span class=m>1.0</span> <span class=p>:</span> <span class=k>f64</span> </span></span><span class=line><span class=cl><span class=nv>%cf100</span> <span class=p>=</span> arith<span class=p>.</span><span class=kt>constant</span> <span class=m>100.0</span> <span class=p>:</span> <span class=k>f64</span> </span></span><span class=line><span class=cl><span class=nv>%C</span> <span class=p>=</span> <span class=kt>tensor</span><span class=p>.</span>empty<span class=p>(...)</span> </span></span><span class=line><span class=cl><span class=nv>%0</span> <span class=p>=</span> linalg<span class=p>.</span>generic <span class=nv>#trait</span> </span></span><span class=line><span class=cl> ins<span class=p>(</span><span class=nv>%A</span><span class=p>:</span> <span class=kt>tensor</span><span class=p><</span><span class=m>?x?x</span><span class=k>f64</span><span class=p>,</span> <span class=nv>#SparseMatrix</span><span class=p>>)</span> </span></span><span class=line><span class=cl> outs<span class=p>(</span><span class=nv>%C</span><span class=p>:</span> <span class=kt>tensor</span><span class=p><</span><span class=m>?x</span><span class=k>f64</span><span class=p>,</span> <span class=nv>#SparseVector</span><span class=p>>)</span> <span class=p>{</span> </span></span><span class=line><span class=cl> <span class=nl>^bb0</span><span class=p>(</span><span class=nv>%a</span><span class=p>:</span> <span class=k>f64</span><span class=p>,</span> <span class=nv>%c</span><span class=p>:</span> <span class=k>f64</span><span class=p>)</span> <span class=p>:</span> </span></span><span class=line><span class=cl> <span class=nv>%result</span> <span class=p>=</span> sparse_tensor<span class=p>.</span>reduce <span
class=nv>%c</span><span class=p>,</span> <span class=nv>%a</span><span class=p>,</span> <span class=nv>%cf1</span> <span class=p>:</span> <span class=k>f64</span> <span class=p>{</span> </span></span><span class=line><span class=cl> <span class=nl>^bb0</span><span class=p>(</span><span class=nv>%arg0</span><span class=p>:</span> <span class=k>f64</span><span class=p>,</span> <span class=nv>%arg1</span><span class=p>:</span> <span class=k>f64</span><span class=p>):</span> </span></span><span class=line><span class=cl> <span class=nv>%0</span> <span class=p>=</span> arith<span class=p>.</span>mulf <span class=nv>%arg0</span><span class=p>,</span> <span class=nv>%arg1</span> <span class=p>:</span> <span class=k>f64</span> </span></span><span class=line><span class=cl> <span class=nv>%cmp</span> <span class=p>=</span> arith<span class=p>.</span>cmpf <span class=s>"ogt"</span><span class=p>,</span> <span class=nv>%0</span><span class=p>,</span> <span class=nv>%cf100</span> <span class=p>:</span> <span class=k>f64</span> </span></span><span class=line><span class=cl> <span class=nv>%ret</span> <span class=p>=</span> arith<span class=p>.</span>select <span class=nv>%cmp</span><span class=p>,</span> <span class=nv>%cf100</span><span class=p>,</span> <span class=nv>%0</span> <span class=p>:</span> <span class=k>f64</span> </span></span><span class=line><span class=cl> sparse_tensor<span class=p>.</span>yield <span class=nv>%ret</span> <span class=p>:</span> <span class=k>f64</span> </span></span><span class=line><span class=cl> <span class=p>}</span> </span></span><span class=line><span class=cl> linalg<span class=p>.</span>yield <span class=nv>%result</span> <span class=p>:</span> <span class=k>f64</span> </span></span><span class=line><span class=cl><span class=p>}</span> <span class=p>-></span> <span class=kt>tensor</span><span class=p><</span><span class=m>?x</span><span class=k>f64</span><span class=p>,</span> <span class=nv>#SparseVector</span><span class=p>></span> </span></span></code></pre></div><p>Traits: <code>AlwaysSpeculatableImplTrait</code>, <code>SameOperandsAndResultType</code></p><p>Interfaces: <code>ConditionallySpeculatable</code>, <code>InferTypeOpInterface</code>, <code>NoMemoryEffect (MemoryEffectOpInterface)</code></p><p>Effects: <code>MemoryEffects::Effect{}</code></p><h4 id=operands-23>Operands: <a class=headline-hash href=#operands-23>¶</a></h4><table><thead><tr><th style=text-align:center>Operand</th><th>Description</th></tr></thead><tbody><tr><td style=text-align:center><code>x</code></td><td>any type</td></tr><tr><td style=text-align:center><code>y</code></td><td>any type</td></tr><tr><td style=text-align:center><code>identity</code></td><td>any type</td></tr></tbody></table><h4 id=results-22>Results: <a class=headline-hash href=#results-22>¶</a></h4><table><thead><tr><th style=text-align:center>Result</th><th>Description</th></tr></thead><tbody><tr><td style=text-align:center><code>output</code></td><td>any type</td></tr></tbody></table><h3 id=sparse_tensorreinterpret_map-sparse_tensorreinterpretmapop><code>sparse_tensor.reinterpret_map</code> (sparse_tensor::ReinterpretMapOp) <a class=headline-hash href=#sparse_tensorreinterpret_map-sparse_tensorreinterpretmapop>¶</a></h3><p><em>Reinterprets the dimension/level maps of the source tensor</em></p><p>Syntax:</p><pre tabindex=0><code>operation ::= `sparse_tensor.reinterpret_map` $source attr-dict `:` type($source) `to` type($dest) </code></pre><p>Reinterprets the dimension-to-level and level-to-dimension map specified in 
<code>source</code> according to the type of <code>dest</code>. <code>reinterpret_map</code> is a no-op and is introduced merely to resolve type conflicts. It does not make any modification to the source tensor, and the source/dest tensors are considered to be aliases.</p><p><code>source</code> and <code>dest</code> tensors are “reinterpretable” if and only if they have exactly the same storage at a low level. That is, both <code>source</code> and <code>dest</code> have the same number of levels and level types, and their shapes are consistent before and after <code>reinterpret_map</code>.</p><p>Example:</p><div class=highlight><pre tabindex=0 class=chroma><code class=language-mlir data-lang=mlir><span class=line><span class=cl><span class=nv>#CSC</span> <span class=p>=</span> <span class=nv>#sparse_tensor.encoding</span><span class=p><{</span> </span></span><span class=line><span class=cl> <span class=nl>map =</span> <span class=p>(</span>d0<span class=p>,</span> d1<span class=p>)</span> <span class=p>-></span> <span class=p>(</span>d1<span class=p>:</span> dense<span class=p>,</span> d0<span class=p>:</span> compressed<span class=p>)</span> </span></span><span class=line><span class=cl><span class=p>}></span> </span></span><span class=line><span class=cl><span class=nv>#CSR</span> <span class=p>=</span> <span class=nv>#sparse_tensor.encoding</span><span class=p><{</span> </span></span><span class=line><span class=cl> <span class=nl>map =</span> <span class=p>(</span>d0<span class=p>,</span> d1<span class=p>)</span> <span class=p>-></span> <span class=p>(</span>d0<span class=p>:</span> dense<span class=p>,</span> d1<span class=p>:</span> compressed<span class=p>)</span> </span></span><span class=line><span class=cl><span class=p>}></span> </span></span><span class=line><span class=cl><span class=nv>%t1</span> <span class=p>=</span> sparse_tensor<span class=p>.</span>reinterpret_map <span class=nv>%t0</span> <span class=p>:</span> <span class=kt>tensor</span><span class=p><</span><span class=m>3x4x</span><span class=k>i32</span><span class=p>,</span> <span class=nv>#CSC</span><span class=p>></span> to <span class=kt>tensor</span><span class=p><</span><span class=m>4x3x</span><span class=k>i32</span><span class=p>,</span> <span class=nv>#CSR</span><span class=p>></span> </span></span><span class=line><span class=cl> </span></span><span class=line><span class=cl><span class=nv>#BSR</span> <span class=p>=</span> <span class=nv>#sparse_tensor.encoding</span><span class=p><{</span> </span></span><span class=line><span class=cl> <span class=nl>map =</span> <span class=p>(</span> i<span class=p>,</span> j <span class=p>)</span> <span class=p>-></span> <span class=p>(</span> i floordiv <span class=m>2</span> <span class=p>:</span> dense<span class=p>,</span> </span></span><span class=line><span class=cl> j floordiv <span class=m>3</span> <span class=p>:</span> compressed<span class=p>,</span> </span></span><span class=line><span class=cl> i mod <span class=m>2</span> <span class=p>:</span> dense<span class=p>,</span> </span></span><span class=line><span class=cl> j mod <span class=m>3</span> <span class=p>:</span> dense </span></span><span class=line><span class=cl> <span class=p>)</span> </span></span><span class=line><span class=cl><span class=p>}></span> </span></span><span class=line><span class=cl><span class=nv>#DSDD</span> <span class=p>=</span> <span class=nv>#sparse_tensor.encoding</span><span class=p><{</span> </span></span><span class=line><span class=cl> <span class=nl>map =</span> <span
class=p>(</span>i<span class=p>,</span> j<span class=p>,</span> k<span class=p>,</span> l<span class=p>)</span> <span class=p>-></span> <span class=p>(</span>i<span class=p>:</span> dense<span class=p>,</span> j<span class=p>:</span> compressed<span class=p>,</span> k<span class=p>:</span> dense<span class=p>,</span> l<span class=p>:</span> dense<span class=p>)</span> </span></span><span class=line><span class=cl><span class=p>}></span> </span></span><span class=line><span class=cl><span class=nv>%t1</span> <span class=p>=</span> sparse_tensor<span class=p>.</span>reinterpret_map <span class=nv>%t0</span> <span class=p>:</span> <span class=kt>tensor</span><span class=p><</span><span class=m>6x12x</span><span class=k>i32</span><span class=p>,</span> <span class=nv>#BSR</span><span class=p>></span> to <span class=kt>tensor</span><span class=p><</span><span class=m>3x4x2x3x</span><span class=k>i32</span><span class=p>,</span> <span class=nv>#DSDD</span><span class=p>></span> </span></span></code></pre></div><p>Interfaces: <code>NoMemoryEffect (MemoryEffectOpInterface)</code></p><p>Effects: <code>MemoryEffects::Effect{}</code></p><h4 id=operands-24>Operands: <a class=headline-hash href=#operands-24>¶</a></h4><table><thead><tr><th style=text-align:center>Operand</th><th>Description</th></tr></thead><tbody><tr><td style=text-align:center><code>source</code></td><td>sparse tensor of any type values</td></tr></tbody></table><h4 id=results-23>Results: <a class=headline-hash href=#results-23>¶</a></h4><table><thead><tr><th style=text-align:center>Result</th><th>Description</th></tr></thead><tbody><tr><td style=text-align:center><code>dest</code></td><td>sparse tensor of any type values</td></tr></tbody></table><h3 id=sparse_tensorreorder_coo-sparse_tensorreordercooop><code>sparse_tensor.reorder_coo</code> (sparse_tensor::ReorderCOOOp) <a class=headline-hash href=#sparse_tensorreorder_coo-sparse_tensorreordercooop>¶</a></h3><p><em>Reorders the input COO such that it has the same order as the output COO</em></p><p>Syntax:</p><pre tabindex=0><code>operation ::= `sparse_tensor.reorder_coo` $algorithm $input_coo attr-dict`:` type($input_coo) `to` type($result_coo) </code></pre><p>Reorders the input COO to the same order as specified by the output format. For example, this can reorder an unordered COO into an ordered one.</p><p>The input and result COO tensors must have the same element type, position type, and coordinate type. 
At the moment, the operation only supports reordering input and result COO tensors with the same dim2lvl map.</p><p>Example:</p><div class=highlight><pre tabindex=0 class=chroma><code class=language-mlir data-lang=mlir><span class=line><span class=cl><span class=nv>%res</span> <span class=p>=</span> sparse_tensor<span class=p>.</span>reorder_coo quick_sort <span class=nv>%coo</span> <span class=p>:</span> <span class=kt>tensor</span><span class=p><</span><span class=m>?x?x</span><span class=k>f64</span><span class=p>,</span> <span class=nv>#Unordered_COO</span><span class=p>></span> to </span></span><span class=line><span class=cl> <span class=kt>tensor</span><span class=p><</span><span class=m>?x?x</span><span class=k>f64</span><span class=p>,</span> <span class=nv>#Ordered_COO</span><span class=p>></span> </span></span></code></pre></div><p>Traits: <code>AlwaysSpeculatableImplTrait</code></p><p>Interfaces: <code>ConditionallySpeculatable</code>, <code>NoMemoryEffect (MemoryEffectOpInterface)</code></p><p>Effects: <code>MemoryEffects::Effect{}</code></p><h4 id=attributes-11>Attributes: <a class=headline-hash href=#attributes-11>¶</a></h4><table><tr><th>Attribute</th><th>MLIR Type</th><th>Description</th></tr><tr><td><code>algorithm</code></td><td>::mlir::sparse_tensor::SparseTensorSortKindAttr</td><td><details><summary>sparse tensor sort algorithm</summary><p>Enum cases:</p><ul><li>hybrid_quick_sort (<code>HybridQuickSort</code>)</li><li>insertion_sort_stable (<code>InsertionSortStable</code>)</li><li>quick_sort (<code>QuickSort</code>)</li><li>heap_sort (<code>HeapSort</code>)</li></ul></details></td></tr></table><h4 id=operands-25>Operands: <a class=headline-hash href=#operands-25>¶</a></h4><table><thead><tr><th style=text-align:center>Operand</th><th>Description</th></tr></thead><tbody><tr><td style=text-align:center><code>input_coo</code></td><td>sparse tensor of any type values</td></tr></tbody></table><h4 id=results-24>Results: <a class=headline-hash href=#results-24>¶</a></h4><table><thead><tr><th style=text-align:center>Result</th><th>Description</th></tr></thead><tbody><tr><td style=text-align:center><code>result_coo</code></td><td>sparse tensor of any type values</td></tr></tbody></table><h3 id=sparse_tensorselect-sparse_tensorselectop><code>sparse_tensor.select</code> (sparse_tensor::SelectOp) <a class=headline-hash href=#sparse_tensorselect-sparse_tensorselectop>¶</a></h3><p><em>Select operation utilized within linalg.generic</em></p><p>Syntax:</p><pre tabindex=0><code>operation ::= `sparse_tensor.select` $x attr-dict `:` type($x) $region </code></pre><p>Defines an evaluation within a <code>linalg.generic</code> operation that takes a single operand and decides whether or not to keep that operand in the output.</p><p>The region must contain exactly one block taking one argument. The block must end with a sparse_tensor.yield and the output type must be boolean.</p><p>Value thresholding is an obvious use of the select operation. 
However, by using <code>linalg.index</code>, other useful selections can be achieved, such as selecting the upper or lower triangle of a matrix.</p><p>Example of selecting A >= 4.0:</p><div class=highlight><pre tabindex=0 class=chroma><code class=language-mlir data-lang=mlir><span class=line><span class=cl><span class=nv>%C</span> <span class=p>=</span> <span class=kt>tensor</span><span class=p>.</span>empty<span class=p>(...)</span> </span></span><span class=line><span class=cl><span class=nv>%0</span> <span class=p>=</span> linalg<span class=p>.</span>generic <span class=nv>#trait</span> </span></span><span class=line><span class=cl> ins<span class=p>(</span><span class=nv>%A</span><span class=p>:</span> <span class=kt>tensor</span><span class=p><</span><span class=m>?x</span><span class=k>f64</span><span class=p>,</span> <span class=nv>#SparseVector</span><span class=p>>)</span> </span></span><span class=line><span class=cl> outs<span class=p>(</span><span class=nv>%C</span><span class=p>:</span> <span class=kt>tensor</span><span class=p><</span><span class=m>?x</span><span class=k>f64</span><span class=p>,</span> <span class=nv>#SparseVector</span><span class=p>>)</span> <span class=p>{</span> </span></span><span class=line><span class=cl> <span class=nl>^bb0</span><span class=p>(</span><span class=nv>%a</span><span class=p>:</span> <span class=k>f64</span><span class=p>,</span> <span class=nv>%c</span><span class=p>:</span> <span class=k>f64</span><span class=p>)</span> <span class=p>:</span> </span></span><span class=line><span class=cl> <span class=nv>%result</span> <span class=p>=</span> sparse_tensor<span class=p>.</span>select <span class=nv>%a</span> <span class=p>:</span> <span class=k>f64</span> <span class=p>{</span> </span></span><span class=line><span class=cl> <span class=nl>^bb0</span><span class=p>(</span><span class=nv>%arg0</span><span class=p>:</span> <span class=k>f64</span><span class=p>):</span> </span></span><span class=line><span class=cl> <span class=nv>%cf4</span> <span class=p>=</span> arith<span class=p>.</span><span class=kt>constant</span> <span class=m>4.0</span> <span class=p>:</span> <span class=k>f64</span> </span></span><span class=line><span class=cl> <span class=nv>%keep</span> <span class=p>=</span> arith<span class=p>.</span>cmpf <span class=s>"uge"</span><span class=p>,</span> <span class=nv>%arg0</span><span class=p>,</span> <span class=nv>%cf4</span> <span class=p>:</span> <span class=k>f64</span> </span></span><span class=line><span class=cl> sparse_tensor<span class=p>.</span>yield <span class=nv>%keep</span> <span class=p>:</span> <span class=k>i1</span> </span></span><span class=line><span class=cl> <span class=p>}</span> </span></span><span class=line><span class=cl> linalg<span class=p>.</span>yield <span class=nv>%result</span> <span class=p>:</span> <span class=k>f64</span> </span></span><span class=line><span class=cl><span class=p>}</span> <span class=p>-></span> <span class=kt>tensor</span><span class=p><</span><span class=m>?x</span><span class=k>f64</span><span class=p>,</span> <span class=nv>#SparseVector</span><span class=p>></span> </span></span></code></pre></div><p>Example of selecting the lower triangle of a matrix:</p><div class=highlight><pre tabindex=0 class=chroma><code class=language-mlir data-lang=mlir><span class=line><span class=cl><span class=nv>%C</span> <span class=p>=</span> <span class=kt>tensor</span><span class=p>.</span>empty<span class=p>(...)</span> </span></span><span class=line><span class=cl><span class=nv>%1</span> 
<span class=p>=</span> linalg<span class=p>.</span>generic <span class=nv>#trait</span> </span></span><span class=line><span class=cl> ins<span class=p>(</span><span class=nv>%A</span><span class=p>:</span> <span class=kt>tensor</span><span class=p><</span><span class=m>?x?x</span><span class=k>f64</span><span class=p>,</span> <span class=nv>#CSR</span><span class=p>>)</span> </span></span><span class=line><span class=cl> outs<span class=p>(</span><span class=nv>%C</span><span class=p>:</span> <span class=kt>tensor</span><span class=p><</span><span class=m>?x?x</span><span class=k>f64</span><span class=p>,</span> <span class=nv>#CSR</span><span class=p>>)</span> <span class=p>{</span> </span></span><span class=line><span class=cl> <span class=nl>^bb0</span><span class=p>(</span><span class=nv>%a</span><span class=p>:</span> <span class=k>f64</span><span class=p>,</span> <span class=nv>%c</span><span class=p>:</span> <span class=k>f64</span><span class=p>)</span> <span class=p>:</span> </span></span><span class=line><span class=cl> <span class=nv>%row</span> <span class=p>=</span> linalg<span class=p>.</span><span class=k>index</span> <span class=m>0</span> <span class=p>:</span> <span class=k>index</span> </span></span><span class=line><span class=cl> <span class=nv>%col</span> <span class=p>=</span> linalg<span class=p>.</span><span class=k>index</span> <span class=m>1</span> <span class=p>:</span> <span class=k>index</span> </span></span><span class=line><span class=cl> <span class=nv>%result</span> <span class=p>=</span> sparse_tensor<span class=p>.</span>select <span class=nv>%a</span> <span class=p>:</span> <span class=k>f64</span> <span class=p>{</span> </span></span><span class=line><span class=cl> <span class=nl>^bb0</span><span class=p>(</span><span class=nv>%arg0</span><span class=p>:</span> <span class=k>f64</span><span class=p>):</span> </span></span><span class=line><span class=cl> <span class=c>// Keep (strictly) lower-triangular entries: %col < %row. </span></span></span><span class=line><span class=cl><span class=c></span> <span class=nv>%keep</span> <span class=p>=</span> arith<span class=p>.</span>cmpi <span class=s>"ult"</span><span class=p>,</span> <span class=nv>%col</span><span class=p>,</span> <span class=nv>%row</span> <span class=p>:</span> <span class=k>index</span> </span></span><span class=line><span class=cl> sparse_tensor<span class=p>.</span>yield <span class=nv>%keep</span> <span class=p>:</span> <span class=k>i1</span> </span></span><span class=line><span class=cl> <span class=p>}</span> </span></span><span class=line><span class=cl> linalg<span class=p>.</span>yield <span class=nv>%result</span> <span class=p>:</span> <span class=k>f64</span> </span></span><span class=line><span class=cl><span class=p>}</span> <span class=p>-></span> <span class=kt>tensor</span><span class=p><</span><span class=m>?x?x</span><span class=k>f64</span><span class=p>,</span> <span class=nv>#CSR</span><span class=p>></span> </span></span></code></pre></div><p>Traits: <code>AlwaysSpeculatableImplTrait</code>, <code>SameOperandsAndResultType</code></p><p>Interfaces: <code>ConditionallySpeculatable</code>, <code>InferTypeOpInterface</code>, <code>NoMemoryEffect (MemoryEffectOpInterface)</code></p><p>Effects: <code>MemoryEffects::Effect{}</code></p><h4 id=operands-26>Operands: <a class=headline-hash href=#operands-26>¶</a></h4><table><thead><tr><th style=text-align:center>Operand</th><th>Description</th></tr></thead><tbody><tr><td style=text-align:center><code>x</code></td><td>any type</td></tr></tbody></table><h4 id=results-25>Results: <a class=headline-hash href=#results-25>¶</a></h4><table><thead><tr><th
style=text-align:center>Result</th><th>Description</th></tr></thead><tbody><tr><td style=text-align:center><code>output</code></td><td>any type</td></tr></tbody></table><h3 id=sparse_tensorsliceoffset-sparse_tensortosliceoffsetop><code>sparse_tensor.slice.offset</code> (sparse_tensor::ToSliceOffsetOp) <a class=headline-hash href=#sparse_tensorsliceoffset-sparse_tensortosliceoffsetop>¶</a></h3><p><em>Extracts the offset of the sparse tensor slice at the given dimension</em></p><p>Syntax:</p><pre tabindex=0><code>operation ::= `sparse_tensor.slice.offset` $slice `at` $dim attr-dict `:` type($slice) </code></pre><p>Extracts the offset of the sparse tensor slice at the given dimension.</p><p>Currently, sparse tensor slices are still a work in progress, and they only work when the runtime library is disabled (i.e., when running the sparsifier with <code>enable-runtime-library=false</code>).</p><p>Example:</p><div class=highlight><pre tabindex=0 class=chroma><code class=language-mlir data-lang=mlir><span class=line><span class=cl><span class=nv>%0</span> <span class=p>=</span> <span class=kt>tensor</span><span class=p>.</span>extract_slice <span class=nv>%s</span><span class=p>[</span><span class=nv>%v1</span><span class=p>,</span> <span class=nv>%v2</span><span class=p>][</span><span class=m>64</span><span class=p>,</span> <span class=m>64</span><span class=p>][</span><span class=m>1</span><span class=p>,</span> <span class=m>1</span><span class=p>]</span> <span class=p>:</span> <span class=kt>tensor</span><span class=p><</span><span class=m>128x128x</span><span class=k>f64</span><span class=p>,</span> <span class=nv>#DCSR</span><span class=p>></span> </span></span><span class=line><span class=cl> to <span class=kt>tensor</span><span class=p><</span><span class=m>64x64x</span><span class=k>f64</span><span class=p>,</span> <span class=nv>#Slice</span><span class=p>></span> </span></span><span class=line><span class=cl> </span></span><span class=line><span class=cl><span class=nv>%1</span> <span class=p>=</span> sparse_tensor<span class=p>.</span>slice<span class=p>.</span>offset <span class=nv>%0</span> at <span class=m>0</span> <span class=p>:</span> <span class=kt>tensor</span><span class=p><</span><span class=m>64x64x</span><span class=k>f64</span><span class=p>,</span> <span class=nv>#Slice</span><span class=p>></span> </span></span><span class=line><span class=cl><span class=nv>%2</span> <span class=p>=</span> sparse_tensor<span class=p>.</span>slice<span class=p>.</span>offset <span class=nv>%0</span> at <span class=m>1</span> <span class=p>:</span> <span class=kt>tensor</span><span class=p><</span><span class=m>64x64x</span><span class=k>f64</span><span class=p>,</span> <span class=nv>#Slice</span><span class=p>></span> </span></span><span class=line><span class=cl><span class=c>// %1 = %v1 </span></span></span><span class=line><span class=cl><span class=c>// %2 = %v2 </span></span></span></code></pre></div><p>Traits: <code>AlwaysSpeculatableImplTrait</code></p><p>Interfaces: <code>ConditionallySpeculatable</code>, <code>InferTypeOpInterface</code>, <code>NoMemoryEffect (MemoryEffectOpInterface)</code></p><p>Effects: <code>MemoryEffects::Effect{}</code></p><h4 id=attributes-12>Attributes: <a class=headline-hash href=#attributes-12>¶</a></h4><table><tr><th>Attribute</th><th>MLIR Type</th><th>Description</th></tr><tr><td><code>dim</code></td><td>::mlir::IntegerAttr</td><td>index attribute</td></tr></table><h4 id=operands-27>Operands: <a class=headline-hash 
href=#operands-27>¶</a></h4><table><thead><tr><th style=text-align:center>Operand</th><th>Description</th></tr></thead><tbody><tr><td style=text-align:center><code>slice</code></td><td>sparse tensor slice of any type values</td></tr></tbody></table><h4 id=results-26>Results: <a class=headline-hash href=#results-26>¶</a></h4><table><thead><tr><th style=text-align:center>Result</th><th>Description</th></tr></thead><tbody><tr><td style=text-align:center><code>offset</code></td><td>index</td></tr></tbody></table><h3 id=sparse_tensorslicestride-sparse_tensortoslicestrideop><code>sparse_tensor.slice.stride</code> (sparse_tensor::ToSliceStrideOp) <a class=headline-hash href=#sparse_tensorslicestride-sparse_tensortoslicestrideop>¶</a></h3><p><em>Extracts the stride of the sparse tensor slice at the given dimension</em></p><p>Syntax:</p><pre tabindex=0><code>operation ::= `sparse_tensor.slice.stride` $slice `at` $dim attr-dict `:` type($slice) </code></pre><p>Extracts the stride of the sparse tensor slice at the given dimension.</p><p>Currently, sparse tensor slices are still a work in progress, and only works when runtime library is disabled (i.e., running the sparsifier with <code>enable-runtime-library=false</code>).</p><p>Example:</p><div class=highlight><pre tabindex=0 class=chroma><code class=language-mlir data-lang=mlir><span class=line><span class=cl><span class=nv>%0</span> <span class=p>=</span> <span class=kt>tensor</span><span class=p>.</span>extract_slice <span class=nv>%s</span><span class=p>[</span><span class=nv>%v1</span><span class=p>,</span> <span class=nv>%v2</span><span class=p>][</span><span class=m>64</span><span class=p>,</span> <span class=m>64</span><span class=p>][</span><span class=nv>%s1</span><span class=p>,</span> <span class=nv>%s2</span><span class=p>]</span> <span class=p>:</span> <span class=kt>tensor</span><span class=p><</span><span class=m>128x128x</span><span class=k>f64</span><span class=p>,</span> <span class=nv>#DCSR</span><span class=p>></span> </span></span><span class=line><span class=cl> to <span class=kt>tensor</span><span class=p><</span><span class=m>64x64x</span><span class=k>f64</span><span class=p>,</span> <span class=nv>#Slice</span><span class=p>></span> </span></span><span class=line><span class=cl> </span></span><span class=line><span class=cl><span class=nv>%1</span> <span class=p>=</span> sparse_tensor<span class=p>.</span>slice<span class=p>.</span>stride <span class=nv>%0</span> at <span class=m>0</span> <span class=p>:</span> <span class=kt>tensor</span><span class=p><</span><span class=m>64x64x</span><span class=k>f64</span><span class=p>,</span> <span class=nv>#Slice</span><span class=p>></span> </span></span><span class=line><span class=cl><span class=nv>%2</span> <span class=p>=</span> sparse_tensor<span class=p>.</span>slice<span class=p>.</span>stride <span class=nv>%0</span> at <span class=m>1</span> <span class=p>:</span> <span class=kt>tensor</span><span class=p><</span><span class=m>64x64x</span><span class=k>f64</span><span class=p>,</span> <span class=nv>#Slice</span><span class=p>></span> </span></span><span class=line><span class=cl><span class=c>// %1 = %s1 </span></span></span><span class=line><span class=cl><span class=c>// %2 = %s2 </span></span></span></code></pre></div><p>Traits: <code>AlwaysSpeculatableImplTrait</code></p><p>Interfaces: <code>ConditionallySpeculatable</code>, <code>InferTypeOpInterface</code>, <code>NoMemoryEffect (MemoryEffectOpInterface)</code></p><p>Effects: 
Effects: `MemoryEffects::Effect{}`

#### Attributes:

| Attribute | MLIR Type | Description |
| --------- | --------- | ----------- |
| `dim` | ::mlir::IntegerAttr | index attribute |

#### Operands:

| Operand | Description |
| :-----: | ----------- |
| `slice` | sparse tensor slice of any type values |

#### Results:

| Result | Description |
| :----: | ----------- |
| `stride` | index |

### `sparse_tensor.sort` (sparse_tensor::SortOp)

_Sorts the arrays in xs and ys lexicographically on the integral values found in the xs list_

Syntax:

```
operation ::= `sparse_tensor.sort` $algorithm $n `,` $xy (`jointly` $ys^)? attr-dict `:` type($xy) (`jointly` type($ys)^)?
```

Sorts the `xs` values along with some `ys` values that are put in a single linear buffer `xy`. The affine map attribute `perm_map` specifies the permutation to be applied on the `xs` before comparison; the rank of the permutation map also specifies the number of `xs` values in `xy`. The optional index attribute `ny` provides the number of `ys` values in `xy`; when `ny` is not explicitly specified, its value is 0. This operation supports a more efficient way to store the COO definition in sparse tensor type.

The buffer `xy` should have a dimension not less than `n * (rank(perm_map) + ny)`, while each buffer in `ys` should have a dimension not less than `n`. For example, with `rank(perm_map) = 2` and `ny = 1`, `xy` must hold at least `3 * n` values. The behavior of the operation is undefined if this condition is not met.

Example:

```mlir
sparse_tensor.sort insertion_sort_stable %n, %x { perm_map = affine_map<(i,j) -> (j,i)> }
  : memref<?xindex>
```

#### Attributes:

| Attribute | MLIR Type | Description |
| --------- | --------- | ----------- |
| `perm_map` | ::mlir::AffineMapAttr | AffineMap attribute |
| `ny` | ::mlir::IntegerAttr | index attribute |
| `algorithm` | ::mlir::sparse_tensor::SparseTensorSortKindAttr | sparse tensor sort algorithm (enum cases: `hybrid_quick_sort` (`HybridQuickSort`), `insertion_sort_stable` (`InsertionSortStable`), `quick_sort` (`QuickSort`), `heap_sort` (`HeapSort`)) |

#### Operands:

| Operand | Description |
| :-----: | ----------- |
| `n` | index |
| `xy` | 1D memref of integer or index values |
| `ys` | variadic of 1D memref of any type values |
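The example above sorts pure keys. As a sketch of the joint form, built from the grammar above (the names `%n`, `%xy`, and `%extra` are illustrative assumptions): each tuple in `%xy` packs two keys plus one trailing `ys` value (`ny = 1`), and the separate buffer `%extra` is reordered along with `%xy`:

```mlir
// Hypothetical joint sort: %xy holds n tuples of (key0, key1, y0),
// so it needs at least 3 * n entries, and %extra holds one
// additional f64 value per tuple that is permuted alongside.
sparse_tensor.sort quick_sort %n, %xy jointly %extra
  { perm_map = affine_map<(i, j) -> (i, j)>, ny = 1 : index }
  : memref<?xindex> jointly memref<?xf64>
```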
### `sparse_tensor.storage_specifier.get` (sparse_tensor::GetStorageSpecifierOp)

Syntax:

```
operation ::= `sparse_tensor.storage_specifier.get` $specifier $specifierKind (`at` $level^)? attr-dict `:` qualified(type($specifier))
```

Returns the requested field of the given storage specifier.

Example of querying the size of the coordinates array for level 0:

```mlir
%0 = sparse_tensor.storage_specifier.get %arg0 crd_mem_sz at 0
     : !sparse_tensor.storage_specifier<#COO>
```

Traits: `AlwaysSpeculatableImplTrait`

Interfaces: `ConditionallySpeculatable`, `InferTypeOpInterface`, `NoMemoryEffect (MemoryEffectOpInterface)`

Effects: `MemoryEffects::Effect{}`

#### Attributes:

| Attribute | MLIR Type | Description |
| --------- | --------- | ----------- |
| `specifierKind` | ::mlir::sparse_tensor::StorageSpecifierKindAttr | sparse tensor storage specifier kind (enum cases: `lvl_sz` (`LvlSize`), `pos_mem_sz` (`PosMemSize`), `crd_mem_sz` (`CrdMemSize`), `val_mem_sz` (`ValMemSize`), `dim_offset` (`DimOffset`), `dim_stride` (`DimStride`)) |
| `level` | ::mlir::IntegerAttr | level attribute |

#### Operands:

| Operand | Description |
| :-----: | ----------- |
| `specifier` | metadata |

#### Results:

| Result | Description |
| :----: | ----------- |
| `result` | index |

### `sparse_tensor.storage_specifier.init` (sparse_tensor::StorageSpecifierInitOp)

Syntax:

```
operation ::= `sparse_tensor.storage_specifier.init` attr-dict (`with` $source^)? `:` (`from` qualified(type($source))^ `to`)? qualified(type($result))
```

Returns an initial storage specifier value. A storage specifier value holds the level-sizes, position arrays, coordinate arrays, and the value array. If this is a specifier for slices, it also holds the extra strides/offsets for each tensor dimension.

TODO: The sparse tensor slice support is currently in an unstable state and is subject to change in the future.

Example:

```mlir
#CSR = #sparse_tensor.encoding<{
  map = (i, j) -> (i : dense, j : compressed)
}>
#CSR_SLICE = #sparse_tensor.encoding<{
  map = (d0 : #sparse_tensor<slice(1, 4, 1)>,
         d1 : #sparse_tensor<slice(1, 4, 2)>) ->
        (d0 : dense, d1 : compressed)
}>

%0 = sparse_tensor.storage_specifier.init : !sparse_tensor.storage_specifier<#CSR>
%1 = sparse_tensor.storage_specifier.init with %src
     : !sparse_tensor.storage_specifier<#CSR> to
       !sparse_tensor.storage_specifier<#CSR_SLICE>
```

Traits: `AlwaysSpeculatableImplTrait`

Interfaces: `ConditionallySpeculatable`, `NoMemoryEffect (MemoryEffectOpInterface)`

Effects: `MemoryEffects::Effect{}`

#### Operands:

| Operand | Description |
| :-----: | ----------- |
| `source` | metadata |

#### Results:

| Result | Description |
| :----: | ----------- |
| `result` | metadata |

### `sparse_tensor.storage_specifier.set` (sparse_tensor::SetStorageSpecifierOp)

Syntax:

```
operation ::= `sparse_tensor.storage_specifier.set` $specifier $specifierKind (`at` $level^)? `with` $value attr-dict `:` qualified(type($result))
```

Sets the field of the storage specifier to the given input value. Returns the updated storage specifier as a new SSA value.

Example of updating the size of the coordinates array for level 0:

```mlir
%0 = sparse_tensor.storage_specifier.set %arg0 crd_mem_sz at 0 with %new_sz
   : !sparse_tensor.storage_specifier<#COO>
```

Traits: `AlwaysSpeculatableImplTrait`

Interfaces: `ConditionallySpeculatable`, `InferTypeOpInterface`, `NoMemoryEffect (MemoryEffectOpInterface)`

Effects: `MemoryEffects::Effect{}`

#### Attributes:

| Attribute | MLIR Type | Description |
| --------- | --------- | ----------- |
| `specifierKind` | ::mlir::sparse_tensor::StorageSpecifierKindAttr | sparse tensor storage specifier kind (enum cases: `lvl_sz` (`LvlSize`), `pos_mem_sz` (`PosMemSize`), `crd_mem_sz` (`CrdMemSize`), `val_mem_sz` (`ValMemSize`), `dim_offset` (`DimOffset`), `dim_stride` (`DimStride`)) |
| `level` | ::mlir::IntegerAttr | level attribute |

#### Operands:

| Operand | Description |
| :-----: | ----------- |
| `specifier` | metadata |
| `value` | index |

#### Results:

| Result | Description |
| :----: | ----------- |
| `result` | metadata |
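Since `get` returns an `index` and `set` produces a fresh specifier value, the two operations compose naturally. A sketch of a query-then-update sequence, using only the syntax shown above (the names `%spec` and `%delta` are assumptions for illustration):

```mlir
// Read the current size of the level-0 coordinates array ...
%sz  = sparse_tensor.storage_specifier.get %spec crd_mem_sz at 0
     : !sparse_tensor.storage_specifier<#COO>
// ... grow it by %delta, and record the new size in an updated specifier.
%new = arith.addi %sz, %delta : index
%0   = sparse_tensor.storage_specifier.set %spec crd_mem_sz at 0 with %new
     : !sparse_tensor.storage_specifier<#COO>
```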
### `sparse_tensor.unary` (sparse_tensor::UnaryOp)

_Unary set operation utilized within linalg.generic_

Syntax:

```
operation ::= `sparse_tensor.unary` $x attr-dict `:` type($x) `to` type($output) `\n`
              `present` `=` $presentRegion `\n`
              `absent` `=` $absentRegion
```

Defines a computation with a `linalg.generic` operation that takes a single operand and executes one of two regions depending on whether the operand is nonzero (i.e. stored explicitly in the sparse storage format).

Two regions are defined for the operation and must appear in this order:

- present (elements present in the sparse tensor)
- absent (elements not present in the sparse tensor)

Each region contains a single block describing the computation and result. A non-empty block must end with a `sparse_tensor.yield` and the return type must match the type of `output`. The present region's block has one argument, while the absent region's block has zero arguments. The absent region may only generate constants or values already computed on entry of the `linalg.generic` operation.

A region may also be declared empty (i.e. `absent={}`), indicating that the region does not contribute to the output.

Due to the possibility of empty regions, i.e. lack of a value for certain cases, the result of this operation may only feed directly into the output of the `linalg.generic` operation or into a custom reduction `sparse_tensor.reduce` operation that follows in the same region.

Example of A+1, restricted to existing elements:

```mlir
%C = tensor.empty(...) : tensor<?xf64, #SparseVector>
%0 = linalg.generic #trait
   ins(%A: tensor<?xf64, #SparseVector>)
  outs(%C: tensor<?xf64, #SparseVector>) {
  ^bb0(%a: f64, %c: f64) :
    %result = sparse_tensor.unary %a : f64 to f64
      present={
        ^bb0(%arg0: f64):
          %cf1 = arith.constant 1.0 : f64
          %ret = arith.addf %arg0, %cf1 : f64
          sparse_tensor.yield %ret : f64
      }
      absent={}
    linalg.yield %result : f64
} -> tensor<?xf64, #SparseVector>
```

Example returning +1 for existing values and -1 for missing values:

```mlir
%p1 = arith.constant  1 : i32
%m1 = arith.constant -1 : i32
%C = tensor.empty(...) : tensor<?xi32, #SparseVector>
%1 = linalg.generic #trait
   ins(%A: tensor<?xf64, #SparseVector>)
  outs(%C: tensor<?xi32, #SparseVector>) {
  ^bb0(%a: f64, %c: i32) :
    %result = sparse_tensor.unary %a : f64 to i32
      present={
        ^bb0(%x: f64):
          sparse_tensor.yield %p1 : i32
      }
      absent={
        sparse_tensor.yield %m1 : i32
      }
    linalg.yield %result : i32
} -> tensor<?xi32, #SparseVector>
```

Example showing a structural inversion (existing values become missing in the output, while missing values are filled with 1):

```mlir
%c1 = arith.constant 1 : i64
%C = tensor.empty(...) : tensor<?xi64, #SparseVector>
%2 = linalg.generic #trait
   ins(%A: tensor<?xf64, #SparseVector>)
  outs(%C: tensor<?xi64, #SparseVector>) {
  ^bb0(%a: f64, %c: i64) :
    %result = sparse_tensor.unary %a : f64 to i64
      present={}
      absent={
        sparse_tensor.yield %c1 : i64
      }
    linalg.yield %result : i64
} -> tensor<?xi64, #SparseVector>
```

Traits: `AlwaysSpeculatableImplTrait`

Interfaces: `ConditionallySpeculatable`, `NoMemoryEffect (MemoryEffectOpInterface)`

Effects: `MemoryEffects::Effect{}`

#### Operands:

| Operand | Description |
| :-----: | ----------- |
| `x` | any type |

#### Results:

| Result | Description |
| :----: | ----------- |
| `output` | any type |

### `sparse_tensor.values` (sparse_tensor::ToValuesOp)

_Extracts numerical values array from a tensor_

Syntax:

```
operation ::= `sparse_tensor.values` $tensor attr-dict `:` type($tensor) `to` type($result)
```

Returns the values array of the sparse storage format for the given sparse tensor, independent of the actual dimension. This is similar to the `bufferization.to_memref` operation in the sense that it provides a bridge between a tensor world view and a bufferized world view.
Unlike the `bufferization.to_memref` operation, however, this sparse operation actually lowers into code that extracts the values array from the sparse storage scheme (either by calling a support library or through direct code).

Writing into the result of this operation is undefined behavior.

Example:

```mlir
%1 = sparse_tensor.values %0 : tensor<64x64xf64, #CSR> to memref<?xf64>
```

Traits: `AlwaysSpeculatableImplTrait`

Interfaces: `ConditionallySpeculatable`, `InferTypeOpInterface`, `NoMemoryEffect (MemoryEffectOpInterface)`

Effects: `MemoryEffects::Effect{}`

#### Operands:

| Operand | Description |
| :-----: | ----------- |
| `tensor` | sparse tensor of any type values |

#### Results:

| Result | Description |
| :----: | ----------- |
| `result` | non-0-ranked memref of any type values |

### `sparse_tensor.yield` (sparse_tensor::YieldOp)

_Yield from sparse_tensor set-like operations_

Syntax:

```
operation ::= `sparse_tensor.yield` $results attr-dict `:` type($results)
```

Yields a value from within a `binary`, `unary`, `reduce`, `select` or `foreach` block.

Example:

```mlir
%0 = sparse_tensor.unary %a : i64 to i64 {
  present={
    ^bb0(%arg0: i64):
      %cst = arith.constant 1 : i64
      %ret = arith.addi %arg0, %cst : i64
      sparse_tensor.yield %ret : i64
  }
}
```
Traits: `AlwaysSpeculatableImplTrait`, `HasParent<BinaryOp, UnaryOp, ReduceOp, SelectOp, ForeachOp, IterateOp, CoIterateOp>`, `Terminator`

Interfaces: `ConditionallySpeculatable`, `NoMemoryEffect (MemoryEffectOpInterface)`

Effects: `MemoryEffects::Effect{}`

#### Operands:

| Operand | Description |
| :-----: | ----------- |
| `results` | variadic of any type |

## Attributes

### CrdTransDirectionKindAttr

sparse tensor coordinate translation direction

Syntax:

```
#sparse_tensor.CrdTransDirection<
  ::mlir::sparse_tensor::CrdTransDirectionKind   # value
>
```

Enum cases:

- dim_to_lvl (`dim2lvl`)
- lvl_to_dim (`lvl2dim`)

#### Parameters:

| Parameter | C++ type | Description |
| :-------: | :------: | ----------- |
| value | `::mlir::sparse_tensor::CrdTransDirectionKind` | an enum of type CrdTransDirectionKind |
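This attribute selects the direction of the `sparse_tensor.crd_translate` operation documented earlier on this page. A minimal sketch of both directions, assuming a 2x3 block-sparse `#BSR` encoding and dimension coordinates `%d0`, `%d1` already in scope:

```mlir
// Translate 2-d dimension coordinates into the four level coordinates
// of the #BSR encoding (block coordinates plus in-block coordinates) ...
%l0, %l1, %l2, %l3 = sparse_tensor.crd_translate dim_to_lvl [%d0, %d1] as #BSR
                   : index, index, index, index
// ... and map the level coordinates back to dimension coordinates.
%e0, %e1 = sparse_tensor.crd_translate lvl_to_dim [%l0, %l1, %l2, %l3] as #BSR
         : index, index
```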
### SparseTensorDimSliceAttr

An attribute to encode slice information of a sparse tensor on a particular dimension (a tuple of offset, size, stride).

#### Parameters:

| Parameter | C++ type | Description |
| :-------: | :------: | ----------- |
| offset | `int64_t` | |
| size | `int64_t` | |
| stride | `int64_t` | |

### SparseTensorEncodingAttr

An attribute to encode information on sparsity properties of tensors, inspired by the TACO formalization of sparse tensors. This encoding is eventually used by a **sparsifier** pass to generate sparse code fully automatically from a sparsity-agnostic representation of the computation, i.e., an implicit sparse representation is converted to an explicit sparse representation where co-iterating loops operate on sparse storage formats rather than tensors with a sparsity encoding. Compiler passes that run before this sparsifier pass need to be aware of the semantics of tensor types with such a sparsity encoding.

In this encoding, we use **dimension** to refer to the axes of the semantic tensor, and **level** to refer to the axes of the actual storage format, i.e., the operational representation of the sparse tensor in memory. The number of dimensions is usually the same as the number of levels (as in the CSR storage format). However, the encoding can also map dimensions to higher-order levels (for example, to encode a block-sparse BSR storage format) or to lower-order levels (for example, to linearize dimensions as a single level in the storage).

The encoding contains a map that provides the following:

- An ordered sequence of dimension specifications, each of which defines:
  - the dimension-size (implicit from the tensor's dimension-shape)
  - a **dimension-expression**
- An ordered sequence of level specifications, each of which includes a required **level-type**, which defines how the level should be stored. Each level-type consists of:
  - a **level-expression**, which defines what is stored
  - a **level-format**
  - a collection of **level-properties** that apply to the level-format

Each level-expression is an affine expression over dimension-variables. Thus, the level-expressions collectively define an affine map from dimension-coordinates to level-coordinates. The dimension-expressions collectively define the inverse map, which only needs to be provided for elaborate cases where it cannot be inferred automatically.

Each dimension could also have an optional `SparseTensorDimSliceAttr`. Within the sparse storage format, we refer to indices that are stored explicitly as **coordinates** and offsets into the storage format as **positions**.

The supported level-formats are the following:

- **dense** : all entries along this level are stored and linearized
- **batch** : all entries along this level are stored but not linearized
- **compressed** : only nonzeros along this level are stored
- **loose_compressed** : as compressed, but allows for free space between regions
- **singleton** : a variant of the compressed format, where coordinates have no siblings
- **structured[n, m]** : the compression uses an n:m encoding (viz. n out of m consecutive elements are nonzero)

For a compressed level, each position interval is represented in a compact way with a lowerbound `pos(i)` and an upperbound `pos(i+1) - 1`, which implies that successive intervals must appear in order without any "holes" in between them. The loose compressed format relaxes these constraints by representing each position interval with a lowerbound `lo(i)` and an upperbound `hi(i)`, which allows intervals to appear in arbitrary order and with elbow room between them.
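As a concrete illustration of these position intervals, here is how a small matrix could be laid out under the CSR encoding shown in the examples below (an informal sketch of the conceptual arrays, not the output of any particular tool):

```mlir
// The 3x4 matrix
//   [ 1, 0, 0, 2 ]
//   [ 0, 0, 0, 0 ]
//   [ 0, 3, 0, 0 ]
// stored as CSR, i.e. (i : dense, j : compressed):
//   positions[1]   = [ 0, 2, 2, 3 ]    // row i spans [pos(i), pos(i+1))
//   coordinates[1] = [ 0, 3, 1 ]       // column coordinate of each entry
//   values         = [ 1.0, 2.0, 3.0 ] // stored nonzeros, row by row
```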
By default, each level-type has the property of being unique (no duplicate coordinates at that level) and ordered (coordinates appear sorted at that level). For singleton levels, the coordinates are fused with their parents in an AoS (array of structures) scheme. The following properties can be added to a level-format to change this default behavior:

- **nonunique** : duplicate coordinates may appear at the level
- **nonordered** : coordinates may appear in arbitrary order
- **soa** : only applicable to singleton levels, fuses the singleton level in an SoA (structure of arrays) scheme

In addition to the map, the following fields are optional:

- The required bitwidth for position storage (integral offsets into the sparse storage scheme). A narrow width reduces the memory footprint of overhead storage, as long as the width suffices to define the total required range (viz. the maximum number of stored entries over all indirection levels). The choices are `8`, `16`, `32`, `64`, or, the default, `0` to indicate the native bitwidth.

- The required bitwidth for coordinate storage (the coordinates of stored entries). A narrow width reduces the memory footprint of overhead storage, as long as the width suffices to define the total required range (viz. the maximum value of each tensor coordinate over all levels). The choices are `8`, `16`, `32`, `64`, or, the default, `0` to indicate a native bitwidth.

- The explicit value for the sparse tensor. If `explicitVal` is set, then all the nonzero values in the tensor have the same explicit value. The default value `Attribute()` indicates that it is not set. This is useful for binary-valued sparse tensors whose values can either be an implicit value (0 by default) or an explicit value (such as 1). In this approach, we don't store explicit/implicit values, and metadata (such as position and coordinate arrays) alone fully defines the original tensor. This yields additional savings for the storage requirements, as well as for the computational time, since we skip operating on implicit values and can constant fold the explicit values where they are used.

- The implicit value for the sparse tensor. If `implicitVal` is set, then the "zero" value in the tensor is equal to the implicit value. For now, we only support `0` as the implicit value, but it could be extended in the future. The default value `Attribute()` indicates that the implicit value is `0` (same type as the tensor element type).

Examples:

```mlir
// Sparse vector.
#SparseVector = #sparse_tensor.encoding<{
  map = (i) -> (i : compressed)
}>
... tensor<?xf32, #SparseVector> ...

// Sorted coordinate scheme (arranged in AoS format by default).
#SortedCOO = #sparse_tensor.encoding<{
  map = (i, j) -> (i : compressed(nonunique), j : singleton)
}>
// coordinates = {x_crd, y_crd}[nnz]
... tensor<?x?xf64, #SortedCOO> ...

// Sorted coordinate scheme (arranged in SoA format).
#SortedCOO = #sparse_tensor.encoding<{
  map = (i, j) -> (i : compressed(nonunique), j : singleton(soa))
}>
// coordinates = {x_crd[nnz], y_crd[nnz]}
... tensor<?x?xf64, #SortedCOO> ...

// Batched sorted coordinate scheme, with high encoding.
#BCOO = #sparse_tensor.encoding<{
  map = (i, j, k) -> (i : dense, j : compressed(nonunique, high), k : singleton)
}>
... tensor<10x10xf32, #BCOO> ...

// Compressed sparse row.
#CSR = #sparse_tensor.encoding<{
  map = (i, j) -> (i : dense, j : compressed)
}>
... tensor<100x100xbf16, #CSR> ...

// Doubly compressed sparse column storage with specific bitwidths.
#DCSC = #sparse_tensor.encoding<{
  map = (i, j) -> (j : compressed, i : compressed),
  posWidth = 32,
  crdWidth = 8
}>
... tensor<8x8xf64, #DCSC> ...

// Doubly compressed sparse column storage with specific
// explicit and implicit values.
#DCSC = #sparse_tensor.encoding<{
  map = (i, j) -> (j : compressed, i : compressed),
  explicitVal = 1 : i64,
  implicitVal = 0 : i64
}>
... tensor<8x8xi64, #DCSC> ...

// Block sparse row storage (2x3 blocks).
#BSR = #sparse_tensor.encoding<{
  map = ( i, j ) ->
    ( i floordiv 2 : dense,
      j floordiv 3 : compressed,
      i mod 2 : dense,
      j mod 3 : dense
    )
}>
... tensor<20x30xf32, #BSR> ...

// Same block sparse row storage (2x3 blocks) but this time
// also with a redundant reverse mapping, which can be inferred.
#BSR_explicit = #sparse_tensor.encoding<{
  map = { ib, jb, ii, jj }
        ( i = ib * 2 + ii,
          j = jb * 3 + jj) ->
        ( ib = i floordiv 2 : dense,
          jb = j floordiv 3 : compressed,
          ii = i mod 2 : dense,
          jj = j mod 3 : dense)
}>
... tensor<20x30xf32, #BSR_explicit> ...

// ELL format.
// In the simple format for matrix, one array stores values and another
// array stores column indices. The arrays have the same number of rows
// as the original matrix, but only have as many columns as
// the maximum number of nonzeros on a row of the original matrix.
// There are many variants for ELL such as jagged diagonal scheme.
// To implement ELL, map provides a notion of "counting a
// dimension", where every stored element with the same coordinate
// is mapped to a new slice. For instance, ELL storage of a 2-d
// tensor can be defined with the mapping (i, j) -> (#i, i, j)
// using the notation of [Chou20]. Lacking the # symbol in MLIR's
// affine mapping, we use a free symbol c to define such counting,
// together with a constant that denotes the number of resulting
// slices. For example, the mapping [c](i, j) -> (c * 3 * i, i, j)
// with the level-types ["dense", "dense", "compressed"] denotes ELL
// storage with three jagged diagonals that count the dimension i.
#ELL = #sparse_tensor.encoding<{
  map = [c](i, j) -> (c * 3 * i : dense, i : dense, j : compressed)
}>
... tensor<?x?xf64, #ELL> ...

// CSR slice (offset = 0, size = 4, stride = 1 on the first dimension;
// offset = 0, size = 8, and a dynamic stride on the second dimension).
#CSR_SLICE = #sparse_tensor.encoding<{
  map = (i : #sparse_tensor<slice(0, 4, 1)>,
         j : #sparse_tensor<slice(0, 8, ?)>) ->
        (i : dense, j : compressed)
}>
... tensor<?x?xf64, #CSR_SLICE> ...
```

#### Parameters:

| Parameter | C++ type | Description |
| :-------: | :------: | ----------- |
| lvlTypes | `::llvm::ArrayRef<::mlir::sparse_tensor::LevelType>` | level-types |
| dimToLvl | `AffineMap` | |
| lvlToDim | `AffineMap` | |
| posWidth | `unsigned` | |
| crdWidth | `unsigned` | |
| explicitVal | `::mlir::Attribute` | |
| implicitVal | `::mlir::Attribute` | |
| dimSlices | `::llvm::ArrayRef<::mlir::sparse_tensor::SparseTensorDimSliceAttr>` | per dimension slice metadata |
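The integer-width and value parameters also appear in the custom attribute syntax. As a hedged sketch (the name `#CSR_unit` and the chosen widths are illustrative, not taken from this page), an encoding that narrows the position and coordinate bitwidths and pins the explicit and implicit values might look like:

```mlir
// A minimal sketch (illustrative): CSR storage using 32-bit positions
// and 8-bit coordinates, where every stored entry is declared to be
// 1.0 and every missing entry is the implicit 0.0.
#CSR_unit = #sparse_tensor.encoding<{
  map = (i, j) -> (i : dense, j : compressed),
  posWidth = 32,
  crdWidth = 8,
  explicitVal = 1.0 : f64,
  implicitVal = 0.0 : f64
}>
... tensor<?x?xf64, #CSR_unit> ...
```

Narrower `posWidth`/`crdWidth` settings trade the default `index` width for a smaller memory footprint, at the cost of limiting how large the stored tensor may grow.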
### SparseTensorSortKindAttr

sparse tensor sort algorithm

Syntax:

```
#sparse_tensor.SparseTensorSortAlgorithm<
  ::mlir::sparse_tensor::SparseTensorSortKind   # value
>
```

Enum cases:

* hybrid_quick_sort (`HybridQuickSort`)
* insertion_sort_stable (`InsertionSortStable`)
* quick_sort (`QuickSort`)
* heap_sort (`HeapSort`)

#### Parameters:

| Parameter | C++ type | Description |
| :-------: | :------: | ----------- |
| value | `::mlir::sparse_tensor::SparseTensorSortKind` | an enum of type SparseTensorSortKind |
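The attribute selects the algorithm used by the dialect's buffer-sorting operation. As a hedged sketch (assuming the `sparse_tensor.sort` operation with its `perm_map`/`ny` attributes; the buffer names are illustrative), one of the enum cases is written directly in the operation's custom syntax:

```mlir
// A minimal sketch (names illustrative): sort the first %n composite
// keys stored consecutively in %xy, permuting the extra buffer %y1 in
// lockstep, with the hybrid quick sort strategy.
sparse_tensor.sort hybrid_quick_sort %n, %xy jointly %y1
    { perm_map = affine_map<(i, j) -> (j, i)>, ny = 1 : index }
    : memref<?xindex> jointly memref<?xf32>
```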
### StorageSpecifierKindAttr

sparse tensor storage specifier kind

Syntax:

```
#sparse_tensor.kind<
  ::mlir::sparse_tensor::StorageSpecifierKind   # value
>
```

Enum cases:

* lvl_sz (`LvlSize`)
* pos_mem_sz (`PosMemSize`)
* crd_mem_sz (`CrdMemSize`)
* val_mem_sz (`ValMemSize`)
* dim_offset (`DimOffset`)
* dim_stride (`DimStride`)

#### Parameters:

| Parameter | C++ type | Description |
| :-------: | :------: | ----------- |
| value | `::mlir::sparse_tensor::StorageSpecifierKind` | an enum of type StorageSpecifierKind |

## Types

### IterSpaceType

Syntax:

```
!sparse_tensor.iter_space<
  ::mlir::sparse_tensor::SparseTensorEncodingAttr,   # encoding
  Level,   # loLvl
  Level   # hiLvl
>
```

A sparse iteration space that represents an abstract N-D (sparse) iteration space extracted from a sparse tensor, i.e., the set of (crd_0, crd_1, ..., crd_N) coordinate tuples for every stored element (usually a nonzero) in a sparse tensor between the specified [$loLvl, $hiLvl) levels.

Examples:

```mlir
// An iteration space extracted from a CSR tensor between levels [0, 2).
!iter_space<#CSR, lvls = 0 to 2>
```

#### Parameters:

| Parameter | C++ type | Description |
| :-------: | :------: | ----------- |
| encoding | `::mlir::sparse_tensor::SparseTensorEncodingAttr` | |
| loLvl | `Level` | |
| hiLvl | `Level` | |

### IteratorType

Syntax:

```
!sparse_tensor.iterator<
  ::mlir::sparse_tensor::SparseTensorEncodingAttr,   # encoding
  Level,   # loLvl
  Level   # hiLvl
>
```

An iterator that points to the current element in the corresponding iteration space.

Examples:

```mlir
// An iterator that iterates over an iteration space of type `!iter_space<#CSR, lvls = 0 to 2>`
!iterator<#CSR, lvls = 0 to 2>
```

#### Parameters:

| Parameter | C++ type | Description |
| :-------: | :------: | ----------- |
| encoding | `::mlir::sparse_tensor::SparseTensorEncodingAttr` | |
| loLvl | `Level` | |
| hiLvl | `Level` | |
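Iteration spaces and iterators are produced and consumed by the dialect's iteration operations. As a hedged sketch (assuming the `sparse_tensor.extract_iteration_space` and `sparse_tensor.iterate` operations; the `#CSR` encoding and all value names are illustrative), extracting and walking the top level of a CSR tensor might look like:

```mlir
// A minimal sketch (names illustrative): extract the level-0 iteration
// space of a CSR tensor %sp, then iterate over it, carrying an index
// result; %crd receives the coordinates of each stored element.
%space = sparse_tensor.extract_iteration_space %sp lvls = 0
    : tensor<4x8xf32, #CSR> -> !sparse_tensor.iter_space<#CSR, lvls = 0>
%r = sparse_tensor.iterate %it in %space at(%crd) iter_args(%sum = %init)
    : !sparse_tensor.iter_space<#CSR, lvls = 0> -> index {
  // Loop body; yields the value carried to the next iteration.
  sparse_tensor.yield %sum : index
}
```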
### StorageSpecifierType

Structured metadata for sparse tensor low-level storage scheme

Syntax:

```
!sparse_tensor.storage_specifier<
  ::mlir::sparse_tensor::SparseTensorEncodingAttr   # encoding
>
```

Values with storage_specifier types represent aggregated storage scheme metadata for the given sparse tensor encoding. They currently hold a set of values for level sizes, coordinate arrays, position arrays, and the value array. Note that the type is not yet stable and is subject to change in the near future.

Examples:

```mlir
// A storage specifier that can be used to store storage scheme metadata of a CSR matrix.
!storage_specifier<#CSR>
```

#### Parameters:

| Parameter | C++ type | Description |
| :-------: | :------: | ----------- |
| encoding | `::mlir::sparse_tensor::SparseTensorEncodingAttr` | |
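Specifier values are created and queried with dedicated operations. As a hedged sketch (assuming the `sparse_tensor.storage_specifier.init` and `sparse_tensor.storage_specifier.get` operations; the `#CSR` encoding is illustrative), initializing a specifier and reading one of its fields might look like:

```mlir
// A minimal sketch (names illustrative): create an empty specifier for
// a CSR encoding, then query the size of level 0 from it.
%spec = sparse_tensor.storage_specifier.init
    : !sparse_tensor.storage_specifier<#CSR>
%sz = sparse_tensor.storage_specifier.get %spec lvl_sz at 0
    : !sparse_tensor.storage_specifier<#CSR>
```

The field names (`lvl_sz`, `pos_mem_sz`, and so on) correspond to the `StorageSpecifierKind` enum cases listed below.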
## Enums

### CrdTransDirectionKind

sparse tensor coordinate translation direction

#### Cases:

| Symbol | Value | String |
| :----: | :---: | ------ |
| dim2lvl | `0` | dim_to_lvl |
| lvl2dim | `1` | lvl_to_dim |
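This enum picks the direction in which coordinates are converted between the dimension space and the level space of an encoding. As a hedged sketch (assuming the `sparse_tensor.crd_translate` operation; the value names are illustrative), translating a pair of dimension coordinates into the four level coordinates of a block-sparse format might look like:

```mlir
// A minimal sketch (names illustrative): translate dimension
// coordinates (%d0, %d1) into the four level coordinates of the
// 2x3-blocked #BSR_explicit encoding defined above.
%l0, %l1, %l2, %l3 = sparse_tensor.crd_translate dim_to_lvl [%d0, %d1] as #BSR_explicit
    : index, index, index, index
```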
### SparseTensorSortKind

sparse tensor sort algorithm

#### Cases:

| Symbol | Value | String |
| :----: | :---: | ------ |
| HybridQuickSort | `0` | hybrid_quick_sort |
| InsertionSortStable | `1` | insertion_sort_stable |
| QuickSort | `2` | quick_sort |
| HeapSort | `3` | heap_sort |

### StorageSpecifierKind

sparse tensor storage specifier kind

#### Cases:

| Symbol | Value | String |
| :----: | :---: | ------ |
| LvlSize | `0` | lvl_sz |
| PosMemSize | `1` | pos_mem_sz |
| CrdMemSize | `2` | crd_mem_sz |
| ValMemSize | `3` | val_mem_sz |
| DimOffset | `4` | dim_offset |
| DimStride | `5` | dim_stride |