<!doctype html><html lang=en-us><head><meta charset=utf-8><meta http-equiv=x-ua-compatible content="IE=edge"><meta name=viewport content="width=device-width,initial-scale=1,maximum-scale=1,user-scalable=no"><title>Transform Dialect - MLIR</title><meta name=description content="Multi-Level IR Compiler Framework"><meta name=generator content="Hugo 0.119.0"><link href=https://mlir.llvm.org/index.xml rel=alternate type=application/rss+xml><link rel=canonical href=https://mlir.llvm.org/docs/Dialects/Transform/><link rel=stylesheet href=https://mlir.llvm.org/css/theme.css><script src=https://use.fontawesome.com/releases/v5.0.6/js/all.js></script> <link rel=stylesheet href=https://mlir.llvm.org/css/chroma.min.css><script src=https://cdn.jsdelivr.net/npm/jquery@3.3.1/dist/jquery.min.js></script> <script src=https://cdn.jsdelivr.net/npm/jquery.easing@1.4.1/jquery.easing.min.js></script> <script src=https://mlir.llvm.org/js/bundle.js></script> <script type=text/javascript src="https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.1/MathJax.js?config=TeX-AMS-MML_HTMLorMML"></script> <script type=text/x-mathjax-config> MathJax.Hub.Config({ tex2jax: { inlineMath: [['$', '$'] ], displayMath: [ ['$$','$$'], ["\\[","\\]"] ] } }); </script><link rel=apple-touch-icon sizes=180x180 href="/apple-touch-icon.png?v=1"><link rel=icon type=image/png sizes=32x32 href="/favicon-32x32.png?v=1"><link rel=icon type=image/png sizes=16x16 href="/favicon-16x16.png?v=1"><link rel=manifest href="/site.webmanifest?v=1"><link rel=mask-icon href="/safari-pinned-tab.svg?v=1" color=#3775e0><link rel="shortcut icon" href="/favicon.ico?v=1"><meta name=msapplication-TileColor content="#2d89ef"><meta name=theme-color content="#ffffff"><link rel=icon href=/favicon.svg type=image/svg+xml sizes=any><style>:root{}</style></head><body><div class=container><header><h1><div><img src=https://mlir.llvm.org//mlir-logo.png width=40px align=absmiddle> MLIR</div></h1><p class=description>Multi-Level IR Compiler 
Framework</p></header><div class=global-menu><nav><ul><li class=parent><a href>Community<i class="fas fa-angle-right"></i></a><ul class=sub-menu><li class=child><a href=https://llvm.discourse.group/c/mlir/31>Forums</a></li><li class=child><a href=https://discord.gg/xS7Z362>Chat</a></li></ul></li><li><a href=/getting_started/Debugging/>Debugging Tips</a></li><li><a href=/getting_started/Faq/>FAQ</a></li><li class=parent><a href=https://github.com/llvm/llvm-project/tree/main/mlir>Source<i class="fas fa-angle-right"></i></a><ul class=sub-menu><li class=child><a href=/doxygen/>Doxygen</a></li><li class=child><a href=https://github.com/llvm/llvm-project/tree/main/mlir>GitHub</a></li></ul></li><li><a href="https://bugs.llvm.org/buglist.cgi?bug_status=__open__&list_id=177877&order=changeddate%20DESC%2Cpriority%2Cbug_severity&product=MLIR&query_format=specific">Bugs</a></li><li><a href=https://github.com/llvm/mlir-www/tree/main/website/static/LogoAssets>Logo Assets</a></li><li><a href=https://www.youtube.com/MLIRCompiler>Youtube Channel</a></li></ul></nav></div><div class=content-container><main><h1>Transform Dialect</h1><p>Fine-grain transformation control dialect. 
See <a href=/docs/Tutorials/transform/>tutorial</a> for more introductory information.</p><p><nav id=TableOfContents><ul><li><a href=#overview>Overview</a></li><li><a href=#dialect-extension-mechanism>Dialect Extension Mechanism</a></li><li><a href=#side-effects>Side Effects</a></li><li><a href=#execution-model>Execution Model</a></li><li><a href=#handle-invalidation>Handle Invalidation</a></li><li><a href=#intended-use-and-integrations>Intended Use and Integrations</a></li><li><a href=#effects-on-the-infrastructure>Effects on the Infrastructure</a></li><li><a href=#type-definitions>Type Definitions</a><ul><li><a href=#affinemapparamtype>AffineMapParamType</a></li><li><a href=#anyoptype>AnyOpType</a></li><li><a href=#anyparamtype>AnyParamType</a></li><li><a href=#anyvaluetype>AnyValueType</a></li><li><a href=#operationtype>OperationType</a></li><li><a href=#paramtype>ParamType</a></li><li><a href=#typeparamtype>TypeParamType</a></li></ul></li><li><a href=#core-operations>Core Operations</a><ul><li><a href=#transformalternatives-transformalternativesop><code>transform.alternatives</code> (transform::AlternativesOp)</a></li><li><a href=#transformannotate-transformannotateop><code>transform.annotate</code> (transform::AnnotateOp)</a></li><li><a href=#transformapply_patternscanonicalization-transformapplycanonicalizationpatternsop><code>transform.apply_patterns.canonicalization</code> (transform::ApplyCanonicalizationPatternsOp)</a></li><li><a href=#transformapply_cse-transformapplycommonsubexpressioneliminationop><code>transform.apply_cse</code> (transform::ApplyCommonSubexpressionEliminationOp)</a></li><li><a href=#transformapply_conversion_patterns-transformapplyconversionpatternsop><code>transform.apply_conversion_patterns</code> (transform::ApplyConversionPatternsOp)</a></li><li><a href=#transformapply_dce-transformapplydeadcodeeliminationop><code>transform.apply_dce</code> (transform::ApplyDeadCodeEliminationOp)</a></li><li><a 
href=#transformapply_licm-transformapplyloopinvariantcodemotionop><code>transform.apply_licm</code> (transform::ApplyLoopInvariantCodeMotionOp)</a></li><li><a href=#transformapply_patterns-transformapplypatternsop><code>transform.apply_patterns</code> (transform::ApplyPatternsOp)</a></li><li><a href=#transformapply_registered_pass-transformapplyregisteredpassop><code>transform.apply_registered_pass</code> (transform::ApplyRegisteredPassOp)</a></li><li><a href=#transformapply_conversion_patternsdialect_to_llvm-transformapplytollvmconversionpatternsop><code>transform.apply_conversion_patterns.dialect_to_llvm</code> (transform::ApplyToLLVMConversionPatternsOp)</a></li><li><a href=#transformcast-transformcastop><code>transform.cast</code> (transform::CastOp)</a></li><li><a href=#transformcollect_matching-transformcollectmatchingop><code>transform.collect_matching</code> (transform::CollectMatchingOp)</a></li><li><a href=#transformforeach_match-transformforeachmatchop><code>transform.foreach_match</code> (transform::ForeachMatchOp)</a></li><li><a href=#transformforeach-transformforeachop><code>transform.foreach</code> (transform::ForeachOp)</a></li><li><a href=#transformget_consumers_of_result-transformgetconsumersofresult><code>transform.get_consumers_of_result</code> (transform::GetConsumersOfResult)</a></li><li><a href=#transformget_defining_op-transformgetdefiningop><code>transform.get_defining_op</code> (transform::GetDefiningOp)</a></li><li><a href=#transformget_operand-transformgetoperandop><code>transform.get_operand</code> (transform::GetOperandOp)</a></li><li><a href=#transformget_parent_op-transformgetparentop><code>transform.get_parent_op</code> (transform::GetParentOp)</a></li><li><a href=#transformget_producer_of_operand-transformgetproducerofoperand><code>transform.get_producer_of_operand</code> (transform::GetProducerOfOperand)</a></li><li><a href=#transformget_result-transformgetresultop><code>transform.get_result</code> 
(transform::GetResultOp)</a></li><li><a href=#transformget_type-transformgettypeop><code>transform.get_type</code> (transform::GetTypeOp)</a></li><li><a href=#transforminclude-transformincludeop><code>transform.include</code> (transform::IncludeOp)</a></li><li><a href=#transformmatchoperation_empty-transformmatchoperationemptyop><code>transform.match.operation_empty</code> (transform::MatchOperationEmptyOp)</a></li><li><a href=#transformmatchoperation_name-transformmatchoperationnameop><code>transform.match.operation_name</code> (transform::MatchOperationNameOp)</a></li><li><a href=#transformmatchparamcmpi-transformmatchparamcmpiop><code>transform.match.param.cmpi</code> (transform::MatchParamCmpIOp)</a></li><li><a href=#transformmerge_handles-transformmergehandlesop><code>transform.merge_handles</code> (transform::MergeHandlesOp)</a></li><li><a href=#transformnamed_sequence-transformnamedsequenceop><code>transform.named_sequence</code> (transform::NamedSequenceOp)</a></li><li><a href=#transformnum_associations-transformnumassociationsop><code>transform.num_associations</code> (transform::NumAssociationsOp)</a></li><li><a href=#transformparamconstant-transformparamconstantop><code>transform.param.constant</code> (transform::ParamConstantOp)</a></li><li><a href=#transformprint-transformprintop><code>transform.print</code> (transform::PrintOp)</a></li><li><a href=#transformreplicate-transformreplicateop><code>transform.replicate</code> (transform::ReplicateOp)</a></li><li><a href=#transformselect-transformselectop><code>transform.select</code> (transform::SelectOp)</a></li><li><a href=#transformsequence-transformsequenceop><code>transform.sequence</code> (transform::SequenceOp)</a></li><li><a href=#transformsplit_handle-transformsplithandleop><code>transform.split_handle</code> (transform::SplitHandleOp)</a></li><li><a href=#transformverify-transformverifyop><code>transform.verify</code> (transform::VerifyOp)</a></li><li><a 
href=#transformyield-transformyieldop><code>transform.yield</code> (transform::YieldOp)</a></li></ul></li><li><a href=#affine-transform-operations>Affine Transform Operations</a><ul><li><a href=#transformaffinesimplify_bounded_affine_ops-transformsimplifyboundedaffineopsop><code>transform.affine.simplify_bounded_affine_ops</code> (transform::SimplifyBoundedAffineOpsOp)</a></li></ul></li><li><a href=#bufferization-transform-operations>Bufferization Transform Operations</a><ul><li><a href=#transformbufferizationbuffer_loop_hoisting-transformbufferloophoistingop><code>transform.bufferization.buffer_loop_hoisting</code> (transform::BufferLoopHoistingOp)</a></li><li><a href=#transformbufferizationeliminate_empty_tensors-transformeliminateemptytensorsop><code>transform.bufferization.eliminate_empty_tensors</code> (transform::EliminateEmptyTensorsOp)</a></li><li><a href=#transformbufferizationempty_tensor_to_alloc_tensor-transformemptytensortoalloctensorop><code>transform.bufferization.empty_tensor_to_alloc_tensor</code> (transform::EmptyTensorToAllocTensorOp)</a></li><li><a href=#transformbufferizationone_shot_bufferize-transformoneshotbufferizeop><code>transform.bufferization.one_shot_bufferize</code> (transform::OneShotBufferizeOp)</a></li></ul></li><li><a href=#debug-transform-operations>Debug Transform Operations</a><ul><li><a href=#transformdebugemit_param_as_remark-transformdebugemitparamasremarkop><code>transform.debug.emit_param_as_remark</code> (transform::DebugEmitParamAsRemarkOp)</a></li><li><a href=#transformdebugemit_remark_at-transformdebugemitremarkatop><code>transform.debug.emit_remark_at</code> (transform::DebugEmitRemarkAtOp)</a></li></ul></li><li><a href=#irdl-extension-transform-operations>IRDL (extension) Transform Operations</a><ul><li><a href=#transformirdlcollect_matching-transformirdlcollectmatchingop><code>transform.irdl.collect_matching</code> (transform::IRDLCollectMatchingOp)</a></li></ul></li><li><a href=#func-transform-operations>Func 
Transform Operations</a><ul><li><a href=#transformapply_conversion_patternsfuncfunc_to_llvm-transformapplyfunctollvmconversionpatternsop><code>transform.apply_conversion_patterns.func.func_to_llvm</code> (transform::ApplyFuncToLLVMConversionPatternsOp)</a></li><li><a href=#transformfunccast_and_call-transformcastandcallop><code>transform.func.cast_and_call</code> (transform::CastAndCallOp)</a></li></ul></li><li><a href=#gpu-transform-operations>GPU Transform Operations</a><ul><li><a href=#transformapply_patternsgpugpu_rewrite_patterns-transformapplygpurewritepatternsop><code>transform.apply_patterns.gpu.gpu_rewrite_patterns</code> (transform::ApplyGPURewritePatternsOp)</a></li><li><a href=#transformapply_conversion_patternsgpugpu_subgroup_reduce_to_nvvm-transformapplygpusubgroupreducetonvvmconversionpatternsop><code>transform.apply_conversion_patterns.gpu.gpu_subgroup_reduce_to_nvvm</code> (transform::ApplyGPUSubgroupReduceToNVVMConversionPatternsOp)</a></li><li><a href=#transformapply_conversion_patternsgpugpu_to_nvvm-transformapplygputonvvmconversionpatternsop><code>transform.apply_conversion_patterns.gpu.gpu_to_nvvm</code> (transform::ApplyGPUToNVVMConversionPatternsOp)</a></li><li><a href=#transformapply_conversion_patternsgpugpu_wmma_to_nvvm-transformapplygpuwwmatonvvmconversionpatternsop><code>transform.apply_conversion_patterns.gpu.gpu_wmma_to_nvvm</code> (transform::ApplyGPUWwmaToNVVMConversionPatternsOp)</a></li><li><a href=#transformapply_patternsgpuunroll_vectors_subgroup_mma-transformapplyunrollvectorssubgroupmmaop><code>transform.apply_patterns.gpu.unroll_vectors_subgroup_mma</code> (transform::ApplyUnrollVectorsSubgroupMmaOp)</a></li><li><a href=#transformapply_patternsgpueliminate_barriers-transformeliminatebarriersop><code>transform.apply_patterns.gpu.eliminate_barriers</code> (transform::EliminateBarriersOp)</a></li><li><a href=#transformgpumap_forall_to_blocks-transformmapforalltoblocks><code>transform.gpu.map_forall_to_blocks</code> 
(transform::MapForallToBlocks)</a></li><li><a href=#transformgpumap_nested_forall_to_threads-transformmapnestedforalltothreads><code>transform.gpu.map_nested_forall_to_threads</code> (transform::MapNestedForallToThreads)</a></li></ul></li><li><a href=#loop-extension-transform-operations>Loop (extension) Transform Operations</a><ul><li><a href=#transformloophoist_loop_invariant_subsets-transformhoistloopinvariantsubsetsop><code>transform.loop.hoist_loop_invariant_subsets</code> (transform::HoistLoopInvariantSubsetsOp)</a></li></ul></li><li><a href=#loop-scf-transform-operations>Loop (SCF) Transform Operations</a><ul><li><a href=#transformapply_patternsscffor_loop_canonicalization-transformapplyforloopcanonicalizationpatternsop><code>transform.apply_patterns.scf.for_loop_canonicalization</code> (transform::ApplyForLoopCanonicalizationPatternsOp)</a></li><li><a href=#transformapply_conversion_patternsscfstructural_conversions-transformapplyscfstructuralconversionpatternsop><code>transform.apply_conversion_patterns.scf.structural_conversions</code> (transform::ApplySCFStructuralConversionPatternsOp)</a></li><li><a href=#transformapply_conversion_patternsscfscf_to_control_flow-transformapplyscftocontrolflowpatternsop><code>transform.apply_conversion_patterns.scf.scf_to_control_flow</code> (transform::ApplySCFToControlFlowPatternsOp)</a></li><li><a href=#transformloopforall_to_for-transformforalltoforop><code>transform.loop.forall_to_for</code> (transform::ForallToForOp)</a></li><li><a href=#transformloopforall_to_parallel-transformforalltoparallelop><code>transform.loop.forall_to_parallel</code> (transform::ForallToParallelOp)</a></li><li><a href=#transformloopcoalesce-transformloopcoalesceop><code>transform.loop.coalesce</code> (transform::LoopCoalesceOp)</a></li><li><a href=#transformloopfuse_sibling-transformloopfusesiblingop><code>transform.loop.fuse_sibling</code> (transform::LoopFuseSiblingOp)</a></li><li><a 
href=#transformloopoutline-transformloopoutlineop><code>transform.loop.outline</code> (transform::LoopOutlineOp)</a></li><li><a href=#transformlooppeel-transformlooppeelop><code>transform.loop.peel</code> (transform::LoopPeelOp)</a></li><li><a href=#transformlooppipeline-transformlooppipelineop><code>transform.loop.pipeline</code> (transform::LoopPipelineOp)</a></li><li><a href=#transformlooppromote_if_one_iteration-transformlooppromoteifoneiterationop><code>transform.loop.promote_if_one_iteration</code> (transform::LoopPromoteIfOneIterationOp)</a></li><li><a href=#transformloopunroll_and_jam-transformloopunrollandjamop><code>transform.loop.unroll_and_jam</code> (transform::LoopUnrollAndJamOp)</a></li><li><a href=#transformloopunroll-transformloopunrollop><code>transform.loop.unroll</code> (transform::LoopUnrollOp)</a></li><li><a href=#transformscftake_assumed_branch-transformtakeassumedbranchop><code>transform.scf.take_assumed_branch</code> (transform::TakeAssumedBranchOp)</a></li></ul></li><li><a href=#memref-transform-operations>MemRef Transform Operations</a><ul><li><a href=#transformapply_patternsmemrefalloc_to_alloca-transformapplyalloctoallocaop><code>transform.apply_patterns.memref.alloc_to_alloca</code> (transform::ApplyAllocToAllocaOp)</a></li><li><a href=#transformapply_patternsmemrefexpand_ops-transformapplyexpandopspatternsop><code>transform.apply_patterns.memref.expand_ops</code> (transform::ApplyExpandOpsPatternsOp)</a></li><li><a href=#transformapply_patternsmemrefexpand_strided_metadata-transformapplyexpandstridedmetadatapatternsop><code>transform.apply_patterns.memref.expand_strided_metadata</code> (transform::ApplyExpandStridedMetadataPatternsOp)</a></li><li><a href=#transformapply_patternsmemrefextract_address_computations-transformapplyextractaddresscomputationspatternsop><code>transform.apply_patterns.memref.extract_address_computations</code> (transform::ApplyExtractAddressComputationsPatternsOp)</a></li><li><a 
href=#transformapply_patternsmemreffold_memref_alias_ops-transformapplyfoldmemrefaliasopspatternsop><code>transform.apply_patterns.memref.fold_memref_alias_ops</code> (transform::ApplyFoldMemrefAliasOpsPatternsOp)</a></li><li><a href=#transformapply_patternsmemrefresolve_ranked_shaped_type_result_dims-transformapplyresolverankedshapedtyperesultdimspatternsop><code>transform.apply_patterns.memref.resolve_ranked_shaped_type_result_dims</code> (transform::ApplyResolveRankedShapedTypeResultDimsPatternsOp)</a></li><li><a href=#transformmemrefalloca_to_global-transformmemrefallocatoglobalop><code>transform.memref.alloca_to_global</code> (transform::MemRefAllocaToGlobalOp)</a></li><li><a href=#transformmemreferase_dead_alloc_and_stores-transformmemreferasedeadallocandstoresop><code>transform.memref.erase_dead_alloc_and_stores</code> (transform::MemRefEraseDeadAllocAndStoresOp)</a></li><li><a href=#transformmemrefmake_loop_independent-transformmemrefmakeloopindependentop><code>transform.memref.make_loop_independent</code> (transform::MemRefMakeLoopIndependentOp)</a></li><li><a href=#transformmemrefmultibuffer-transformmemrefmultibufferop><code>transform.memref.multibuffer</code> (transform::MemRefMultiBufferOp)</a></li><li><a href=#transformapply_conversion_patternsmemrefmemref_to_llvm_type_converter-transformmemreftollvmtypeconverterop><code>transform.apply_conversion_patterns.memref.memref_to_llvm_type_converter</code> (transform::MemrefToLLVMTypeConverterOp)</a></li></ul></li><li><a href=#pdl-extension-transform-operations>PDL (extension) Transform Operations</a><ul><li><a href=#transformpdl_match-transformpdlmatchop><code>transform.pdl_match</code> (transform::PDLMatchOp)</a></li><li><a href=#transformwith_pdl_patterns-transformwithpdlpatternsop><code>transform.with_pdl_patterns</code> (transform::WithPDLPatternsOp)</a></li></ul></li><li><a href=#structured-linalg-match-operations>Structured (Linalg) Match Operations</a><ul><li><a 
href=#transformmatchstructuredbody-transformmatchstructuredbodyop><code>transform.match.structured.body</code> (transform::MatchStructuredBodyOp)</a></li><li><a href=#transformmatchstructuredclassify_contraction_dims-transformmatchstructuredclassifycontractiondimsop><code>transform.match.structured.classify_contraction_dims</code> (transform::MatchStructuredClassifyContractionDimsOp)</a></li><li><a href=#transformmatchstructuredclassify_convolution_dims-transformmatchstructuredclassifyconvolutiondimsop><code>transform.match.structured.classify_convolution_dims</code> (transform::MatchStructuredClassifyConvolutionDimsOp)</a></li><li><a href=#transformmatchstructureddim-transformmatchstructureddimop><code>transform.match.structured.dim</code> (transform::MatchStructuredDimOp)</a></li><li><a href=#transformmatchstructuredelemental_bitwidth-transformmatchstructuredelementalbitwidthop><code>transform.match.structured.elemental_bitwidth</code> (transform::MatchStructuredElementalBitwidthOp)</a></li><li><a href=#transformmatchstructuredinit-transformmatchstructuredinitop><code>transform.match.structured.init</code> (transform::MatchStructuredInitOp)</a></li><li><a href=#transformmatchstructuredinput-transformmatchstructuredinputop><code>transform.match.structured.input</code> (transform::MatchStructuredInputOp)</a></li><li><a href=#transformmatchstructurednum_inits-transformmatchstructurednuminitsop><code>transform.match.structured.num_inits</code> (transform::MatchStructuredNumInitsOp)</a></li><li><a href=#transformmatchstructurednum_inputs-transformmatchstructurednuminputsop><code>transform.match.structured.num_inputs</code> (transform::MatchStructuredNumInputsOp)</a></li><li><a href=#transformmatchstructured-transformmatchstructuredop><code>transform.match.structured</code> (transform::MatchStructuredOp)</a></li><li><a href=#transformmatchstructuredrank-transformmatchstructuredrankop><code>transform.match.structured.rank</code> 
(transform::MatchStructuredRankOp)</a></li><li><a href=#transformmatchstructuredresult-transformmatchstructuredresultop><code>transform.match.structured.result</code> (transform::MatchStructuredResultOp)</a></li><li><a href=#transformmatchstructuredyield-transformmatchstructuredyieldop><code>transform.match.structured.yield</code> (transform::MatchStructuredYieldOp)</a></li></ul></li><li><a href=#structured-linalg-transform-operations>Structured (Linalg) Transform Operations</a><ul><li><a href=#transformapply_patternslinalgdecompose_pack_unpack-transformapplydecomposetensorpackunpackpatternsop><code>transform.apply_patterns.linalg.decompose_pack_unpack</code> (transform::ApplyDecomposeTensorPackUnpackPatternsOp)</a></li><li><a href=#transformapply_patternslinalgerase_unnecessary_inputs-transformapplyeraseunnecessaryinputspatternsop><code>transform.apply_patterns.linalg.erase_unnecessary_inputs</code> (transform::ApplyEraseUnnecessaryInputsPatternsOp)</a></li><li><a href=#transformapply_patternslinalgfold_add_into_dest-transformapplyfoldaddintodestpatternsop><code>transform.apply_patterns.linalg.fold_add_into_dest</code> (transform::ApplyFoldAddIntoDestPatternsOp)</a></li><li><a href=#transformapply_patternslinalgfold_unit_extent_dims_via_reshapes-transformapplyfoldunitextentdimsviareshapespatternsop><code>transform.apply_patterns.linalg.fold_unit_extent_dims_via_reshapes</code> (transform::ApplyFoldUnitExtentDimsViaReshapesPatternsOp)</a></li><li><a href=#transformapply_patternslinalgfold_unit_extent_dims_via_slices-transformapplyfoldunitextentdimsviaslicespatternsop><code>transform.apply_patterns.linalg.fold_unit_extent_dims_via_slices</code> (transform::ApplyFoldUnitExtentDimsViaSlicesPatternsOp)</a></li><li><a href=#transformapply_patternslinalgpad_vectorization-transformapplypadvectorizationpatternsop><code>transform.apply_patterns.linalg.pad_vectorization</code> (transform::ApplyPadVectorizationPatternsOp)</a></li><li><a 
href=#transformapply_patternslinalgtiling_canonicalization-transformapplytilingcanonicalizationpatternsop><code>transform.apply_patterns.linalg.tiling_canonicalization</code> (transform::ApplyTilingCanonicalizationPatternsOp)</a></li><li><a href=#transformstructuredbufferize_to_allocation-transformbufferizetoallocationop><code>transform.structured.bufferize_to_allocation</code> (transform::BufferizeToAllocationOp)</a></li><li><a href=#transformstructuredcontinuous_tile_sizes-transformcontinuoustilesizesop><code>transform.structured.continuous_tile_sizes</code> (transform::ContinuousTileSizesOp)</a></li><li><a href=#transformstructuredconvert_conv2d_to_img2col-transformconvertconv2dtoimg2colop><code>transform.structured.convert_conv2d_to_img2col</code> (transform::ConvertConv2DToImg2ColOp)</a></li><li><a href=#transformstructuredconvert_to_loops-transformconverttoloopsop><code>transform.structured.convert_to_loops</code> (transform::ConvertToLoopsOp)</a></li><li><a href=#transformstructureddecompose_interface-transformdecomposeinterfaceop><code>transform.structured.decompose_interface</code> (transform::DecomposeInterfaceOp)</a></li><li><a href=#transformstructureddecompose-transformdecomposeop><code>transform.structured.decompose</code> (transform::DecomposeOp)</a></li><li><a href=#transformstructureddecompose_winograd_op-transformdecomposewinogradop><code>transform.structured.decompose_winograd_op</code> (transform::DecomposeWinogradOp)</a></li><li><a href=#transformstructuredeliminate_empty_tensors-transformeliminatelinalgopanchoredemptytensorsop><code>transform.structured.eliminate_empty_tensors</code> (transform::EliminateLinalgOpAnchoredEmptyTensorsOp)</a></li><li><a href=#transformstructuredflatten_elementwise-transformflattenelementwiselinalgop><code>transform.structured.flatten_elementwise</code> (transform::FlattenElementwiseLinalgOp)</a></li><li><a 
href=#transformstructuredfuse_into_containing_op-transformfuseintocontainingop><code>transform.structured.fuse_into_containing_op</code> (transform::FuseIntoContainingOp)</a></li><li><a href=#transformstructuredfuse-transformfuseop><code>transform.structured.fuse</code> (transform::FuseOp)</a></li><li><a href=#transformstructuredgeneralize-transformgeneralizeop><code>transform.structured.generalize</code> (transform::GeneralizeOp)</a></li><li><a href=#transformstructuredhoist_padbuild_packing_loop_nest-transformhoistpadbuildpackingloopnestop><code>transform.structured.hoist_pad.build_packing_loop_nest</code> (transform::HoistPadBuildPackingLoopNestOp)</a></li><li><a href=#transformstructuredhoist_pad-transformhoistpadop><code>transform.structured.hoist_pad</code> (transform::HoistPadOp)</a></li><li><a href=#transformstructuredhoist_redundant_vector_broadcasts-transformhoistredundantvectorbroadcastsop><code>transform.structured.hoist_redundant_vector_broadcasts</code> (transform::HoistRedundantVectorBroadcastsOp)</a></li><li><a href=#transformstructuredhoist_redundant_vector_transfers-transformhoistredundantvectortransfersop><code>transform.structured.hoist_redundant_vector_transfers</code> (transform::HoistRedundantVectorTransfersOp)</a></li><li><a href=#transformstructuredinsert_slice_to_copy-transforminsertslicetocopyop><code>transform.structured.insert_slice_to_copy</code> (transform::InsertSliceToCopyOp)</a></li><li><a href=#transformstructuredinterchange-transforminterchangeop><code>transform.structured.interchange</code> (transform::InterchangeOp)</a></li><li><a href=#transformstructuredlower_pack-transformlowerpackop><code>transform.structured.lower_pack</code> (transform::LowerPackOp)</a></li><li><a href=#transformstructuredlower_unpack-transformlowerunpackop><code>transform.structured.lower_unpack</code> (transform::LowerUnPackOp)</a></li><li><a 
href=#transformstructuredgpumap_copy_to_threads-transformmapcopytothreadsop><code>transform.structured.gpu.map_copy_to_threads</code> (transform::MapCopyToThreadsOp)</a></li><li><a href=#transformstructuredmatch-transformmatchop><code>transform.structured.match</code> (transform::MatchOp)</a></li><li><a href=#transformstructuredmultitile_sizes-transformmultitilesizesop><code>transform.structured.multitile_sizes</code> (transform::MultiTileSizesOp)</a></li><li><a href=#transformstructuredpack_greedily-transformpackgreedilyop><code>transform.structured.pack_greedily</code> (transform::PackGreedilyOp)</a></li><li><a href=#transformstructuredpack-transformpackop><code>transform.structured.pack</code> (transform::PackOp)</a></li><li><a href=#transformstructuredpack_transpose-transformpacktransposeop><code>transform.structured.pack_transpose</code> (transform::PackTransposeOp)</a></li><li><a href=#transformstructuredpad-transformpadop><code>transform.structured.pad</code> (transform::PadOp)</a></li><li><a href=#transformstructuredpromote-transformpromoteop><code>transform.structured.promote</code> (transform::PromoteOp)</a></li><li><a href=#transformstructuredreplace-transformreplaceop><code>transform.structured.replace</code> (transform::ReplaceOp)</a></li><li><a href=#transformstructuredrewrite_in_destination_passing_style-transformrewriteindestinationpassingstyleop><code>transform.structured.rewrite_in_destination_passing_style</code> (transform::RewriteInDestinationPassingStyleOp)</a></li><li><a href=#transformstructuredscalarize-transformscalarizeop><code>transform.structured.scalarize</code> (transform::ScalarizeOp)</a></li><li><a href=#transformstructuredspecialize-transformspecializeop><code>transform.structured.specialize</code> (transform::SpecializeOp)</a></li><li><a href=#transformstructuredsplit-transformsplitop><code>transform.structured.split</code> (transform::SplitOp)</a></li><li><a 
href=#transformstructuredsplit_reduction-transformsplitreductionop><code>transform.structured.split_reduction</code> (transform::SplitReductionOp)</a></li><li><a href=#transformstructuredtile_reduction_using_for-transformtilereductionusingforop><code>transform.structured.tile_reduction_using_for</code> (transform::TileReductionUsingForOp)</a></li><li><a href=#transformstructuredtile_reduction_using_forall-transformtilereductionusingforallop><code>transform.structured.tile_reduction_using_forall</code> (transform::TileReductionUsingForallOp)</a></li><li><a href=#transformstructuredtile_using_for-transformtileusingforop><code>transform.structured.tile_using_for</code> (transform::TileUsingForOp)</a></li><li><a href=#transformstructuredtile_using_forall-transformtileusingforallop><code>transform.structured.tile_using_forall</code> (transform::TileUsingForallOp)</a></li><li><a href=#transformstructuredtranspose_conv2d-transformtransposeconv2dop><code>transform.structured.transpose_conv2d</code> (transform::TransposeConv2DOp)</a></li><li><a href=#transformstructuredtranspose_matmul-transformtransposematmulop><code>transform.structured.transpose_matmul</code> (transform::TransposeMatmulOp)</a></li><li><a href=#transformstructuredvectorize_children_and_apply_patterns-transformvectorizechildrenandapplypatternsop><code>transform.structured.vectorize_children_and_apply_patterns</code> (transform::VectorizeChildrenAndApplyPatternsOp)</a></li><li><a href=#transformstructuredvectorize-transformvectorizeop><code>transform.structured.vectorize</code> (transform::VectorizeOp)</a></li><li><a href=#transformstructuredwinograd_conv2d-transformwinogradconv2dop><code>transform.structured.winograd_conv2d</code> (transform::WinogradConv2DOp)</a></li></ul></li><li><a href=#tensor-transform-operations>Tensor Transform Operations</a><ul><li><a 
href=#transformapply_patternstensordecompose_concat-transformapplydecomposetensorconcatpatternsop><code>transform.apply_patterns.tensor.decompose_concat</code> (transform::ApplyDecomposeTensorConcatPatternsOp)</a></li><li><a href=#transformapply_patternstensordrop_redundant_insert_slice_rank_expansion-transformapplydropredundantinsertslicerankexpansionpatternsop><code>transform.apply_patterns.tensor.drop_redundant_insert_slice_rank_expansion</code> (transform::ApplyDropRedundantInsertSliceRankExpansionPatternsOp)</a></li><li><a href=#transformapply_patternstensorfold_into_pack_and_unpack-transformapplyfoldintopackandunpackpatternsop><code>transform.apply_patterns.tensor.fold_into_pack_and_unpack</code> (transform::ApplyFoldIntoPackAndUnpackPatternsOp)</a></li><li><a href=#transformapply_patternstensorfold_tensor_empty-transformapplyfoldtensoremptypatternsop><code>transform.apply_patterns.tensor.fold_tensor_empty</code> (transform::ApplyFoldTensorEmptyPatternsOp)</a></li><li><a href=#transformapply_patternstensorfold_tensor_subset_ops_into_vector_transfers-transformapplyfoldtensorsubsetopsintovectortransferspatternsop><code>transform.apply_patterns.tensor.fold_tensor_subset_ops_into_vector_transfers</code> (transform::ApplyFoldTensorSubsetOpsIntoVectorTransfersPatternsOp)</a></li><li><a href=#transformapply_patternstensorfold_tensor_subset_ops-transformapplyfoldtensorsubsetopspatternsop><code>transform.apply_patterns.tensor.fold_tensor_subset_ops</code> (transform::ApplyFoldTensorSubsetOpsPatternsOp)</a></li><li><a href=#transformapply_patternstensormerge_consecutive_insert_extract_slice-transformapplymergeconsecutiveinsertextractslicepatternsop><code>transform.apply_patterns.tensor.merge_consecutive_insert_extract_slice</code> (transform::ApplyMergeConsecutiveInsertExtractSlicePatternsOp)</a></li><li><a 
href=#transformapply_patternstensorreassociative_reshape_folding-transformapplyreassociativereshapefoldingpatternsop><code>transform.apply_patterns.tensor.reassociative_reshape_folding</code> (transform::ApplyReassociativeReshapeFoldingPatternsOp)</a></li><li><a href=#transformapply_patternstensorrewrite_as_constant-transformapplyrewritetensoropsasconstantpatternsop><code>transform.apply_patterns.tensor.rewrite_as_constant</code> (transform::ApplyRewriteTensorOpsAsConstantPatternsOp)</a></li><li><a href=#transformtensormake_loop_independent-transformmakeloopindependentop><code>transform.tensor.make_loop_independent</code> (transform::MakeLoopIndependentOp)</a></li><li><a href=#transformtype_conversiontensorcast_shape_dynamic_dims-transformtypeconversioncastshapedynamicdimsop><code>transform.type_conversion.tensor.cast_shape_dynamic_dims</code> (transform::TypeConversionCastShapeDynamicDimsOp)</a></li></ul></li><li><a href=#vector-transform-operations>Vector Transform Operations</a><ul><li><a href=#transformapply_patternsvectorcast_away_vector_leading_one_dim-transformapplycastawayvectorleadingonedimpatternsop><code>transform.apply_patterns.vector.cast_away_vector_leading_one_dim</code> (transform::ApplyCastAwayVectorLeadingOneDimPatternsOp)</a></li><li><a href=#transformapply_patternsvectordrop_unit_dims_with_shape_cast-transformapplydropunitdimwithshapecastpatternsop><code>transform.apply_patterns.vector.drop_unit_dims_with_shape_cast</code> (transform::ApplyDropUnitDimWithShapeCastPatternsOp)</a></li><li><a href=#transformapply_patternsvectorfold_arith_extension-transformapplyfoldarithextensionpatternsop><code>transform.apply_patterns.vector.fold_arith_extension</code> (transform::ApplyFoldArithExtensionPatternsOp)</a></li><li><a href=#transformapply_patternsvectorelementwise_to_vector-transformapplyfoldelementwisetovectorpatternsop><code>transform.apply_patterns.vector.elementwise_to_vector</code> 
(transform::ApplyFoldElementwiseToVectorPatternsOp)</a></li><li><a href=#transformapply_patternsvectorinterleave_to_shuffle-transformapplyinterleavetoshufflepatternsop><code>transform.apply_patterns.vector.interleave_to_shuffle</code> (transform::ApplyInterleaveToShufflePatternsOp)</a></li><li><a href=#transformapply_patternsvectorlower_bitcast-transformapplylowerbitcastpatternsop><code>transform.apply_patterns.vector.lower_bitcast</code> (transform::ApplyLowerBitCastPatternsOp)</a></li><li><a href=#transformapply_patternsvectorlower_broadcast-transformapplylowerbroadcastpatternsop><code>transform.apply_patterns.vector.lower_broadcast</code> (transform::ApplyLowerBroadcastPatternsOp)</a></li><li><a href=#transformapply_patternsvectorlower_contraction-transformapplylowercontractionpatternsop><code>transform.apply_patterns.vector.lower_contraction</code> (transform::ApplyLowerContractionPatternsOp)</a></li><li><a href=#transformapply_patternsvectorlower_create_mask-transformapplylowercreatemaskpatternsop><code>transform.apply_patterns.vector.lower_create_mask</code> (transform::ApplyLowerCreateMaskPatternsOp)</a></li><li><a href=#transformapply_patternsvectorlower_gather-transformapplylowergatherpatternsop><code>transform.apply_patterns.vector.lower_gather</code> (transform::ApplyLowerGatherPatternsOp)</a></li><li><a href=#transformapply_patternsvectorlower_interleave-transformapplylowerinterleavepatternsop><code>transform.apply_patterns.vector.lower_interleave</code> (transform::ApplyLowerInterleavePatternsOp)</a></li><li><a href=#transformapply_patternsvectorlower_masked_transfers-transformapplylowermaskedtransferspatternsop><code>transform.apply_patterns.vector.lower_masked_transfers</code> (transform::ApplyLowerMaskedTransfersPatternsOp)</a></li><li><a href=#transformapply_patternsvectorlower_masks-transformapplylowermaskspatternsop><code>transform.apply_patterns.vector.lower_masks</code> (transform::ApplyLowerMasksPatternsOp)</a></li><li><a 
href=#transformapply_patternsvectorlower_multi_reduction-transformapplylowermultireductionpatternsop><code>transform.apply_patterns.vector.lower_multi_reduction</code> (transform::ApplyLowerMultiReductionPatternsOp)</a></li><li><a href=#transformapply_patternsvectorlower_outerproduct-transformapplylowerouterproductpatternsop><code>transform.apply_patterns.vector.lower_outerproduct</code> (transform::ApplyLowerOuterProductPatternsOp)</a></li><li><a href=#transformapply_patternsvectorlower_scan-transformapplylowerscanpatternsop><code>transform.apply_patterns.vector.lower_scan</code> (transform::ApplyLowerScanPatternsOp)</a></li><li><a href=#transformapply_patternsvectorlower_shape_cast-transformapplylowershapecastpatternsop><code>transform.apply_patterns.vector.lower_shape_cast</code> (transform::ApplyLowerShapeCastPatternsOp)</a></li><li><a href=#transformapply_patternsvectorlower_transfer-transformapplylowertransferpatternsop><code>transform.apply_patterns.vector.lower_transfer</code> (transform::ApplyLowerTransferPatternsOp)</a></li><li><a href=#transformapply_patternsvectorlower_transpose-transformapplylowertransposepatternsop><code>transform.apply_patterns.vector.lower_transpose</code> (transform::ApplyLowerTransposePatternsOp)</a></li><li><a href=#transformapply_patternsvectormaterialize_masks-transformapplymaterializemaskspatternsop><code>transform.apply_patterns.vector.materialize_masks</code> (transform::ApplyMaterializeMasksPatternsOp)</a></li><li><a href=#transformapply_patternsvectorrank_reducing_subview_patterns-transformapplyrankreducingsubviewpatternsop><code>transform.apply_patterns.vector.rank_reducing_subview_patterns</code> (transform::ApplyRankReducingSubviewPatternsOp)</a></li><li><a href=#transformapply_patternsvectorrewrite_narrow_types-transformapplyrewritenarrowtypepatternsop><code>transform.apply_patterns.vector.rewrite_narrow_types</code> (transform::ApplyRewriteNarrowTypePatternsOp)</a></li><li><a 
href=#transformapply_patternsvectorsplit_transfer_full_partial-transformapplysplittransferfullpartialpatternsop><code>transform.apply_patterns.vector.split_transfer_full_partial</code> (transform::ApplySplitTransferFullPartialPatternsOp)</a></li><li><a href=#transformapply_patternsvectortransfer_permutation_patterns-transformapplytransferpermutationpatternsop><code>transform.apply_patterns.vector.transfer_permutation_patterns</code> (transform::ApplyTransferPermutationPatternsOp)</a></li><li><a href=#transformapply_patternsvectortransfer_to_scf-transformapplytransfertoscfpatternsop><code>transform.apply_patterns.vector.transfer_to_scf</code> (transform::ApplyTransferToScfPatternsOp)</a></li><li><a href=#transformapply_patternsvectorreduction_to_contract-transformapplyvectorreductiontocontractpatternsop><code>transform.apply_patterns.vector.reduction_to_contract</code> (transform::ApplyVectorReductionToContractPatternsOp)</a></li><li><a href=#transformapply_conversion_patternsvectorvector_to_llvm-transformapplyvectortollvmconversionpatternsop><code>transform.apply_conversion_patterns.vector.vector_to_llvm</code> (transform::ApplyVectorToLLVMConversionPatternsOp)</a></li></ul></li><li><a href=#transformhandletypeinterface-transformhandletypeinterface>TransformHandleTypeInterface (<code>TransformHandleTypeInterface</code>)</a><ul><li><a href=#methods>Methods:</a></li></ul></li><li><a href=#transformparamtypeinterface-transformparamtypeinterface>TransformParamTypeInterface (<code>TransformParamTypeInterface</code>)</a><ul><li><a href=#methods-1>Methods:</a></li></ul></li><li><a href=#transformvaluehandletypeinterface-transformvaluehandletypeinterface>TransformValueHandleTypeInterface (<code>TransformValueHandleTypeInterface</code>)</a><ul><li><a href=#methods-2>Methods:</a></li></ul></li><li><a href=#conversionpatterndescriptoropinterface-conversionpatterndescriptoropinterface>ConversionPatternDescriptorOpInterface 
(<code>ConversionPatternDescriptorOpInterface</code>)</a><ul><li><a href=#methods-3>Methods:</a></li></ul></li><li><a href=#findpayloadreplacementopinterface-findpayloadreplacementopinterface>FindPayloadReplacementOpInterface (<code>FindPayloadReplacementOpInterface</code>)</a><ul><li><a href=#methods-4>Methods:</a></li></ul></li><li><a href=#patterndescriptoropinterface-patterndescriptoropinterface>PatternDescriptorOpInterface (<code>PatternDescriptorOpInterface</code>)</a><ul><li><a href=#methods-5>Methods:</a></li></ul></li><li><a href=#transformopinterface-transformopinterface>TransformOpInterface (<code>TransformOpInterface</code>)</a><ul><li><a href=#methods-6>Methods:</a></li></ul></li><li><a href=#typeconverterbuilderopinterface-typeconverterbuilderopinterface>TypeConverterBuilderOpInterface (<code>TypeConverterBuilderOpInterface</code>)</a><ul><li><a href=#methods-7>Methods:</a></li></ul></li></ul></nav><h2 id=overview>Overview <a class=headline-hash href=#overview>¶</a></h2><p>This dialect provides operations that can be used to control transformation of the IR using a different portion of the IR. It refers to the IR being transformed as payload IR, and to the IR guiding the transformation as transform IR.</p><p>The main use case for this dialect is orchestrating fine-grain transformations on individual IR objects (operations or values) or sets thereof. For example, it may involve finding loop-like operations with specific properties (e.g., large size) in the payload IR, applying loop tiling to those and only those operations, and then applying loop unrolling to the inner loops produced by the previous transformations. As such, it is not intended as a replacement for the pass infrastructure, nor for the pattern rewriting infrastructure. In the most common case, the transform IR will be processed and applied to the payload IR by a pass. 
Transformations expressed by the Transform dialect may be implemented using the pattern infrastructure or any other relevant MLIR component.</p><p>The following IR gives a rough idea of what the operations in this dialect may look like without using actually existing operations:</p><div class=highlight><pre tabindex=0 class=chroma><code class=language-mlir data-lang=mlir><span class=line><span class=cl><span class=nv>%0</span> <span class=p>=</span> transform<span class=p>.</span>loop<span class=p>.</span>find <span class=p>{</span> size <span class=p>></span> <span class=m>42</span> <span class=p>}</span> <span class=p>:</span> <span class=p>!</span>transform<span class=p>.</span>interface<span class=p><</span>tileable<span class=p>></span> </span></span><span class=line><span class=cl><span class=nv>%1</span> <span class=p>=</span> transform<span class=p>.</span>compute_trailing_tile_size <span class=nv>%0</span> <span class=p>:</span> <span class=p>!</span>transform<span class=p>.</span>param<span class=p><</span><span class=k>index</span><span class=p>></span> </span></span><span class=line><span class=cl><span class=nv>%2</span><span class=p>:</span><span class=nl>2 =</span> transform<span class=p>.</span>loop<span class=p>.</span>tile <span class=nv>%0</span> tile_sizes<span class=p>(</span><span class=m>1</span><span class=p>,</span> <span class=m>4</span><span class=p>,</span> <span class=nv>%1</span><span class=p>)</span> </span></span><span class=line><span class=cl> <span class=p>:</span> <span class=p>(!</span>transform<span class=p>.</span>interface<span class=p><</span>tileable<span class=p>>)</span> </span></span><span class=line><span class=cl> <span class=p>-></span> <span class=p>(!</span>transform<span class=p>.</span>op<span class=p><</span>loop<span class=p>>,</span> <span class=p>!</span>transform<span class=p>.</span>op<span class=p><</span>loop<span class=p>>)</span> </span></span><span class=line><span class=cl><span class=nv>%3</span> 
<span class=p>=</span> transform<span class=p>.</span>get_op_result <span class=p>[</span><span class=m>0</span><span class=p>]</span> <span class=nv>%2#0</span> <span class=p>:</span> <span class=p>!</span>transform<span class=p>.</span>any_value </span></span><span class=line><span class=cl>transform<span class=p>.</span>assign_to_fast_memory <span class=nv>%3</span> </span></span><span class=line><span class=cl>transform<span class=p>.</span>loop<span class=p>.</span>unroll <span class=nv>%2#1</span> <span class=p>:</span> <span class=p>!</span>transform<span class=p>.</span>op<span class=p><</span>loop<span class=p>></span> </span></span></code></pre></div><p>The values used in the Transform dialect may correspond to:</p><ul><li><p>sets of operations in the payload IR;</p></li><li><p>sets of values in the payload IR;</p></li><li><p>sets of parameters (attributes) known at the execution time of the transform dialect.</p></li></ul><p>The former two kinds of values are also referred to as operation and value <em>handles</em>, respectively. In the example above, <code>%0</code> corresponds to the set of loops found in the payload IR that satisfy the condition, and the two results of <code>%2</code> correspond to the groups of outer and inner loops, respectively, produced by the tiling transformation. <code>%3</code> corresponds to a set of values that are produced by the outer loops after tiling. <code>%1</code> corresponds to a list of tile sizes selected for each of the operations that <code>%0</code> corresponds to.</p><p>An operation handle such as <code>%0</code> may be associated with multiple payload operations. This is conceptually a set of operations and no assumptions should be made about the order of ops unless specified otherwise by the operation. Similarly, a value handle such as <code>%3</code> may be associated with a set of payload IR values. 
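</p><p>For illustration (a sketch using the existing <code>transform.structured.match</code> op; the value <code>%root</code> is assumed to be an operation handle to the payload root), a single handle may be associated with every matching payload operation at once:</p><div class=highlight><pre tabindex=0 class=chroma><code class=language-mlir data-lang=mlir>// %all_matmuls is one handle associated with all linalg.matmul
// operations found under the payload rooted at %root.
%all_matmuls = transform.structured.match ops{["linalg.matmul"]} in %root
    : (!transform.any_op) -&gt; !transform.any_op
</code></pre></div><p>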
Transform dialect operations may take as operands and produce an arbitrary combination of values representing handles and parameters. Most Transform IR ops support operand values that are mapped to multiple payload objects. They usually apply the respective transformation for every mapped object (“batched execution”). Deviations from this convention are described in the documentation of Transform IR ops.</p><p>Parameters, such as <code>%1</code> in the above example, have two logical roles in transform IR. In parameter-based control, they carry the values needed to execute the explicit control defined by the transforms, for example:</p><div class=highlight><pre tabindex=0 class=chroma><code class=language-mlir data-lang=mlir><span class=line><span class=cl><span class=nv>%0</span> <span class=p>=</span> transform<span class=p>.</span>match<span class=p>.</span>structured<span class=p>.</span>rank <span class=nv>%linalg_op_handle</span> <span class=p>:</span> <span class=p>!</span>transform<span class=p>.</span>param<span class=p><</span><span class=k>index</span><span class=p>></span> </span></span><span class=line><span class=cl><span class=nv>%1</span> <span class=p>=</span> transform<span class=p>.</span>param<span class=p>.</span><span class=kt>constant</span> <span class=m>3</span> <span class=p>:</span> <span class=k>index</span> <span class=p>-></span> <span class=p>!</span>transform<span class=p>.</span>param<span class=p><</span><span class=k>index</span><span class=p>></span> </span></span><span class=line><span class=cl>transform<span class=p>.</span>execute_if_cmpi eq <span class=nv>%0</span><span class=p>,</span> <span class=nv>%1</span> <span class=p>:</span> <span class=p>!</span>transform<span class=p>.</span>param<span class=p><</span><span class=k>index</span><span class=p>>,</span> <span class=p>!</span>transform<span class=p>.</span>param<span class=p><</span><span class=k>index</span><span class=p>></span> </span></span><span class=line><span 
class=cl><span class=c>// Some nested body of transform ops </span></span></span></code></pre></div><p>Alternatively, parameters can associate with the payload IR where the specific value at execution time has no bearing on the execution of the transform IR. In other words, parameters can either associate with the transform IR or the payload IR. Note that it is generally discouraged to use parameters containing arbitrary attributes within transform control. Parameter-based control should be explicitly typed where possible.</p><p>The transform IR values have transform IR types, which should implement exactly one of:</p><ul><li><p><a href=#transformhandletypeinterface-transformhandletypeinterface>TransformHandleTypeInterface</a>,</p></li><li><p><a href=#transformvaluehandletypeinterface-transformvaluehandletypeinterface>TransformValueHandleTypeInterface</a>,</p></li><li><p><a href=#transformparamtypeinterface-transformparamtypeinterface>TransformParamTypeInterface</a>.</p></li></ul><p>The goal of these type interfaces, beyond providing a common base for accepted types, is to verify the properties of the associated objects. For example, a handle type interface implementation may check whether all associated payload IR operations implement the “TileableOp” interface or have a specific “loop” kind. Similarly, a value handle type interface implementation may check if the associated payload IR values are block arguments or have a specific type, or a parameter type interface may check whether the associated attributes contain non-negative integer values. These properties are used to statically indicate pre- and post-conditions of a transformation connected to a Transform dialect operation. The conditions are verified when payload objects are first associated with a transform handle. 
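</p><p>For example (a sketch reusing <code>transform.structured.match</code> and the parametrized operation-handle type available upstream; <code>%root</code> is assumed), such a precondition can be encoded directly in the result type:</p><div class=highlight><pre tabindex=0 class=chroma><code class=language-mlir data-lang=mlir>// The !transform.op&lt;"linalg.matmul"&gt; type promises that every payload op
// associated with %mm is a linalg.matmul; this is verified when the handle
// is first associated with payload operations.
%mm = transform.structured.match ops{["linalg.matmul"]} in %root
    : (!transform.any_op) -&gt; !transform.op&lt;"linalg.matmul"&gt;
</code></pre></div><p>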
By convention, Transform dialect operations are expected to indicate narrow preconditions for their operands by enforcing operand type constraints in their definitions and verifiers. Conversely, operations are expected to have few constraints on their results. Specific instances of a transform operation can then be created with a more restricted result type than the constraint in the operation (e.g., the “find” operation only constrains the result type to be a transform IR type while its concrete instance can have a type with stricter constraints such as implementing the “tileable” interface). The verification will then happen at transform execution time. This approach allows one to capture payload IR operation properties in the transform IR without resorting to excessive use of type casts or coupling dialect extensions between themselves. It is a trade-off between verbosity/complexity and static hardening, which can be revised in the future.</p><p>Overall, Transform IR ops are expected to be contained in a single top-level op. Such top-level ops specify how to apply the transformations described by the operations they contain, e.g., <code>transform.sequence</code> executes transformations one by one and fails if any of them fails. 
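</p><p>A minimal top-level op may look as follows; <code>failures(propagate)</code> selects the failure propagation mode of <code>transform.sequence</code>:</p><div class=highlight><pre tabindex=0 class=chroma><code class=language-mlir data-lang=mlir>transform.sequence failures(propagate) {
^bb0(%root: !transform.any_op):
  // Transform ops placed here execute one by one on the payload
  // associated with %root; the sequence fails as soon as one of them
  // fails.
  transform.yield
}
</code></pre></div><p>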
Such ops are expected to have the <code>PossibleTopLevelTransformOpTrait</code> and may be used without arguments.</p><p>A program transformation expressed using the Transform dialect can be programmatically triggered by calling:</p><div class=highlight><pre tabindex=0 class=chroma><code class=language-c++ data-lang=c++><span class=line><span class=cl><span class=n>LogicalResult</span> <span class=n>transform</span><span class=o>::</span><span class=n>applyTransforms</span><span class=p>(</span> </span></span><span class=line><span class=cl> <span class=n>Operation</span> <span class=o>*</span><span class=n>payloadRoot</span><span class=p>,</span> </span></span><span class=line><span class=cl> <span class=k>const</span> <span class=n>RaggedArray</span><span class=o><</span><span class=n>transform</span><span class=o>::</span><span class=n>MappedValue</span><span class=o>></span> <span class=o>&</span><span class=n>extraMappings</span><span class=p>,</span> </span></span><span class=line><span class=cl> <span class=n>TransformOpInterface</span> <span class=n>transform</span><span class=p>,</span> </span></span><span class=line><span class=cl> <span class=k>const</span> <span class=n>TransformOptions</span> <span class=o>&</span><span class=n>options</span><span class=p>);</span> </span></span></code></pre></div><p>that applies the transformations specified by the top-level <code>transform</code> to payload IR contained in <code>payloadRoot</code>. The payload root operation will be associated with the first argument of the entry block of the top-level transform op. This block may have additional arguments, handles or parameters. They will be associated with values provided as <code>extraMappings</code>. 
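</p><p>As a sketch, a top-level <code>transform.sequence</code> with extra entry block arguments could look as follows; the trailing arguments are bound, in order, to the rows of <code>extraMappings</code>:</p><div class=highlight><pre tabindex=0 class=chroma><code class=language-mlir data-lang=mlir>transform.sequence failures(propagate) {
^bb0(%root: !transform.any_op, %extra: !transform.any_op, %size: !transform.param&lt;i64&gt;):
  // %root is associated with payloadRoot; %extra and %size are associated
  // with the values provided as extraMappings.
  transform.yield
}
</code></pre></div><p>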
The call will report an error and return if the wrong number of mappings is provided.</p><h2 id=dialect-extension-mechanism>Dialect Extension Mechanism <a class=headline-hash href=#dialect-extension-mechanism>¶</a></h2><p>This dialect is designed to be extensible, that is, clients of this dialect are allowed to inject additional operations into this dialect using the <code>TransformDialectExtension</code> mechanism. This allows the dialect to avoid a dependency on the implementation of the transformation as well as to avoid introducing dialect-specific transform dialects. In the example above, the operations may have been injected by a notional <code>loop</code> dialect rather than defined in this dialect, hence the common prefix.</p><p>It is recommended to prefix injected operations with one or several dot-separated words that indicate which extension adds them. For dialect-specific transformations, the prefix is naturally the name of the dialect, e.g., <code>transform.affine.reschedule</code>. For dialect-agnostic transformations (typically implemented using interfaces), the prefix may be derived from the interface name or from a common concept, e.g., <code>transform.loop.tile</code> may apply to any loop-like operation that implements <code>TileableOpInterface</code>. The C++ classes for the dialect extension should include the prefix in their name, e.g., <code>AffineTransformDialectExtension</code> or <code>LoopTransformDialectExtension</code> in the cases above. Unprefixed operation names are reserved for ops defined directly in the Transform dialect.</p><p>Operations injected into the dialect must:</p><ul><li><p>Implement the <code>TransformOpInterface</code> to execute the corresponding transformation on the payload IR.</p></li><li><p>Implement the <code>MemoryEffectsOpInterface</code> to annotate the effects of the transform IR operation on the payload IR as well as on the mapping between transform IR values and payload IR operations. 
See below for the description of available effects.</p></li></ul><p>The presence of interface implementations is checked at runtime when the dialect is loaded to allow for those implementations to be supplied by separate dialect extensions if desired.</p><p>Similarly to operations, additional types can be injected into the dialect using the same extension mechanism. The types must:</p><ul><li>Implement exactly one of <code>TransformHandleTypeInterface</code>, <code>TransformValueHandleTypeInterface</code>, <code>TransformParamTypeInterface</code>.</li></ul><h2 id=side-effects>Side Effects <a class=headline-hash href=#side-effects>¶</a></h2><p>The Transform dialect relies on MLIR side effect modelling to enable optimization of the transform IR. More specifically, it provides several side effect resource objects and expects operations to describe their effects on these resources.</p><ul><li><p><code>TransformMappingResource</code> - side effect resource corresponding to the mapping between transform IR values and payload IR operations.</p><ul><li><p>An <code>Allocate</code> effect from this resource means creating a new mapping entry, it is always accompanied by a <code>Write</code> effect.</p></li><li><p>A <code>Read</code> effect from this resource means accessing the mapping.</p></li><li><p>A <code>Free</code> effect on this resource indicates the removal of the mapping entry, typically after a transformation that modifies the payload IR operations associated with one of the transform IR operation’s operands. It is always accompanied by a <code>Read</code> effect.</p></li></ul></li><li><p><code>PayloadIRResource</code> - side effect resource corresponding to the payload IR itself.</p><ul><li><p>A <code>Read</code> effect from this resource means accessing the payload IR.</p></li><li><p>A <code>Write</code> effect on this resource means mutating the payload IR. 
It is almost always accompanied by a <code>Read</code>.</p></li></ul></li></ul><p>The typical flow of values in the transform IR is as follows. Most operations produce new transform IR values and immediately associate them with a list of payload IR operations. This corresponds to <code>Allocate</code> and <code>Write</code> effects on the <code>TransformMappingResource</code>, and often requires at least a <code>Read</code> effect on the <code>PayloadIRResource</code>. Transform operations that only inspect the payload IR to produce new handles are usually limited to these effects on their operands. Transform operations that mutate the payload IR are thought to <em>consume</em> the handles provided as operands, that is, to have the <code>Read</code> and <code>Free</code> effects on them. As with the usual memory effects, using a value after it was freed is incorrect. In the case of the transform IR, this value is likely associated with payload IR operations that were modified or even removed by the transformation, so it is meaningless to refer to them. When further transformations are desired, the transform operations can return <em>new</em> handles that can be read or consumed by subsequent operations.</p><h2 id=execution-model>Execution Model <a class=headline-hash href=#execution-model>¶</a></h2><p>The transformation starts at the user-specified top-level transform IR operation and applies to some user-specified payload IR scope, identified by the payload IR op that contains the IR to transform. It is the responsibility of the user to properly select the scope and/or to ensure that the transformations do not modify the IR outside of the given scope. 
The top-level transform IR operation may contain further transform operations and execute them in the desired order.</p><p>Transformation application functions produce a tri-state status:</p><ul><li>success;</li><li>recoverable (silenceable) failure;</li><li>irrecoverable failure.</li></ul><p>Transformation container operations may intercept recoverable failures and perform the required recovery steps thus succeeding themselves. On the other hand, they must propagate irrecoverable failures. For such failures, the diagnostics are emitted immediately whereas their emission is postponed for recoverable failures. Transformation container operations may also fail to recover from a theoretically recoverable failure, in which case they can either propagate it to their parent or emit the diagnostic and turn the failure into an irrecoverable one. A recoverable failure produced by applying the top-level transform IR operation is considered irrecoverable.</p><p>Transformation container operations are allowed to “step over” some nested operations if the application of some previous operation produced a failure. This can be conceptually thought of as having a global “recoverable error register” that is read/write accessed by each transform operation as a side effect. The transformation is skipped if the register already contains an error description, and the control flow proceeds to the following operation.</p><p>Note that a silenceable failure, if emitted, is a compiler <em>error</em> rather than a warning. Transformations are expected to produce silenceable failures if they haven’t yet modified the payload IR, i.e. when reporting a precondition failure, and an irrecoverable failure when they modified the IR in a way that is contrary to the semantics of the transform operation or would fail a postcondition. 
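</p><p>For instance, <code>transform.sequence</code> makes the recovery policy explicit through its failure propagation mode:</p><div class=highlight><pre tabindex=0 class=chroma><code class=language-mlir data-lang=mlir>// With `suppress`, silenceable (recoverable) failures of the nested ops
// are intercepted and the sequence itself succeeds; with `propagate`,
// they are forwarded to the parent.
transform.sequence failures(suppress) {
^bb0(%root: !transform.any_op):
  // ... transforms whose silenceable failures are tolerated ...
  transform.yield
}
</code></pre></div><p>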
Some “navigation” operations that identify payload IR targets for the following transformation may have a conceptual “failure to match” that is considered a successful execution in the execution model but results in handles associated with empty payload IR operation lists.</p><h2 id=handle-invalidation>Handle Invalidation <a class=headline-hash href=#handle-invalidation>¶</a></h2><p>The execution model of the Transform dialect allows a payload IR operation to be associated with <em>multiple</em> handles as well as nested payload IR operations to be associated with different handles. Similarly, a payload IR value may be associated with multiple transform IR value handles. When a transform IR operation consumes a handle, it usually indicates that the corresponding payload IR object was destroyed and should no longer be referenced. Transform IR handles that <em>may</em> be pointing to an erased payload IR object are <em>invalidated</em>. The mere presence of an invalidated handle in the transform IR is not a problem, but <em>using</em> it results in undefined behavior. Invalidated handles can be thought of as dangling pointers. 
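</p><p>As an illustration (a sketch; the exact tiling syntax may differ across versions), consuming one handle invalidates any other handle associated with the same payload operations:</p><div class=highlight><pre tabindex=0 class=chroma><code class=language-mlir data-lang=mlir>%matmuls = transform.structured.match ops{["linalg.matmul"]} in %root
    : (!transform.any_op) -&gt; !transform.any_op
%aliased = transform.structured.match ops{["linalg.matmul"]} in %root
    : (!transform.any_op) -&gt; !transform.any_op
// Tiling consumes %matmuls: the matched payload ops are rewritten.
%tiled, %loop = transform.structured.tile_using_for %matmuls tile_sizes [8]
    : (!transform.any_op) -&gt; (!transform.any_op, !transform.any_op)
// %aliased may now point at erased operations: it is invalidated, and
// using it from this point on is undefined behavior.
</code></pre></div><p>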
Note that the <em>entire</em> handle is invalidated, even if some of the payload IR objects associated with it remain live.</p><p>The following handle invalidation rules apply.</p><ul><li><p>When an operation handle is consumed, the following are invalidated:</p><ul><li><p>operation handles associated with one of the payload operations that the consumed handle is associated with;</p></li><li><p>operation handles associated with one of the operations <em>nested</em> in the payload operations described above;</p></li><li><p>value handles associated with any result of any operation described above;</p></li><li><p>value handles associated with any argument of a block contained in a region attached to any operation described above.</p></li></ul></li><li><p>When a value handle is consumed, the following are invalidated:</p><ul><li><p>operation handles associated with payload operations that produce as result any value associated with the consumed handle (when the associated value is an operation result);</p></li><li><p>operation handles associated with payload operations <em>nested</em> in the payload operations described above;</p></li><li><p>operation handles associated with payload operations (recursively) <em>contained</em> in the block that defines as argument any value associated with the consumed handle (when the associated value is a block argument); note that the adjacent blocks are not affected;</p></li><li><p>value handles associated with any result of any operation described above, including all results of the operation defining as result the value associated with the consumed handle;</p></li><li><p>value handles associated with any argument of a block contained in a region attached to any operation described above.</p></li></ul></li></ul><p>More intuitively, consuming a handle invalidates any handle that may be pointing to an object defined or contained in the payload IR subtree rooted at the closest operation or block.</p><p>The Transform dialect infrastructure has the capability of 
checking whether the transform IR op operand is invalidated before applying the transformation. However, such a check is computationally expensive and must be enabled explicitly through <code>TransformOptions</code>. Additionally, the <code>transform-dialect-check-uses</code> pass emits warnings when a handle may be used after it has been consumed, but does so abstractly, without processing the payload IR.</p><p>Values associated with parameters (non-handles) cannot be invalidated.</p><h2 id=intended-use-and-integrations>Intended Use and Integrations <a class=headline-hash href=#intended-use-and-integrations>¶</a></h2><p>The transformation control infrastructure provided by this dialect is positioned roughly between rewrite patterns and passes. A transformation that is executed by a transform operation is likely to be sufficiently complex to require at least a set of patterns to be implemented. It is also expected to be more focused than a pass: a pass typically applies identical transformations everywhere in the IR, a transform dialect-controlled transformation would apply to a small subset of operations selected, e.g., by a pattern-matching operation or generated by a previous transformation. It is discouraged, although technically possible, to run a pass pipeline as part of the transform op implementation.</p><p>One of the main scenarios for using this dialect is fine-grain chaining of transformations. 
For example, a loop-like operation may see its iteration domain split into two parts, implemented as separate loops (a transformation known as index-set splitting), each of which is then transformed differently (e.g., the first loop is tiled and the second unrolled) with the necessary enabling and cleanup patterns around the main transformation:</p><div class=highlight><pre tabindex=0 class=chroma><code class=language-mlir data-lang=mlir><span class=line><span class=cl><span class=c>// <generate %loop, e.g., by pattern-matching> </span></span></span><span class=line><span class=cl><span class=c>// ... </span></span></span><span class=line><span class=cl><span class=c></span><span class=nv>%parts</span><span class=p>:</span><span class=nl>2 =</span> transform<span class=p>.</span>loop<span class=p>.</span>split <span class=nv>%loop</span> <span class=p>{</span> <span class=nl>upper_bound_divisible_by =</span> <span class=m>8</span> <span class=p>}</span> </span></span><span class=line><span class=cl>transform<span class=p>.</span>loop<span class=p>.</span>tile <span class=nv>%parts#0</span> <span class=p>{</span> <span class=nl>tile_sizes =</span> <span class=p>[</span><span class=m>8</span><span class=p>]</span> <span class=p>}</span> </span></span><span class=line><span class=cl>transform<span class=p>.</span>loop<span class=p>.</span>unroll <span class=nv>%parts#1</span> <span class=p>{</span> full <span class=p>}</span> </span></span></code></pre></div><p>This composition would have been difficult to implement as separate passes since the hypothetical “tiling” and “unrolling” passes would need to somehow differentiate between the parts of the loop produced by the previous pass (both are the same operation, and it is likely undesirable to pollute the operation with pass-specific information). 
Implementing passes that run the combined transformation would have run into the combinatorial explosion issue due to multiple possible transform compositions or into the need for deep pass parameterization, the ultimate form of which is an ad-hoc dialect to specify which transformations the pass should run. The transform dialect provides a uniform, extensible mechanism for controlling transformations in such cases.</p><p>The Transform dialect is supposed to be consumed by an “interpreter” pass that drives the application of transformations. To ensure extensibility and composability, this pass is not expected to actually perform the transformations specified by the ops. Instead, the transformations are implemented by the transform ops themselves via <code>TransformOpInterface</code>. The pass serves as the entry point, handles the flow of transform operations and takes care of bookkeeping. As such, the Transform dialect does not provide the interpreter pass. Instead, it provides a set of utilities that can be used by clients to define their own interpreter passes or as part of a more complex pass. These include, for example, the mapping between values in the transform IR and operations in the payload IR, and the function that sequentially applies the transformations specified by the ops in a given block. Note that a transform op may have regions with further transform ops in them, with the op itself guiding how to dispatch the transformation control flow to those regions. This approach allows clients to decide on the relative location of the transform IR in their input (e.g., nested modules, separate modules, optional regions to certain operations, etc.), register additional transform operations and perform client-specific bookkeeping.</p><h2 id=effects-on-the-infrastructure>Effects on the Infrastructure <a class=headline-hash href=#effects-on-the-infrastructure>¶</a></h2><p>Although scoped to a single dialect, this functionality conceptually belongs to the MLIR infrastructure. 
It aims to be minimally intrusive and opt-in.</p><p>Some infrastructural components may grow extra functionality to support the transform dialect. In particular, the pattern infrastructure may add extra hooks to identify the “main results” of a transformation or to notify external observers about changes made to certain operations. These are not expected to affect the existing uses of the infrastructure.</p><p>For the sake of reusability, transformations should be implemented as utility functions that are called from the interface methods of transform ops rather than having the methods directly act on the payload IR.</p><h2 id=type-definitions>Type Definitions <a class=headline-hash href=#type-definitions>¶</a></h2><h3 id=affinemapparamtype>AffineMapParamType <a class=headline-hash href=#affinemapparamtype>¶</a></h3><p>Syntax: <code>!transform.affine_map</code></p><p>Transform IR parameter value that can be associated with a list of affine map attributes.</p><h3 id=anyoptype>AnyOpType <a class=headline-hash href=#anyoptype>¶</a></h3><p>Syntax: <code>!transform.any_op</code></p><p>Transform IR handle that can be associated with a list of arbitrary Payload IR operations.</p><h3 id=anyparamtype>AnyParamType <a class=headline-hash href=#anyparamtype>¶</a></h3><p>Syntax: <code>!transform.any_param</code></p><p>Transform IR value that can be associated with a list of parameters of any type.</p><h3 id=anyvaluetype>AnyValueType <a class=headline-hash href=#anyvaluetype>¶</a></h3><p>Syntax: <code>!transform.any_value</code></p><p>Transform IR value that can be associated with a list of Payload IR values.</p><h3 id=operationtype>OperationType <a class=headline-hash href=#operationtype>¶</a></h3><p>Syntax:</p><pre tabindex=0><code>!transform.op< ::llvm::StringRef # operation_name > </code></pre><p>Transform IR handle that can be associated with a list of Payload IR operations with the specified operation name.</p><h4 id=parameters>Parameters: <a class=headline-hash 
href=#parameters>¶</a></h4><table><thead><tr><th style=text-align:center>Parameter</th><th style=text-align:center>C++ type</th><th>Description</th></tr></thead><tbody><tr><td style=text-align:center>operation_name</td><td style=text-align:center><code>::llvm::StringRef</code></td><td>Name of the allowed payload operation</td></tr></tbody></table><h3 id=paramtype>ParamType <a class=headline-hash href=#paramtype>¶</a></h3><p>Syntax:</p><pre tabindex=0><code>!transform.param< ::mlir::Type # type > </code></pre><p>Transform IR value that can be associated with the list of parameters of the given type. Types are currently limited to integers, but may be extended in the future to other types whose values can be contained in attributes.</p><h4 id=parameters-1>Parameters: <a class=headline-hash href=#parameters-1>¶</a></h4><table><thead><tr><th style=text-align:center>Parameter</th><th style=text-align:center>C++ type</th><th>Description</th></tr></thead><tbody><tr><td style=text-align:center>type</td><td style=text-align:center><code>::mlir::Type</code></td><td>Underlying type of the parameter</td></tr></tbody></table><h3 id=typeparamtype>TypeParamType <a class=headline-hash href=#typeparamtype>¶</a></h3><p>Syntax: <code>!transform.type</code></p><p>Transform IR parameter value that can be associated with a list of type attributes.</p><h2 id=core-operations>Core Operations <a class=headline-hash href=#core-operations>¶</a></h2><p><a href=https://github.com/llvm/llvm-project/blob/main/mlir/include/mlir/Dialect/Transform/IR/TransformOps.td>source</a></p><h3 id=transformalternatives-transformalternativesop><code>transform.alternatives</code> (transform::AlternativesOp) <a class=headline-hash href=#transformalternatives-transformalternativesop>¶</a></h3><p><em>Attempts sequences of transforms until one succeeds</em></p><p>Syntax:</p><pre tabindex=0><code>operation ::= `transform.alternatives` ($scope^ `:` type($scope))? (`->` type($results)^)? 
attr-dict-with-keyword regions </code></pre><p>This op may have an arbitrary number of regions, each of which represents a sequence of transform operations to be applied to the same payload IR. The regions are visited in order of appearance, and transforms in them are applied in their respective order of appearance. If one of these transforms fails to apply, the remaining ops in the same region are skipped and the next region is attempted. If all transformations in a region succeed, the remaining regions are skipped and the entire “alternatives” transformation succeeds. If all regions contained a failing transformation, the entire “alternatives” transformation fails.</p><p>It is up to the nested operations to define which errors are “recoverable” (or “silenceable”) and allow other alternatives to be attempted, and which errors should be propagated without attempting the other alternatives.</p><p>The single operand of this operation is the scope in which the alternative transformation sequences are attempted, that is, an operation in the payload IR that contains all the other operations that may be modified by the transformations. The scope operation must be isolated from above. There is no check that the transforms are indeed scoped as their “apply” methods can be arbitrarily complex. Therefore it is the responsibility of the user to ensure that the transforms are scoped correctly, or to produce an irrecoverable error and thus abort the execution without attempting the remaining alternatives. Note that the payload IR outside of the given scope is not necessarily in a valid state, or even accessible to the transformation.</p><p>The changes to the IR within the scope performed by transforms in the failed alternative region are reverted before attempting the next region. Practically, this is achieved by cloning the scope. Therefore it is advised to limit the scope as much as possible and place the most likely alternatives early in the region list. 
The operation is also isolated from above and requires rediscovering the operations within the given scope to avoid additional handle invalidation. The latter restriction may be lifted in the future.</p><p>Each of the regions may yield transform IR handles. The handles of the first successful alternative region are returned as the results of the “alternatives” op. Therefore, each alternative region must yield the same number of results, which should also match the number and the types of the “alternatives” op results.</p><p>Remark: this op allows one to implement a simple “try” construct as follows:</p><div class=highlight><pre tabindex=0 class=chroma><code class=language-mlir data-lang=mlir><span class=line><span class=cl><span class=nv>%result</span> <span class=p>=</span> transform<span class=p>.</span>alternatives <span class=nv>%scope</span> <span class=p>{</span> </span></span><span class=line><span class=cl><span class=nl>^bb0</span><span class=p>(</span><span class=nv>%arg0</span><span class=p>:</span> <span class=p>!</span>transform<span class=p>.</span>any_op<span class=p>):</span> </span></span><span class=line><span class=cl> <span class=c>// Try a fallible transformation. </span></span></span><span class=line><span class=cl><span class=c></span> <span class=nv>%0</span> <span class=p>=</span> transform<span class=p>.</span>fallible <span class=nv>%arg0</span> <span class=c>// ... </span></span></span><span class=line><span class=cl><span class=c></span> <span class=c>// If succeeded, yield the result of the transformation. 
</span></span></span><span class=line><span class=cl><span class=c></span> transform<span class=p>.</span>yield <span class=nv>%0</span> <span class=p>:</span> <span class=p>!</span>transform<span class=p>.</span>any_op </span></span><span class=line><span class=cl><span class=p>},</span> <span class=p>{</span> </span></span><span class=line><span class=cl><span class=nl>^bb0</span><span class=p>(</span><span class=nv>%arg0</span><span class=p>:</span> <span class=p>!</span>transform<span class=p>.</span>any_op<span class=p>):</span> </span></span><span class=line><span class=cl> <span class=c>// Otherwise, the second alternative is tried and it always succeeds by </span></span></span><span class=line><span class=cl><span class=c></span> <span class=c>// returning the original handle. </span></span></span><span class=line><span class=cl><span class=c></span> transform<span class=p>.</span>yield <span class=nv>%arg0</span> <span class=p>:</span> <span class=p>!</span>transform<span class=p>.</span>any_op </span></span><span class=line><span class=cl><span class=p>}</span> </span></span></code></pre></div><p>Traits: <code>IsolatedFromAbove</code>, <code>PossibleTopLevelTransformOpTrait</code>, <code>SingleBlockImplicitTerminator<::mlir::transform::YieldOp></code>, <code>SingleBlock</code></p><p>Interfaces: <code>MemoryEffectOpInterface</code>, <code>RegionBranchOpInterface</code>, <code>TransformOpInterface</code></p><h4 id=operands>Operands: <a class=headline-hash href=#operands>¶</a></h4><table><thead><tr><th style=text-align:center>Operand</th><th>Description</th></tr></thead><tbody><tr><td style=text-align:center><code>scope</code></td><td>TransformHandleTypeInterface instance</td></tr></tbody></table><h4 id=results>Results: <a class=headline-hash href=#results>¶</a></h4><table><thead><tr><th style=text-align:center>Result</th><th>Description</th></tr></thead><tbody><tr><td style=text-align:center><code>results</code></td><td>variadic of 
TransformHandleTypeInterface instance</td></tr></tbody></table><h3 id=transformannotate-transformannotateop><code>transform.annotate</code> (transform::AnnotateOp) <a class=headline-hash href=#transformannotate-transformannotateop>¶</a></h3><p><em>Annotates the target operation with an attribute by name</em></p><p>Syntax:</p><pre tabindex=0><code>operation ::= `transform.annotate` $target $name attr-dict (`=` $param^)?`:` type($target) (`,` type($param)^)? </code></pre><p>Adds an attribute with the given <code>name</code> to the <code>target</code> operation. An optional <code>param</code> handle can be provided to give the attribute a specific value, else a UnitAttr is added. A single attribute will be broadcast to all target operations; otherwise, the attributes will be mapped 1:1 based on the order within the handles.</p><p>Produces a silenceable failure if the length of the parameter payload does not match the length of the target payload. Does not consume the provided handles.</p><p>Interfaces: <code>MemoryEffectOpInterface</code>, <code>TransformOpInterface</code></p><h4 id=attributes>Attributes: <a class=headline-hash href=#attributes>¶</a></h4><table><tr><th>Attribute</th><th>MLIR Type</th><th>Description</th></tr><tr><td><code>name</code></td><td>::mlir::StringAttr</td><td>string attribute</td></tr></table><h4 id=operands-1>Operands: <a class=headline-hash href=#operands-1>¶</a></h4><table><thead><tr><th style=text-align:center>Operand</th><th>Description</th></tr></thead><tbody><tr><td style=text-align:center><code>target</code></td><td>TransformHandleTypeInterface instance</td></tr><tr><td style=text-align:center><code>param</code></td><td>TransformParamTypeInterface instance</td></tr></tbody></table><h3 id=transformapply_patternscanonicalization-transformapplycanonicalizationpatternsop><code>transform.apply_patterns.canonicalization</code> (transform::ApplyCanonicalizationPatternsOp) <a class=headline-hash 
href=#transformapply_patternscanonicalization-transformapplycanonicalizationpatternsop>¶</a></h3><p><em>Populates canonicalization patterns</em></p><p>Syntax:</p><pre tabindex=0><code>operation ::= `transform.apply_patterns.canonicalization` attr-dict </code></pre><p>This op populates all canonicalization patterns of all loaded dialects in an <code>apply_patterns</code> transform.</p><p>Interfaces: <code>PatternDescriptorOpInterface</code></p><h3 id=transformapply_cse-transformapplycommonsubexpressioneliminationop><code>transform.apply_cse</code> (transform::ApplyCommonSubexpressionEliminationOp) <a class=headline-hash href=#transformapply_cse-transformapplycommonsubexpressioneliminationop>¶</a></h3><p><em>Eliminate common subexpressions in the body of the target op</em></p><p>Syntax:</p><pre tabindex=0><code>operation ::= `transform.apply_cse` `to` $target attr-dict `:` type($target) </code></pre><p>This transform applies common subexpression elimination (CSE) to the body of the targeted op.</p><p>This transform reads the target handle and modifies the payload. Existing handles to operations inside of the targeted op are retained and updated if necessary. 
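</p><p>For illustration, a minimal use might look as follows, assuming <code>%func</code> is a handle previously associated with a payload function:</p><pre tabindex=0><code>transform.apply_cse to %func : !transform.any_op
</code></pre><p>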
Note that this can lead to situations where a handle that was previously mapped to multiple distinct (but equivalent) operations is now mapped to the same operation multiple times.</p><p>Traits: <code>ReportTrackingListenerFailuresOpTrait</code>, <code>TransformEachOpTrait</code></p><p>Interfaces: <code>MemoryEffectOpInterface</code>, <code>TransformOpInterface</code></p><h4 id=operands-2>Operands: <a class=headline-hash href=#operands-2>¶</a></h4><table><thead><tr><th style=text-align:center>Operand</th><th>Description</th></tr></thead><tbody><tr><td style=text-align:center><code>target</code></td><td>TransformHandleTypeInterface instance</td></tr></tbody></table><h3 id=transformapply_conversion_patterns-transformapplyconversionpatternsop><code>transform.apply_conversion_patterns</code> (transform::ApplyConversionPatternsOp) <a class=headline-hash href=#transformapply_conversion_patterns-transformapplyconversionpatternsop>¶</a></h3><p><em>Applies conversion patterns to the body of the targeted op</em></p><p>Syntax:</p><pre tabindex=0><code>operation ::= `transform.apply_conversion_patterns` `to` $target $patterns (`with` `type_converter` $default_type_converter_region^)? attr-dict `:` type($target) </code></pre><p>This transform applies the specified conversion patterns to the targeted op and all nested ops. By default, this transform applies a “full” dialect conversion. If the <code>partial_conversion</code> unit attribute is present, this transform applies a partial dialect conversion.</p><p>The patterns that should be applied are specified in the first graph region of this op. They must implement the <code>ConversionPatternDescriptorOpInterface</code>. The order in which patterns are applied is unspecified; i.e., the ordering of ops in the region of this op is irrelevant.</p><p>The second, optional graph region contains exactly one op that specifies the default type converter that should be used with this dialect conversion. 
If provided, this op must implement the <code>TypeConverterBuilderOpInterface</code>. Type converters are a property of conversion patterns: each conversion pattern stores the type converter that should be used in its C++ class. Each conversion pattern descriptor can optionally specify a type converter in its <code>getTypeConverter</code> interface method. If no type converter is specified in this method, the default type converter of the dialect conversion is used. Default type converters are useful if the same type converter should be used for multiple sets of conversion patterns. (Patterns that should not use this default type converter specify their own type converter.)</p><p>The <code>legal_ops</code>, <code>illegal_ops</code>, <code>legal_dialects</code>, and <code>illegal_dialects</code> attributes specify the conversion target.</p><p>This transform modifies the payload. By default, it consumes the <code>target</code> handle. It does not produce any handles.</p><p>If the <code>preserve_handles</code> attribute is set, this transform does not consume the <code>target</code> handle and instead updates handles based on notifications from a tracking listener that is attached to the dialect conversion, similar to <code>transform.apply_patterns</code>. Only replacements via <code>RewriterBase::replaceOp</code> or <code>replaceOpWithNewOp</code> are considered “payload op replacements”. In contrast to <code>transform.apply_patterns</code>, we allow replacement ops even if the op name has changed. This is because conversion patterns are expected to lower ops to different ops (from a different dialect). 
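</p><p>As an illustrative sketch (the handle name <code>%module</code> and the particular pattern and type converter descriptor ops are assumptions for illustration, not part of this op’s definition), a conversion of “math” dialect ops to the LLVM dialect could be driven as follows:</p><pre tabindex=0><code>transform.apply_conversion_patterns to %module {
  // Patterns lowering "math" ops to LLVM dialect ops.
  transform.apply_conversion_patterns.dialect_to_llvm "math"
} with type_converter {
  // Any op implementing TypeConverterBuilderOpInterface can be used here.
  transform.apply_conversion_patterns.memref.memref_to_llvm_type_converter
} {legal_dialects = ["llvm"]} : !transform.any_op
</code></pre><p>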
More details can be found at the documentation site of <code>TrackingListener</code>.</p><p>This transform produces a silenceable failure if the dialect conversion was unsuccessful or the tracking listener failed to find a replacement op.</p><p>Traits: <code>HasOnlyGraphRegion</code>, <code>NoTerminator</code>, <code>ReportTrackingListenerFailuresOpTrait</code>, <code>SingleBlock</code></p><p>Interfaces: <code>MemoryEffectOpInterface</code>, <code>RegionKindInterface</code>, <code>TransformOpInterface</code></p><h4 id=attributes-1>Attributes: <a class=headline-hash href=#attributes-1>¶</a></h4><table><tr><th>Attribute</th><th>MLIR Type</th><th>Description</th></tr><tr><td><code>legal_ops</code></td><td>::mlir::ArrayAttr</td><td>string array attribute</td></tr><tr><td><code>illegal_ops</code></td><td>::mlir::ArrayAttr</td><td>string array attribute</td></tr><tr><td><code>legal_dialects</code></td><td>::mlir::ArrayAttr</td><td>string array attribute</td></tr><tr><td><code>illegal_dialects</code></td><td>::mlir::ArrayAttr</td><td>string array attribute</td></tr><tr><td><code>partial_conversion</code></td><td>::mlir::UnitAttr</td><td>unit attribute</td></tr><tr><td><code>preserve_handles</code></td><td>::mlir::UnitAttr</td><td>unit attribute</td></tr></table><h4 id=operands-3>Operands: <a class=headline-hash href=#operands-3>¶</a></h4><table><thead><tr><th style=text-align:center>Operand</th><th>Description</th></tr></thead><tbody><tr><td style=text-align:center><code>target</code></td><td>TransformHandleTypeInterface instance</td></tr></tbody></table><h3 id=transformapply_dce-transformapplydeadcodeeliminationop><code>transform.apply_dce</code> (transform::ApplyDeadCodeEliminationOp) <a class=headline-hash href=#transformapply_dce-transformapplydeadcodeeliminationop>¶</a></h3><p><em>Eliminate dead operations in the body of the target op</em></p><p>Syntax:</p><pre tabindex=0><code>operation ::= `transform.apply_dce` `to` $target attr-dict `:` type($target) 
</code></pre><p>This transform applies dead code elimination (DCE) to the body of the targeted op.</p><p>Note: “transform.apply_patterns” with an empty region can also be used to remove dead ops. However, that op applies additional simplifications such as op folding and region simplification.</p><p>This transform reads the target handle and modifies the payload. Note that this transform may silently remove payload ops from handles.</p><p>Traits: <code>ReportTrackingListenerFailuresOpTrait</code>, <code>TransformEachOpTrait</code></p><p>Interfaces: <code>MemoryEffectOpInterface</code>, <code>TransformOpInterface</code></p><h4 id=operands-4>Operands: <a class=headline-hash href=#operands-4>¶</a></h4><table><thead><tr><th style=text-align:center>Operand</th><th>Description</th></tr></thead><tbody><tr><td style=text-align:center><code>target</code></td><td>TransformHandleTypeInterface instance</td></tr></tbody></table><h3 id=transformapply_licm-transformapplyloopinvariantcodemotionop><code>transform.apply_licm</code> (transform::ApplyLoopInvariantCodeMotionOp) <a class=headline-hash href=#transformapply_licm-transformapplyloopinvariantcodemotionop>¶</a></h3><p><em>Move loop-invariant code out of a loop-like op</em></p><p>Syntax:</p><pre tabindex=0><code>operation ::= `transform.apply_licm` `to` $target attr-dict `:` type($target) </code></pre><p>This transform moves side-effect-free, loop-invariant code out of the targeted loop-like op. 
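</p><p>For example, assuming <code>%loop</code> is a handle associated with a loop-like payload op such as <code>scf.for</code>:</p><pre tabindex=0><code>transform.apply_licm to %loop : !transform.any_op
</code></pre><p>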
The targeted op must implement the <code>LoopLikeOpInterface</code>.</p><p>Note: To move invariant ops from a loop nest, this transform must be applied to each loop of the loop nest, starting with the innermost loop.</p><p>This transform reads the target handle and modifies the payload.</p><p>Traits: <code>ReportTrackingListenerFailuresOpTrait</code>, <code>TransformEachOpTrait</code></p><p>Interfaces: <code>MemoryEffectOpInterface</code>, <code>TransformOpInterface</code></p><h4 id=operands-5>Operands: <a class=headline-hash href=#operands-5>¶</a></h4><table><thead><tr><th style=text-align:center>Operand</th><th>Description</th></tr></thead><tbody><tr><td style=text-align:center><code>target</code></td><td>TransformHandleTypeInterface instance</td></tr></tbody></table><h3 id=transformapply_patterns-transformapplypatternsop><code>transform.apply_patterns</code> (transform::ApplyPatternsOp) <a class=headline-hash href=#transformapply_patterns-transformapplypatternsop>¶</a></h3><p><em>Greedily applies patterns to the body of the targeted op</em></p><p>Syntax:</p><pre tabindex=0><code>operation ::= `transform.apply_patterns` `to` $target $patterns attr-dict `:` type($target) </code></pre><p>This transform greedily applies the specified patterns to the body of the targeted op until a fixpoint is reached. Patterns are not applied to the targeted op itself.</p><p>The patterns that should be applied are specified in the graph region of this op. They must implement the <code>PatternDescriptorOpInterface</code>. The order in which patterns are applied is unspecified; i.e., the ordering of ops in the region of this op is irrelevant.</p><p>If <code>apply_cse</code> is set, the greedy pattern rewrite is interleaved with common subexpression elimination (CSE): both are repeated until a fixpoint is reached.</p><p>This transform only reads the target handle and modifies the payload. 
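</p><p>For instance, greedily applying canonicalization patterns to the body of a function might look as follows (the handle name <code>%func</code> is an assumption for illustration):</p><pre tabindex=0><code>transform.apply_patterns to %func {
  transform.apply_patterns.canonicalization
} : !transform.any_op
</code></pre><p>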
If a pattern erases or replaces a tracked op, the mapping is updated accordingly.</p><p>Only replacements via <code>RewriterBase::replaceOp</code> or <code>replaceOpWithNewOp</code> are considered “payload op replacements”. Furthermore, the mapping is updated only if the replacement values are defined by the same op and that op has the same type as the original op. Otherwise, this transform produces a silenceable failure. More details can be found at the documentation site of <code>TrackingListener</code>.</p><p>This transform also produces a silenceable failure if the pattern application did not converge within the default number of iterations/rewrites of the greedy pattern rewrite driver.</p><p>Traits: <code>HasOnlyGraphRegion</code>, <code>NoTerminator</code>, <code>ReportTrackingListenerFailuresOpTrait</code>, <code>SingleBlock</code>, <code>TransformEachOpTrait</code></p><p>Interfaces: <code>MemoryEffectOpInterface</code>, <code>RegionKindInterface</code>, <code>TransformOpInterface</code></p><h4 id=attributes-2>Attributes: <a class=headline-hash href=#attributes-2>¶</a></h4><table><tr><th>Attribute</th><th>MLIR Type</th><th>Description</th></tr><tr><td><code>apply_cse</code></td><td>::mlir::UnitAttr</td><td>unit attribute</td></tr><tr><td><code>max_iterations</code></td><td>::mlir::IntegerAttr</td><td>64-bit signless integer attribute</td></tr><tr><td><code>max_num_rewrites</code></td><td>::mlir::IntegerAttr</td><td>64-bit signless integer attribute</td></tr></table><h4 id=operands-6>Operands: <a class=headline-hash href=#operands-6>¶</a></h4><table><thead><tr><th style=text-align:center>Operand</th><th>Description</th></tr></thead><tbody><tr><td style=text-align:center><code>target</code></td><td>TransformHandleTypeInterface instance</td></tr></tbody></table><h3 id=transformapply_registered_pass-transformapplyregisteredpassop><code>transform.apply_registered_pass</code> (transform::ApplyRegisteredPassOp) <a class=headline-hash 
href=#transformapply_registered_pass-transformapplyregisteredpassop>¶</a></h3><p><em>Applies the specified registered pass or pass pipeline</em></p><p>Syntax:</p><pre tabindex=0><code>operation ::= `transform.apply_registered_pass` $pass_name `to` $target attr-dict `:` functional-type(operands, results) </code></pre><p>This transform applies the specified pass or pass pipeline to the targeted ops. The name of the pass/pipeline is specified as a string attribute, as set during pass/pipeline registration. Optionally, pass options may be specified as a string attribute. The pass options syntax is identical to the one used with “mlir-opt”.</p><p>This op first looks for a pass pipeline with the specified name. If no such pipeline exists, it looks for a pass with the specified name. If no such pass exists either, this op fails definitely.</p><p>This transform consumes the target handle and produces a new handle that is mapped to the same op. Passes are not allowed to remove/modify the operation that they operate on, so the target op is guaranteed to still exist. 
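</p><p>For example, running the registered “canonicalize” pass on a payload function might look as follows (the handle name <code>%func</code> is an assumption for illustration):</p><pre tabindex=0><code>%new_func = transform.apply_registered_pass "canonicalize" to %func
    : (!transform.any_op) -> !transform.any_op
</code></pre><p>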
The target handle is invalidated because a pass may arbitrarily modify the body of targeted ops.</p><p>Traits: <code>FunctionalStyleTransformOpTrait</code>, <code>TransformEachOpTrait</code></p><p>Interfaces: <code>MemoryEffectsOpInterface</code>, <code>TransformOpInterface</code></p><h4 id=attributes-3>Attributes: <a class=headline-hash href=#attributes-3>¶</a></h4><table><tr><th>Attribute</th><th>MLIR Type</th><th>Description</th></tr><tr><td><code>pass_name</code></td><td>::mlir::StringAttr</td><td>string attribute</td></tr><tr><td><code>options</code></td><td>::mlir::StringAttr</td><td>string attribute</td></tr></table><h4 id=operands-7>Operands: <a class=headline-hash href=#operands-7>¶</a></h4><table><thead><tr><th style=text-align:center>Operand</th><th>Description</th></tr></thead><tbody><tr><td style=text-align:center><code>target</code></td><td>TransformHandleTypeInterface instance</td></tr></tbody></table><h4 id=results-1>Results: <a class=headline-hash href=#results-1>¶</a></h4><table><thead><tr><th style=text-align:center>Result</th><th>Description</th></tr></thead><tbody><tr><td style=text-align:center><code>result</code></td><td>TransformHandleTypeInterface instance</td></tr></tbody></table><h3 id=transformapply_conversion_patternsdialect_to_llvm-transformapplytollvmconversionpatternsop><code>transform.apply_conversion_patterns.dialect_to_llvm</code> (transform::ApplyToLLVMConversionPatternsOp) <a class=headline-hash href=#transformapply_conversion_patternsdialect_to_llvm-transformapplytollvmconversionpatternsop>¶</a></h3><p>Syntax:</p><pre tabindex=0><code>operation ::= `transform.apply_conversion_patterns.dialect_to_llvm` $dialect_name attr-dict </code></pre><p>Collects patterns that convert ops from the specified dialect to LLVM dialect ops. These patterns require an “LLVMTypeConverter”.</p><p>Note: Only dialects that implement the <code>ConvertToLLVMPatternInterface</code> are supported. 
Any conversion target modifications by interface implementations are currently ignored. The conversion target is fully specified by the enclosing “apply_conversion_patterns” op.</p><p>Interfaces: <code>ConversionPatternDescriptorOpInterface</code></p><h4 id=attributes-4>Attributes: <a class=headline-hash href=#attributes-4>¶</a></h4><table><tr><th>Attribute</th><th>MLIR Type</th><th>Description</th></tr><tr><td><code>dialect_name</code></td><td>::mlir::StringAttr</td><td>string attribute</td></tr></table><h3 id=transformcast-transformcastop><code>transform.cast</code> (transform::CastOp) <a class=headline-hash href=#transformcast-transformcastop>¶</a></h3><p>Syntax:</p><pre tabindex=0><code>operation ::= `transform.cast` $input attr-dict `:` type($input) `to` type($output) </code></pre><p>Traits: <code>TransformEachOpTrait</code></p><p>Interfaces: <code>CastOpInterface</code>, <code>MemoryEffectOpInterface</code>, <code>TransformOpInterface</code></p><h4 id=operands-8>Operands: <a class=headline-hash href=#operands-8>¶</a></h4><table><thead><tr><th style=text-align:center>Operand</th><th>Description</th></tr></thead><tbody><tr><td style=text-align:center><code>input</code></td><td>TransformHandleTypeInterface instance</td></tr></tbody></table><h4 id=results-2>Results: <a class=headline-hash href=#results-2>¶</a></h4><table><thead><tr><th style=text-align:center>Result</th><th>Description</th></tr></thead><tbody><tr><td style=text-align:center><code>output</code></td><td>TransformHandleTypeInterface instance</td></tr></tbody></table><h3 id=transformcollect_matching-transformcollectmatchingop><code>transform.collect_matching</code> (transform::CollectMatchingOp) <a class=headline-hash href=#transformcollect_matching-transformcollectmatchingop>¶</a></h3><p><em>Collects all payload ops that match the given named matcher</em></p><p>Syntax:</p><pre tabindex=0><code>operation ::= `transform.collect_matching` $matcher `in` $root attr-dict `:` functional-type($root, 
$results) </code></pre><p>Collects operations or other payload IR objects nested under <code>root</code> (inclusive) that match the given matcher expressed as a named sequence. The matcher sequence must accept exactly one argument that it is not allowed to modify. It must yield as many values as this op has results. Each of the yielded values must be associated with exactly one payload object. If any operation in the matcher sequence produces a silenceable failure, the matcher advances to the next payload operation in the walk order without finishing the sequence.</p><p>The i-th result of this operation is constructed by concatenating the i-th yielded payload IR objects of all successful matcher sequence applications. All results are guaranteed to be mapped to the same number of payload IR objects.</p><p>The operation succeeds unless the matcher sequence produced a definite failure for any invocation.</p><p>Interfaces: <code>MemoryEffectOpInterface</code>, <code>SymbolUserOpInterface</code>, <code>TransformOpInterface</code></p><h4 id=attributes-5>Attributes: <a class=headline-hash href=#attributes-5>¶</a></h4><table><tr><th>Attribute</th><th>MLIR Type</th><th>Description</th></tr><tr><td><code>matcher</code></td><td>::mlir::SymbolRefAttr</td><td>symbol reference attribute</td></tr></table><h4 id=operands-9>Operands: <a class=headline-hash href=#operands-9>¶</a></h4><table><thead><tr><th style=text-align:center>Operand</th><th>Description</th></tr></thead><tbody><tr><td style=text-align:center><code>root</code></td><td>TransformHandleTypeInterface instance</td></tr></tbody></table><h4 id=results-3>Results: <a class=headline-hash href=#results-3>¶</a></h4><table><thead><tr><th style=text-align:center>Result</th><th>Description</th></tr></thead><tbody><tr><td style=text-align:center><code>results</code></td><td>variadic of any transform handle or parameter</td></tr></tbody></table><h3 
id=transformforeach_match-transformforeachmatchop><code>transform.foreach_match</code> (transform::ForeachMatchOp) <a class=headline-hash href=#transformforeach_match-transformforeachmatchop>¶</a></h3><p><em>Applies named sequences when a named matcher succeeds</em></p><p>Syntax:</p><pre tabindex=0><code>operation ::= `transform.foreach_match` oilist( `restrict_root` $restrict_root | `flatten_results` $flatten_results ) `in` $root (`,` $forwarded_inputs^)? custom<ForeachMatchSymbols>($matchers, $actions) attr-dict `:` functional-type(operands, results) </code></pre><p>Given a pair of co-indexed lists of transform dialect symbols (such as <code>transform.named_sequence</code>), walks the payload IR associated with the root handle and interprets the symbols as matcher/action pairs by applying the body of the corresponding symbol definition. The symbol from the first list is the matcher part: if it results in a silenceable error, the error is silenced and the next matcher is attempted. Definite failures from any matcher stop the application immediately and are propagated unconditionally. If none of the matchers succeeds, the next payload operation in walk order (post-order at the moment of writing, double check <code>Operation::walk</code>) is matched. If a matcher succeeds, the co-indexed action symbol is applied and the following matchers are not applied to the same payload operation. If the action succeeds, the next payload operation in walk order is matched. If it fails, both silenceable and definite errors are propagated as the result of this op; propagation of silenceable errors is postponed until the end of the walk.</p><p>The matcher symbol must take at least one operand of a type that implements the same transform dialect interface as the <code>root</code> operand (a check is performed at application time to see if the associated payload satisfies the constraints of the actual type), and may take additional operands with a similar type requirement. 
It must not consume operands as multiple matchers may be applied. The matcher may produce any number of results. The action symbol paired with the matcher must take the same number of arguments as the matcher has results, and these arguments must implement the same transform dialect interfaces, but not necessarily have the exact same type (again, a check is performed at application time to see if the associated payload satisfies the constraints of actual types on both sides).</p><p>The action symbol may have results that are accumulated from all actions and returned from the <code>foreach_match</code> operation on success. Unless the <code>flatten_results</code> attribute is present, each action result must be associated with exactly one payload entity. The actions are expected to only modify payload operations nested in the <code>root</code> payload operations associated with the operand of this transform operation. Furthermore, the actions may not modify operations outside of the currently matched payload operation, e.g., they may not modify sibling or parent operations. If such behavior is desired, the parent must be matched first and the nested operations obtained by traversing the IR from the parent. This is due to the matching being performed as a post-order IR walk.</p><p>This operation consumes the operand and produces a new handle associated with the same payload. This is necessary to trigger invalidation of handles to any of the payload operations nested in the payload operations associated with the operand, as those are likely to be modified by actions.</p><p>By default, the root payload operation associated with the operand is not matched. This is to support the conservative case where applied actions may invalidate the root payload operation. If the optional <code>restrict_root</code> attribute is set, the root operand is guaranteed to not be invalidated by any of the applied actions. In such cases, the root payload operation is also matched. 
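</p><p>An illustrative sketch (the matcher and action names are hypothetical):</p><pre tabindex=0><code>transform.named_sequence @match_matmul(%op: !transform.any_op {transform.readonly}) -> !transform.any_op {
  transform.match.operation_name %op ["linalg.matmul"] : !transform.any_op
  transform.yield %op : !transform.any_op
}
transform.named_sequence @tile(%matmul: !transform.any_op {transform.readonly}) {
  // ... act on %matmul ...
  transform.yield
}
// With restrict_root set, the payload op associated with %root is matched too.
%updated = transform.foreach_match restrict_root in %root
    @match_matmul -> @tile
  : (!transform.any_op) -> !transform.any_op
</code></pre><p>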
This is useful because matching the root payload operation is a common idiom, e.g., when matching a <code>func.func</code> directly along with the operations nested under it.</p><p>The operation succeeds if none of the matchers produced a definite failure during application and if all of the applied actions produced success. Note that it also succeeds if all the matchers failed on all payload operations, i.e. failure to apply is not an error. The operation produces a silenceable failure if any applied action produced a silenceable failure. In this case, the resulting handle is associated with an empty payload. The operation produces a definite failure if any of the applied matchers or actions produced a definite failure.</p><p>Interfaces: <code>MemoryEffectOpInterface</code>, <code>OpAsmOpInterface</code>, <code>SymbolUserOpInterface</code>, <code>TransformOpInterface</code></p><h4 id=attributes-6>Attributes: <a class=headline-hash href=#attributes-6>¶</a></h4><table><tr><th>Attribute</th><th>MLIR Type</th><th>Description</th></tr><tr><td><code>restrict_root</code></td><td>::mlir::UnitAttr</td><td>unit attribute</td></tr><tr><td><code>flatten_results</code></td><td>::mlir::UnitAttr</td><td>unit attribute</td></tr><tr><td><code>matchers</code></td><td>::mlir::ArrayAttr</td><td>symbol ref array attribute</td></tr><tr><td><code>actions</code></td><td>::mlir::ArrayAttr</td><td>symbol ref array attribute</td></tr></table><h4 id=operands-10>Operands: <a class=headline-hash href=#operands-10>¶</a></h4><table><thead><tr><th style=text-align:center>Operand</th><th>Description</th></tr></thead><tbody><tr><td style=text-align:center><code>root</code></td><td>TransformHandleTypeInterface instance</td></tr><tr><td style=text-align:center><code>forwarded_inputs</code></td><td>variadic of any transform handle or parameter</td></tr></tbody></table><h4 id=results-4>Results: <a class=headline-hash href=#results-4>¶</a></h4><table><thead><tr><th
style=text-align:center>Result</th><th>Description</th></tr></thead><tbody><tr><td style=text-align:center><code>updated</code></td><td>TransformHandleTypeInterface instance</td></tr><tr><td style=text-align:center><code>forwarded_outputs</code></td><td>variadic of any transform handle or parameter</td></tr></tbody></table><h3 id=transformforeach-transformforeachop><code>transform.foreach</code> (transform::ForeachOp) <a class=headline-hash href=#transformforeach-transformforeachop>¶</a></h3><p><em>Executes the body for each element of the payload</em></p><p>Syntax:</p><pre tabindex=0><code>operation ::= `transform.foreach` $targets oilist(`with_zip_shortest` $with_zip_shortest) `:` type($targets) (`->` type($results)^)? $body attr-dict </code></pre><p>Execute the op’s body (its single region block) exactly once per element of the payload associated with a target handle. The body’s transformations are applied in order of appearance until reaching the (implicit) YieldOp terminator.</p><p>Each iteration gets executed by co-indexing the payloads of the arguments and mapping the body’s arguments to these tuples, as though iterating over the zipped-together <code>targets</code>. As such, in each iteration, the size of the payload of each of the body’s block arguments is exactly one. The attribute <code>with_zip_shortest</code> can be used if the targets vary in their number of payloads; this will limit the iterations to only the number of payloads found in the shortest target.</p><p>This op always reads the target handles. Furthermore, it consumes a handle if there is a transform op in the body that consumes the corresponding block argument. Handles can point to ops, values, or parameters.</p><h4 id=return-modes>Return Modes <a class=headline-hash href=#return-modes>¶</a></h4><p>This op produces as many result handles as the body’s terminating YieldOp has operands.
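</p><p>For instance (a sketch with illustrative handle names), yielding one value per iteration produces co-indexed result handles:</p><pre tabindex=0><code>%parents = transform.foreach %targets : !transform.any_op -> !transform.any_op {
^bb0(%op: !transform.any_op):
  // Exactly one payload op is associated with %op in each iteration.
  %parent = transform.get_parent_op %op : (!transform.any_op) -> !transform.any_op
  transform.yield %parent : !transform.any_op
}
</code></pre><p>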
For each result, the payloads of the corresponding YieldOp operand are merged and mapped to the same resulting handle.</p><p>If the target handles do not associate payloads of the same size, a silenceable failure will be generated.</p><p>During application, if any transformation in the sequence fails, the entire sequence fails immediately with the same failure, leaving the payload IR in a potentially invalid state, i.e., this operation offers no transformation rollback capabilities.</p><p>Traits: <code>SingleBlockImplicitTerminator<::mlir::transform::YieldOp></code>, <code>SingleBlock</code></p><p>Interfaces: <code>MemoryEffectOpInterface</code>, <code>RegionBranchOpInterface</code>, <code>TransformOpInterface</code></p><h4 id=attributes-7>Attributes: <a class=headline-hash href=#attributes-7>¶</a></h4><table><tr><th>Attribute</th><th>MLIR Type</th><th>Description</th></tr><tr><td><code>with_zip_shortest</code></td><td>::mlir::UnitAttr</td><td>unit attribute</td></tr></table><h4 id=operands-11>Operands: <a class=headline-hash href=#operands-11>¶</a></h4><table><thead><tr><th style=text-align:center>Operand</th><th>Description</th></tr></thead><tbody><tr><td style=text-align:center><code>targets</code></td><td>variadic of any transform handle or parameter</td></tr></tbody></table><h4 id=results-5>Results: <a class=headline-hash href=#results-5>¶</a></h4><table><thead><tr><th style=text-align:center>Result</th><th>Description</th></tr></thead><tbody><tr><td style=text-align:center><code>results</code></td><td>variadic of any transform handle or parameter</td></tr></tbody></table><h3 id=transformget_consumers_of_result-transformgetconsumersofresult><code>transform.get_consumers_of_result</code> (transform::GetConsumersOfResult) <a class=headline-hash href=#transformget_consumers_of_result-transformgetconsumersofresult>¶</a></h3><p><em>Get handle to the consumers of this operation’s result number</em></p><p>Syntax:</p><pre tabindex=0><code>operation ::=
`transform.get_consumers_of_result` $target `[` $result_number `]` attr-dict `:` functional-type(operands, results) </code></pre><p>The handle defined by this Transform op corresponds to all operations that consume the SSA value defined by the <code>target</code> and <code>result_number</code> arguments. This operation applies to a single payload operation; otherwise it produces a definite failure. The return handle points to the consuming operations, which can be empty.</p><p>Traits: <code>NavigationTransformOpTrait</code></p><p>Interfaces: <code>MemoryEffectsOpInterface</code>, <code>TransformOpInterface</code></p><h4 id=attributes-8>Attributes: <a class=headline-hash href=#attributes-8>¶</a></h4><table><tr><th>Attribute</th><th>MLIR Type</th><th>Description</th></tr><tr><td><code>result_number</code></td><td>::mlir::IntegerAttr</td><td>64-bit signless integer attribute</td></tr></table><h4 id=operands-12>Operands: <a class=headline-hash href=#operands-12>¶</a></h4><table><thead><tr><th style=text-align:center>Operand</th><th>Description</th></tr></thead><tbody><tr><td style=text-align:center><code>target</code></td><td>TransformHandleTypeInterface instance</td></tr></tbody></table><h4 id=results-6>Results: <a class=headline-hash href=#results-6>¶</a></h4><table><thead><tr><th style=text-align:center>Result</th><th>Description</th></tr></thead><tbody><tr><td style=text-align:center><code>consumers</code></td><td>TransformHandleTypeInterface instance</td></tr></tbody></table><h3 id=transformget_defining_op-transformgetdefiningop><code>transform.get_defining_op</code> (transform::GetDefiningOp) <a class=headline-hash href=#transformget_defining_op-transformgetdefiningop>¶</a></h3><p><em>Get handle to the defining op of a value</em></p><p>Syntax:</p><pre tabindex=0><code>operation ::= `transform.get_defining_op` $target attr-dict `:` functional-type(operands, results) </code></pre><p>The handle defined by this Transform op corresponds to the defining op
of the targeted value.</p><p>This transform produces a silenceable failure if the targeted value is a block argument.</p><p>Traits: <code>NavigationTransformOpTrait</code></p><p>Interfaces: <code>MatchOpInterface</code>, <code>MemoryEffectsOpInterface</code>, <code>TransformOpInterface</code></p><h4 id=operands-13>Operands: <a class=headline-hash href=#operands-13>¶</a></h4><table><thead><tr><th style=text-align:center>Operand</th><th>Description</th></tr></thead><tbody><tr><td style=text-align:center><code>target</code></td><td>TransformValueHandleTypeInterface instance</td></tr></tbody></table><h4 id=results-7>Results: <a class=headline-hash href=#results-7>¶</a></h4><table><thead><tr><th style=text-align:center>Result</th><th>Description</th></tr></thead><tbody><tr><td style=text-align:center><code>result</code></td><td>TransformHandleTypeInterface instance</td></tr></tbody></table><h3 id=transformget_operand-transformgetoperandop><code>transform.get_operand</code> (transform::GetOperandOp) <a class=headline-hash href=#transformget_operand-transformgetoperandop>¶</a></h3><p><em>Get a handle to the operand(s) of the targeted op</em></p><p>Syntax:</p><pre tabindex=0><code>operation ::= `transform.get_operand` $target `[`custom<TransformMatchDims>($raw_position_list, $is_inverted, $is_all)`]` attr-dict `:` functional-type(operands, results) </code></pre><p>The handle defined by this Transform op corresponds to the operands of the given <code>target</code> operation specified by the given set of positions. There are three possible modes:</p><ul><li>Position list directly, i.e. <code>%target[0, 1, 2]</code>. This will return the operands at the specified positions.</li><li>Inverted position list, i.e. <code>%target[except(0, 1, 2)]</code>. This will return all operands except those at the given positions.</li><li>All, i.e. <code>%target[all]</code>. 
This will return all operands of the operation.</li></ul><p>This transform produces a silenceable failure if any of the operand indices exceeds the number of operands in the target. It reads the target handle and produces the result handle.</p><p>Traits: <code>NavigationTransformOpTrait</code></p><p>Interfaces: <code>MatchOpInterface</code>, <code>MemoryEffectsOpInterface</code>, <code>TransformOpInterface</code></p><h4 id=attributes-9>Attributes: <a class=headline-hash href=#attributes-9>¶</a></h4><table><tr><th>Attribute</th><th>MLIR Type</th><th>Description</th></tr><tr><td><code>raw_position_list</code></td><td>::mlir::DenseI64ArrayAttr</td><td>i64 dense array attribute</td></tr><tr><td><code>is_inverted</code></td><td>::mlir::UnitAttr</td><td>unit attribute</td></tr><tr><td><code>is_all</code></td><td>::mlir::UnitAttr</td><td>unit attribute</td></tr></table><h4 id=operands-14>Operands: <a class=headline-hash href=#operands-14>¶</a></h4><table><thead><tr><th style=text-align:center>Operand</th><th>Description</th></tr></thead><tbody><tr><td style=text-align:center><code>target</code></td><td>TransformHandleTypeInterface instance</td></tr></tbody></table><h4 id=results-8>Results: <a class=headline-hash href=#results-8>¶</a></h4><table><thead><tr><th style=text-align:center>Result</th><th>Description</th></tr></thead><tbody><tr><td style=text-align:center><code>result</code></td><td>TransformValueHandleTypeInterface instance</td></tr></tbody></table><h3 id=transformget_parent_op-transformgetparentop><code>transform.get_parent_op</code> (transform::GetParentOp) <a class=headline-hash href=#transformget_parent_op-transformgetparentop>¶</a></h3><p><em>Gets handles to the closest parent ops</em></p><p>Syntax:</p><pre tabindex=0><code>operation ::= `transform.get_parent_op` $target attr-dict `:` functional-type(operands, results) </code></pre><p>The handle defined by this Transform op corresponds to the parents of the targeted payload ops (in the same 
order).</p><p>Requirements that parent ops must fulfill can be optionally specified. In that case, for each target op, the closest parent op that fulfills all requirements is returned.</p><ul><li><code>isolated_from_above</code>: the parent op must be isolated from above</li><li><code>allow_empty_results</code>: <code>get_parent_op</code> is allowed to return an empty list and still succeed. In such a case, if <code>get_parent_op</code> fails for any operation in the list, the entire transform returns an empty handle.</li><li><code>op_name</code>: the parent op must have the specified name</li><li><code>nth_parent</code>: get the n-th parent that satisfies the above requirements</li></ul><p>If <code>deduplicate</code> is set, the result handle does not contain any duplicate ops. For example, given the list “(childof(A), childof(B), childof(B), childof(A), childof(B))”, the resulting list will be just “(A, B)”. Note that no other semantic ordering is applied, e.g., “B” may itself be a parent of “A”.
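</p><p>A minimal sketch (the op name and handle names are illustrative):</p><pre tabindex=0><code>// For each payload op in %ops, get the closest enclosing func.func,
// collapsing duplicates into a single handle entry.
%funcs = transform.get_parent_op %ops {op_name = "func.func", deduplicate} : (!transform.any_op) -> !transform.any_op
</code></pre><p>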
This may have an impact on the further transformation applied to the handle produced here.</p><p>If any of the given Payload IR ops has no such suitable parent, then:</p><ul><li>if <code>allow_empty_results</code> is set, the result handle is empty</li><li>otherwise, the transformation produces a silenceable failure.</li></ul><p>Traits: <code>NavigationTransformOpTrait</code></p><p>Interfaces: <code>MatchOpInterface</code>, <code>MemoryEffectsOpInterface</code>, <code>TransformOpInterface</code></p><h4 id=attributes-10>Attributes: <a class=headline-hash href=#attributes-10>¶</a></h4><table><tr><th>Attribute</th><th>MLIR Type</th><th>Description</th></tr><tr><td><code>isolated_from_above</code></td><td>::mlir::UnitAttr</td><td>unit attribute</td></tr><tr><td><code>allow_empty_results</code></td><td>::mlir::UnitAttr</td><td>unit attribute</td></tr><tr><td><code>op_name</code></td><td>::mlir::StringAttr</td><td>string attribute</td></tr><tr><td><code>deduplicate</code></td><td>::mlir::UnitAttr</td><td>unit attribute</td></tr><tr><td><code>nth_parent</code></td><td>::mlir::IntegerAttr</td><td>64-bit signless integer attribute whose value is positive</td></tr></table><h4 id=operands-15>Operands: <a class=headline-hash href=#operands-15>¶</a></h4><table><thead><tr><th style=text-align:center>Operand</th><th>Description</th></tr></thead><tbody><tr><td style=text-align:center><code>target</code></td><td>TransformHandleTypeInterface instance</td></tr></tbody></table><h4 id=results-9>Results: <a class=headline-hash href=#results-9>¶</a></h4><table><thead><tr><th style=text-align:center>Result</th><th>Description</th></tr></thead><tbody><tr><td style=text-align:center><code>parent</code></td><td>TransformHandleTypeInterface instance</td></tr></tbody></table><h3 id=transformget_producer_of_operand-transformgetproducerofoperand><code>transform.get_producer_of_operand</code> (transform::GetProducerOfOperand) <a class=headline-hash 
href=#transformget_producer_of_operand-transformgetproducerofoperand>¶</a></h3><p><em>Get handle to the producer of this operation’s operand number</em></p><p>Syntax:</p><pre tabindex=0><code>operation ::= `transform.get_producer_of_operand` $target `[` $operand_number `]` attr-dict `:` functional-type(operands, results) </code></pre><p>The handle defined by this Transform op corresponds to the operation that produces the SSA value defined by the <code>target</code> and <code>operand_number</code> arguments. If the origin of the SSA value is not an operation (i.e. it is a block argument), the transform produces a silenceable failure. The return handle points to only the subset of successfully produced computational operations, which can be empty.</p><p>Traits: <code>NavigationTransformOpTrait</code></p><p>Interfaces: <code>MatchOpInterface</code>, <code>MemoryEffectsOpInterface</code>, <code>TransformOpInterface</code></p><h4 id=attributes-11>Attributes: <a class=headline-hash href=#attributes-11>¶</a></h4><table><tr><th>Attribute</th><th>MLIR Type</th><th>Description</th></tr><tr><td><code>operand_number</code></td><td>::mlir::IntegerAttr</td><td>64-bit signless integer attribute</td></tr></table><h4 id=operands-16>Operands: <a class=headline-hash href=#operands-16>¶</a></h4><table><thead><tr><th style=text-align:center>Operand</th><th>Description</th></tr></thead><tbody><tr><td style=text-align:center><code>target</code></td><td>TransformHandleTypeInterface instance</td></tr></tbody></table><h4 id=results-10>Results: <a class=headline-hash href=#results-10>¶</a></h4><table><thead><tr><th style=text-align:center>Result</th><th>Description</th></tr></thead><tbody><tr><td style=text-align:center><code>producer</code></td><td>TransformHandleTypeInterface instance</td></tr></tbody></table><h3 id=transformget_result-transformgetresultop><code>transform.get_result</code> (transform::GetResultOp) <a class=headline-hash
href=#transformget_result-transformgetresultop>¶</a></h3><p><em>Get a handle to the result(s) of the targeted op</em></p><p>Syntax:</p><pre tabindex=0><code>operation ::= `transform.get_result` $target `[`custom<TransformMatchDims>($raw_position_list, $is_inverted, $is_all)`]` attr-dict `:` functional-type(operands, results) </code></pre><p>The handle defined by this Transform op corresponds to the results of the given <code>target</code> operation specified by the given set of positions. There are three possible modes:</p><ul><li>Position list directly, i.e. <code>%target[0, 1, 2]</code>. This will return the results at the specified positions.</li><li>Inverted position list, i.e. <code>%target[except(0, 1, 2)]</code>. This will return all results except those at the given positions.</li><li>All, i.e. <code>%target[all]</code>. This will return all results of the operation.</li></ul><p>This transform produces a silenceable failure if any of the result indices exceeds the number of results returned by the target.
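</p><p>For example (handle names are illustrative):</p><pre tabindex=0><code>// Value handle to the first result of each payload op.
%r0 = transform.get_result %target[0] : (!transform.any_op) -> !transform.any_value
// Value handle to all results except the first.
%rest = transform.get_result %target[except(0)] : (!transform.any_op) -> !transform.any_value
</code></pre><p>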
It reads the target handle and produces the result handle.</p><p>Traits: <code>NavigationTransformOpTrait</code></p><p>Interfaces: <code>MemoryEffectsOpInterface</code>, <code>TransformOpInterface</code></p><h4 id=attributes-12>Attributes: <a class=headline-hash href=#attributes-12>¶</a></h4><table><tr><th>Attribute</th><th>MLIR Type</th><th>Description</th></tr><tr><td><code>raw_position_list</code></td><td>::mlir::DenseI64ArrayAttr</td><td>i64 dense array attribute</td></tr><tr><td><code>is_inverted</code></td><td>::mlir::UnitAttr</td><td>unit attribute</td></tr><tr><td><code>is_all</code></td><td>::mlir::UnitAttr</td><td>unit attribute</td></tr></table><h4 id=operands-17>Operands: <a class=headline-hash href=#operands-17>¶</a></h4><table><thead><tr><th style=text-align:center>Operand</th><th>Description</th></tr></thead><tbody><tr><td style=text-align:center><code>target</code></td><td>TransformHandleTypeInterface instance</td></tr></tbody></table><h4 id=results-11>Results: <a class=headline-hash href=#results-11>¶</a></h4><table><thead><tr><th style=text-align:center>Result</th><th>Description</th></tr></thead><tbody><tr><td style=text-align:center><code>result</code></td><td>TransformValueHandleTypeInterface instance</td></tr></tbody></table><h3 id=transformget_type-transformgettypeop><code>transform.get_type</code> (transform::GetTypeOp) <a class=headline-hash href=#transformget_type-transformgettypeop>¶</a></h3><p><em>Get a parameter containing the type of the given value</em></p><p>Syntax:</p><pre tabindex=0><code>operation ::= `transform.get_type` (`elemental` $elemental^)? 
$value attr-dict `:`functional-type(operands, results) </code></pre><p>This operation creates a new Transform parameter containing the type(s) of the value(s) associated with the operand handle.</p><p>This transform never fails.</p><p>Interfaces: <code>MatchOpInterface</code>, <code>MemoryEffectOpInterface</code>, <code>TransformOpInterface</code></p><h4 id=attributes-13>Attributes: <a class=headline-hash href=#attributes-13>¶</a></h4><table><tr><th>Attribute</th><th>MLIR Type</th><th>Description</th></tr><tr><td><code>elemental</code></td><td>::mlir::UnitAttr</td><td>unit attribute</td></tr></table><h4 id=operands-18>Operands: <a class=headline-hash href=#operands-18>¶</a></h4><table><thead><tr><th style=text-align:center>Operand</th><th>Description</th></tr></thead><tbody><tr><td style=text-align:center><code>value</code></td><td>TransformValueHandleTypeInterface instance</td></tr></tbody></table><h4 id=results-12>Results: <a class=headline-hash href=#results-12>¶</a></h4><table><thead><tr><th style=text-align:center>Result</th><th>Description</th></tr></thead><tbody><tr><td style=text-align:center><code>type_param</code></td><td>TransformParamTypeInterface instance</td></tr></tbody></table><h3 id=transforminclude-transformincludeop><code>transform.include</code> (transform::IncludeOp) <a class=headline-hash href=#transforminclude-transformincludeop>¶</a></h3><p><em>Includes a named transform sequence</em></p><p>Syntax:</p><pre tabindex=0><code>operation ::= `transform.include` $target `failures` `(` $failure_propagation_mode `)``(` $operands `)` attr-dict `:` functional-type($operands, $results) </code></pre><p>The application of this transform operation is equivalent to applying the operations contained in the named transform sequence with operands being remapped to block arguments. The behavior of the operation when a transformation in the included named sequence produces a silenceable error is controlled by the <code>failure_propagation_mode</code> attribute. 
When set to <code>propagate</code>, the failure of any nested transformation in the sequence implies immediate failure of the entire sequence with a silenceable error, and no further transformation is attempted. When set to <code>suppress</code>, silenceable errors in nested operations are ignored and further transformations are applied. Beware that even silenceable errors may leave the payload IR in a state unsuitable for further transformations. It is the responsibility of the user to ensure the following transformations are robust enough when errors are suppressed. Definite errors are propagated immediately regardless of the mode. The objects associated with the results of this operation are the same as those associated with the operands of the <code>transform.yield</code> in the referenced named sequence.</p><p>Interfaces: <code>CallOpInterface</code>, <code>MatchOpInterface</code>, <code>MemoryEffectOpInterface</code>, <code>SymbolUserOpInterface</code>, <code>TransformOpInterface</code></p><h4 id=attributes-14>Attributes: <a class=headline-hash href=#attributes-14>¶</a></h4><table><tr><th>Attribute</th><th>MLIR Type</th><th>Description</th></tr><tr><td><code>target</code></td><td>::mlir::SymbolRefAttr</td><td>symbol reference attribute</td></tr><tr><td><code>failure_propagation_mode</code></td><td>::mlir::transform::FailurePropagationModeAttr</td><td><details><summary>Silenceable error propagation policy</summary><p>Enum cases:</p><ul><li>propagate (<code>Propagate</code>)</li><li>suppress (<code>Suppress</code>)</li></ul></details></td></tr></table><h4 id=operands-19>Operands: <a class=headline-hash href=#operands-19>¶</a></h4><table><thead><tr><th style=text-align:center>Operand</th><th>Description</th></tr></thead><tbody><tr><td style=text-align:center><code>operands</code></td><td>variadic of any transform handle or parameter</td></tr></tbody></table><h4 id=results-13>Results: <a class=headline-hash href=#results-13>¶</a></h4><table><thead><tr><th 
style=text-align:center>Result</th><th>Description</th></tr></thead><tbody><tr><td style=text-align:center><code>results</code></td><td>variadic of any transform handle or parameter</td></tr></tbody></table><h3 id=transformmatchoperation_empty-transformmatchoperationemptyop><code>transform.match.operation_empty</code> (transform::MatchOperationEmptyOp) <a class=headline-hash href=#transformmatchoperation_empty-transformmatchoperationemptyop>¶</a></h3><p><em>Matches if the handle is not associated to any op</em></p><p>Syntax:</p><pre tabindex=0><code>operation ::= `transform.match.operation_empty` $operand_handle attr-dict `:` type($operand_handle) </code></pre><p>Succeeds if the handle is not associated to any op.</p><p>Traits: <code>AtMostOneOpMatcher</code></p><p>Interfaces: <code>MatchOpInterface</code>, <code>MemoryEffectsOpInterface</code>, <code>TransformOpInterface</code></p><h4 id=operands-20>Operands: <a class=headline-hash href=#operands-20>¶</a></h4><table><thead><tr><th style=text-align:center>Operand</th><th>Description</th></tr></thead><tbody><tr><td style=text-align:center><code>operand_handle</code></td><td>TransformHandleTypeInterface instance</td></tr></tbody></table><h3 id=transformmatchoperation_name-transformmatchoperationnameop><code>transform.match.operation_name</code> (transform::MatchOperationNameOp) <a class=headline-hash href=#transformmatchoperation_name-transformmatchoperationnameop>¶</a></h3><p><em>Matches a single operation of one of the given kinds</em></p><p>Syntax:</p><pre tabindex=0><code>operation ::= `transform.match.operation_name` $operand_handle $op_names attr-dict `:` type($operand_handle) </code></pre><p>Succeeds if the operation associated with the operand handle has one of the given operation names. 
Produces a silenceable failure otherwise.</p><p>If more than one payload operation is associated with the operand handle, produces a definite failure.</p><p>Traits: <code>SingleOpMatcher</code></p><p>Interfaces: <code>MatchOpInterface</code>, <code>MemoryEffectsOpInterface</code>, <code>TransformOpInterface</code></p><h4 id=attributes-15>Attributes: <a class=headline-hash href=#attributes-15>¶</a></h4><table><tr><th>Attribute</th><th>MLIR Type</th><th>Description</th></tr><tr><td><code>op_names</code></td><td>::mlir::ArrayAttr</td><td>string array attribute</td></tr></table><h4 id=operands-21>Operands: <a class=headline-hash href=#operands-21>¶</a></h4><table><thead><tr><th style=text-align:center>Operand</th><th>Description</th></tr></thead><tbody><tr><td style=text-align:center><code>operand_handle</code></td><td>TransformHandleTypeInterface instance</td></tr></tbody></table><h3 id=transformmatchparamcmpi-transformmatchparamcmpiop><code>transform.match.param.cmpi</code> (transform::MatchParamCmpIOp) <a class=headline-hash href=#transformmatchparamcmpi-transformmatchparamcmpiop>¶</a></h3><p><em>Matches if two parameter lists are associated with the same value</em></p><p>Syntax:</p><pre tabindex=0><code>operation ::= `transform.match.param.cmpi` $predicate $param `,` $reference attr-dict `:` type($param) </code></pre><p>Succeeds if all of the co-indexed values associated with the given parameters relate as specified by the predicate (greater than, less than, equal to, or their combinations). Comparison treats all values as signed. 
Produces a silenceable failure otherwise.</p><p>Traits: <code>SameTypeOperands</code></p><p>Interfaces: <code>MatchOpInterface</code>, <code>MemoryEffectOpInterface</code>, <code>TransformOpInterface</code></p><h4 id=attributes-16>Attributes: <a class=headline-hash href=#attributes-16>¶</a></h4><table><tr><th>Attribute</th><th>MLIR Type</th><th>Description</th></tr><tr><td><code>predicate</code></td><td>::mlir::transform::MatchCmpIPredicateAttr</td><td><details><summary>allowed 32-bit signless integer cases: 0, 1, 2, 3, 4, 5</summary><p>Enum cases:</p><ul><li>eq (<code>eq</code>)</li><li>ne (<code>ne</code>)</li><li>lt (<code>lt</code>)</li><li>le (<code>le</code>)</li><li>gt (<code>gt</code>)</li><li>ge (<code>ge</code>)</li></ul></details></td></tr></table><h4 id=operands-22>Operands: <a class=headline-hash href=#operands-22>¶</a></h4><table><thead><tr><th style=text-align:center>Operand</th><th>Description</th></tr></thead><tbody><tr><td style=text-align:center><code>param</code></td><td>TransformParamTypeInterface instance</td></tr><tr><td style=text-align:center><code>reference</code></td><td>TransformParamTypeInterface instance</td></tr></tbody></table><h3 id=transformmerge_handles-transformmergehandlesop><code>transform.merge_handles</code> (transform::MergeHandlesOp) <a class=headline-hash href=#transformmerge_handles-transformmergehandlesop>¶</a></h3><p><em>Merges handles into one pointing to the union of payload ops</em></p><p>Syntax:</p><pre tabindex=0><code>operation ::= `transform.merge_handles` (`deduplicate` $deduplicate^)? $handles attr-dict `:` type($result) </code></pre><p>Creates a new Transform IR handle value that points to the same Payload IR operations/values/parameters as the operand handles. The Payload IR elements are listed in the same order as they are in the operand handles, grouped by operand handle, e.g., all Payload IR associated with the first handle comes first, then all Payload IR associated with the second handle and so on. 
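</p><p>For example, merging two operation handles into one (illustrative sketch; handle names are hypothetical):</p><pre tabindex=0><code>// %merged lists the payload ops of %fills_a followed by those of %fills_b.
%merged = transform.merge_handles %fills_a, %fills_b : !transform.any_op
</code></pre><p>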
If <code>deduplicate</code> is set, do not add the given Payload IR operation, value, or parameter more than once to the final list regardless of it coming from the same or different handles. Consumes the operands and produces a new handle.</p><p>Traits: <code>SameOperandsAndResultType</code></p><p>Interfaces: <code>MatchOpInterface</code>, <code>MemoryEffectOpInterface</code>, <code>TransformOpInterface</code></p><h4 id=attributes-17>Attributes: <a class=headline-hash href=#attributes-17>¶</a></h4><table><tr><th>Attribute</th><th>MLIR Type</th><th>Description</th></tr><tr><td><code>deduplicate</code></td><td>::mlir::UnitAttr</td><td>unit attribute</td></tr></table><h4 id=operands-23>Operands: <a class=headline-hash href=#operands-23>¶</a></h4><table><thead><tr><th style=text-align:center>Operand</th><th>Description</th></tr></thead><tbody><tr><td style=text-align:center><code>handles</code></td><td>variadic of any transform handle or parameter</td></tr></tbody></table><h4 id=results-14>Results: <a class=headline-hash href=#results-14>¶</a></h4><table><thead><tr><th style=text-align:center>Result</th><th>Description</th></tr></thead><tbody><tr><td style=text-align:center><code>result</code></td><td>any transform handle or parameter</td></tr></tbody></table><h3 id=transformnamed_sequence-transformnamedsequenceop><code>transform.named_sequence</code> (transform::NamedSequenceOp) <a class=headline-hash href=#transformnamed_sequence-transformnamedsequenceop>¶</a></h3><p><em>Named transform sequence that can be included elsewhere</em></p><p>Defines a named (callable, function-like) sequence of other Transform dialect operations that can be included using <code>transform.include</code> as part of another Transform dialect construct. This sequence is not processed immediately but rather dispatched to when the inclusion is processed. The arguments and results can be used to communicate a subset of mapping into the named sequence. 
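</p><p>A minimal definition and inclusion might look like the following sketch (the symbol and handle names are hypothetical):</p><pre tabindex=0><code>module attributes {transform.with_named_sequence} {
  // A reusable, callable sequence that prints its argument.
  transform.named_sequence @print_target(%arg0: !transform.any_op {transform.readonly}) {
    transform.print %arg0 : !transform.any_op
    transform.yield
  }

  transform.sequence failures(propagate) {
  ^bb0(%arg0: !transform.any_op):
    // Dispatch to the named sequence defined above.
    transform.include @print_target failures(propagate) (%arg0) : (!transform.any_op) -> ()
  }
}
</code></pre><p>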
The sequence must consist of a single block and end with a <code>transform.yield</code> terminator. The operands of the terminator become the results of the <code>transform.include</code>.</p><p>When dispatched to, the operations in the named sequence are executed one by one, similarly to the regular unnamed sequence. The failure propagation mode is specified on the <code>transform.include</code>. Different inclusions may use different failure propagation modes. This transform operation always succeeds by itself, but the inclusion may fail if any of the operations fail.</p><p>Named sequences can only appear at the top-level of the Transform dialect nesting structure. That is, they cannot be nested in other Transform dialect operations. Furthermore, one of the ancestors must have the <code>SymbolTable</code> trait and have the <code>transform.with_named_sequence</code> attribute attached.</p><p>Named sequences may include other named sequences via <code>transform.include</code>, but recursion is <em>not</em> allowed.</p><p>Traits: <code>IsolatedFromAbove</code></p><p>Interfaces: <code>CallableOpInterface</code>, <code>FunctionOpInterface</code>, <code>MemoryEffectOpInterface</code>, <code>Symbol</code>, <code>TransformOpInterface</code></p><h4 id=attributes-18>Attributes: <a class=headline-hash href=#attributes-18>¶</a></h4><table><tr><th>Attribute</th><th>MLIR Type</th><th>Description</th></tr><tr><td><code>sym_name</code></td><td>::mlir::StringAttr</td><td>string attribute</td></tr><tr><td><code>function_type</code></td><td>::mlir::TypeAttr</td><td>function type attribute</td></tr><tr><td><code>sym_visibility</code></td><td>::mlir::StringAttr</td><td>string attribute</td></tr><tr><td><code>arg_attrs</code></td><td>::mlir::ArrayAttr</td><td>Array of dictionary attributes</td></tr><tr><td><code>res_attrs</code></td><td>::mlir::ArrayAttr</td><td>Array of dictionary attributes</td></tr></table><h3 
id=transformnum_associations-transformnumassociationsop><code>transform.num_associations</code> (transform::NumAssociationsOp) <a class=headline-hash href=#transformnum_associations-transformnumassociationsop>¶</a></h3><p><em>Returns the number of payload objects associated with the argument</em></p><p>Syntax:</p><pre tabindex=0><code>operation ::= `transform.num_associations` $handle attr-dict `:` functional-type(operands, results) </code></pre><p>Given an argument, handle or parameter, returns a new parameter associated with a single 64-bit number that corresponds to the number of payload objects (operations or values for a handle, attributes for a parameter) associated with the argument.</p><p>Always succeeds.</p><p>Traits: <code>ParamProducerTransformOpTrait</code></p><p>Interfaces: <code>MatchOpInterface</code>, <code>MemoryEffectsOpInterface</code>, <code>TransformOpInterface</code></p><h4 id=operands-24>Operands: <a class=headline-hash href=#operands-24>¶</a></h4><table><thead><tr><th style=text-align:center>Operand</th><th>Description</th></tr></thead><tbody><tr><td style=text-align:center><code>handle</code></td><td>any transform handle or parameter</td></tr></tbody></table><h4 id=results-15>Results: <a class=headline-hash href=#results-15>¶</a></h4><table><thead><tr><th style=text-align:center>Result</th><th>Description</th></tr></thead><tbody><tr><td style=text-align:center><code>num</code></td><td>TransformParamTypeInterface instance</td></tr></tbody></table><h3 id=transformparamconstant-transformparamconstantop><code>transform.param.constant</code> (transform::ParamConstantOp) <a class=headline-hash href=#transformparamconstant-transformparamconstantop>¶</a></h3><p><em>Produces a new transform dialect parameter value associated with the given attribute</em></p><p>Syntax:</p><pre tabindex=0><code>operation ::= `transform.param.constant` $value attr-dict `->` type($param) </code></pre><p>Produces a new transform dialect parameter associated with the 
singleton list containing the given attribute. The operation itself always succeeds, but the general association check may fail if the parameter type does not accept the given kind of attribute as valid.</p><p>Traits: <code>ParamProducerTransformOpTrait</code></p><p>Interfaces: <code>MatchOpInterface</code>, <code>MemoryEffectsOpInterface</code>, <code>TransformOpInterface</code></p><h4 id=attributes-19>Attributes: <a class=headline-hash href=#attributes-19>¶</a></h4><table><tr><th>Attribute</th><th>MLIR Type</th><th>Description</th></tr><tr><td><code>value</code></td><td>::mlir::Attribute</td><td>any attribute</td></tr></table><h4 id=results-16>Results: <a class=headline-hash href=#results-16>¶</a></h4><table><thead><tr><th style=text-align:center>Result</th><th>Description</th></tr></thead><tbody><tr><td style=text-align:center><code>param</code></td><td>TransformParamTypeInterface instance</td></tr></tbody></table><h3 id=transformprint-transformprintop><code>transform.print</code> (transform::PrintOp) <a class=headline-hash href=#transformprint-transformprintop>¶</a></h3><p><em>Dump each payload op</em></p><p>Syntax:</p><pre tabindex=0><code>operation ::= `transform.print` $target attr-dict (`:` type($target)^)? </code></pre><p>Prints each payload op that is associated with the <code>target</code> operand to <code>stdout</code>. It also prints the <code>name</code> string attribute. If no target is specified, the top-level op is dumped.</p><p>This op is useful for printf-style debugging.</p><p>Supported printing flag attributes:</p><ul><li><code>assume_verified</code> – skips verification when the unit attribute is specified. This improves performance but may lead to crashes and unexpected behavior when the printed payload op is invalid.</li><li><code>use_local_scope</code> – prints in local scope when the unit attribute is specified. 
This improves performance but may not be identical to printing within the full module.</li><li><code>skip_regions</code> – does not print regions of operations when the unit attribute is specified.</li></ul><p>Interfaces: <code>MatchOpInterface</code>, <code>MemoryEffectOpInterface</code>, <code>TransformOpInterface</code></p><h4 id=attributes-20>Attributes: <a class=headline-hash href=#attributes-20>¶</a></h4><table><tr><th>Attribute</th><th>MLIR Type</th><th>Description</th></tr><tr><td><code>name</code></td><td>::mlir::StringAttr</td><td>string attribute</td></tr><tr><td><code>assume_verified</code></td><td>::mlir::UnitAttr</td><td>unit attribute</td></tr><tr><td><code>use_local_scope</code></td><td>::mlir::UnitAttr</td><td>unit attribute</td></tr><tr><td><code>skip_regions</code></td><td>::mlir::UnitAttr</td><td>unit attribute</td></tr></table><h4 id=operands-25>Operands: <a class=headline-hash href=#operands-25>¶</a></h4><table><thead><tr><th style=text-align:center>Operand</th><th>Description</th></tr></thead><tbody><tr><td style=text-align:center><code>target</code></td><td>TransformHandleTypeInterface instance</td></tr></tbody></table><h3 id=transformreplicate-transformreplicateop><code>transform.replicate</code> (transform::ReplicateOp) <a class=headline-hash href=#transformreplicate-transformreplicateop>¶</a></h3><p><em>Lists payload ops multiple times in the new handle</em></p><p>Syntax:</p><pre tabindex=0><code>operation ::= `transform.replicate` `num` `(` $pattern `)` $handles attr-dict `:` type($pattern) `,` type($handles) </code></pre><p>Produces a new handle associated with a list of payload IR ops that is computed by repeating the list of payload IR ops associated with the operand handle as many times as the “pattern” handle has associated operations. 
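</p><p>Using the printed syntax above, a replication might be written as the following sketch (handle names are hypothetical):</p><pre tabindex=0><code>// %repeated lists the ops of %ops once per op associated with %pattern.
%repeated = transform.replicate num(%pattern) %ops : !transform.any_op, !transform.any_op
</code></pre><p>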
For example, if pattern is associated with [op1, op2] and the operand handle is associated with [op3, op4, op5], the resulting handle will be associated with [op3, op4, op5, op3, op4, op5].</p><p>This transformation is useful to “align” the sizes of payload IR lists before a transformation that expects, e.g., identically-sized lists. For example, a transformation may be parameterized by the same notional per-target size computed at runtime and supplied as another handle; replication allows this size to be computed only once and reused for every target instead of replicating the computation itself.</p><p>Note that it is undesirable to pass a handle with duplicate operations to an operation that consumes the handle. Handle consumption often indicates that the associated payload IR ops are destroyed, so having the same op listed more than once will lead to a double-free. Single-operand MergeHandlesOp may be used to deduplicate the associated list of payload IR ops when necessary. Furthermore, a combination of ReplicateOp and MergeHandlesOp can be used to construct arbitrary lists with repetitions.</p><p>Interfaces: <code>MemoryEffectOpInterface</code>, <code>TransformOpInterface</code></p><h4 id=operands-26>Operands: <a class=headline-hash href=#operands-26>¶</a></h4><table><thead><tr><th style=text-align:center>Operand</th><th>Description</th></tr></thead><tbody><tr><td style=text-align:center><code>pattern</code></td><td>TransformHandleTypeInterface instance</td></tr><tr><td style=text-align:center><code>handles</code></td><td>variadic of any transform handle or parameter</td></tr></tbody></table><h4 id=results-17>Results: <a class=headline-hash href=#results-17>¶</a></h4><table><thead><tr><th style=text-align:center>Result</th><th>Description</th></tr></thead><tbody><tr><td style=text-align:center><code>replicated</code></td><td>variadic of any transform handle or parameter</td></tr></tbody></table><h3 id=transformselect-transformselectop><code>transform.select</code> 
(transform::SelectOp) <a class=headline-hash href=#transformselect-transformselectop>¶</a></h3><p><em>Select payload ops by name</em></p><p>Syntax:</p><pre tabindex=0><code>operation ::= `transform.select` $op_name `in` $target attr-dict `:` functional-type(operands, results) </code></pre><p>The handle defined by this Transform op corresponds to all operations among <code>target</code> that have the specified properties. Currently the following properties are supported:</p><ul><li><code>op_name</code>: The op must have the specified name.</li></ul><p>The result payload ops are in the same relative order as the targeted ops. This transform op reads the <code>target</code> handle and produces the <code>result</code> handle. It reads the payload, but does not modify it.</p><p>Traits: <code>NavigationTransformOpTrait</code></p><p>Interfaces: <code>MemoryEffectsOpInterface</code>, <code>TransformOpInterface</code></p><h4 id=attributes-21>Attributes: <a class=headline-hash href=#attributes-21>¶</a></h4><table><tr><th>Attribute</th><th>MLIR Type</th><th>Description</th></tr><tr><td><code>op_name</code></td><td>::mlir::StringAttr</td><td>string attribute</td></tr></table><h4 id=operands-27>Operands: <a class=headline-hash href=#operands-27>¶</a></h4><table><thead><tr><th style=text-align:center>Operand</th><th>Description</th></tr></thead><tbody><tr><td style=text-align:center><code>target</code></td><td>TransformHandleTypeInterface instance</td></tr></tbody></table><h4 id=results-18>Results: <a class=headline-hash href=#results-18>¶</a></h4><table><thead><tr><th style=text-align:center>Result</th><th>Description</th></tr></thead><tbody><tr><td style=text-align:center><code>result</code></td><td>TransformHandleTypeInterface instance</td></tr></tbody></table><h3 id=transformsequence-transformsequenceop><code>transform.sequence</code> (transform::SequenceOp) <a class=headline-hash href=#transformsequence-transformsequenceop>¶</a></h3><p><em>Contains a sequence of other 
transform ops to apply</em></p><p>Syntax:</p><pre tabindex=0><code>operation ::= `transform.sequence` custom<SequenceOpOperands>($root, type($root), $extra_bindings, type($extra_bindings)) (`->` type($results)^)? `failures` `(` $failure_propagation_mode `)` attr-dict-with-keyword regions </code></pre><p>The transformations indicated by the sequence are applied in order of their appearance. Each value produced by a transformation within the sequence corresponds to a group of operations or values in the payload IR, or to a group of parameters, depending on the type of the value. The behavior of the operation when a nested transformation produces a silenceable error is controlled by the <code>failure_propagation_mode</code> attribute. When set to <code>propagate</code>, the failure of any nested transformation in the sequence implies immediate failure of the entire sequence with a silenceable error, and no further transformation is attempted. When set to <code>suppress</code>, silenceable errors in nested operations are ignored and further transformations are applied. Beware that even silenceable errors may leave the payload IR in a state unsuitable for further transformations. It is the responsibility of the caller to ensure the following transformations are robust enough when errors are suppressed. Definite errors reported by nested transformations abort the sequence regardless of the propagation mode. The set of modes may be extended in the future, e.g., to collect silenceable errors and report them after attempting all transformations in the sequence.</p><p>The entry block of this operation has a single argument that maps to either the operand if provided or the top-level container operation of the payload IR, typically the root operation of the pass interpreting the transform dialect. Operand omission is only allowed for sequences not contained in another sequence.</p><p>The type of the block argument must match the type of the operand. 
If the sequence is a top-level transform (without an operand), it can be used for matching operations of the specified type within the top-level container payload IR (including the container op itself). E.g.:</p><div class=highlight><pre tabindex=0 class=chroma><code class=language-mlir data-lang=mlir><span class=line><span class=cl>transform<span class=p>.</span>sequence failures<span class=p>(</span>propagate<span class=p>)</span> <span class=p>{</span> </span></span><span class=line><span class=cl><span class=nl>^bb1</span><span class=p>(</span><span class=nv>%arg1</span><span class=p>:</span> <span class=p>!</span>transform<span class=p>.</span>any_op<span class=p>):</span> </span></span><span class=line><span class=cl> <span class=c>// %arg1 is mapped to the top-level container of the payload IR, which is </span></span></span><span class=line><span class=cl><span class=c></span> <span class=c>// typically a module </span></span></span><span class=line><span class=cl><span class=c></span><span class=p>}</span> </span></span><span class=line><span class=cl> </span></span><span class=line><span class=cl>transform<span class=p>.</span>sequence failures<span class=p>(</span>propagate<span class=p>)</span> <span class=p>{</span> </span></span><span class=line><span class=cl><span class=nl>^bb1</span><span class=p>(</span><span class=nv>%arg1</span><span class=p>:</span> <span class=p>!</span>transform<span class=p>.</span>op<span class=p><</span><span class=s>"func.func"</span><span class=p>>):</span> </span></span><span class=line><span class=cl> <span class=c>// %arg1 is mapped to all "func.func" ops within and including the </span></span></span><span class=line><span class=cl><span class=c></span> <span class=c>// top-level container of the payload IR. Nested operations that have the </span></span></span><span class=line><span class=cl><span class=c></span> <span class=c>// specified op type are not included. 
</span></span></span><span class=line><span class=cl><span class=c></span><span class=p>}</span> </span></span></code></pre></div><p>The body of the sequence terminates with an implicit or explicit <code>transform.yield</code> op. The operands of the terminator are returned as the results of the sequence op.</p><p>Traits: <code>AttrSizedOperandSegments</code>, <code>PossibleTopLevelTransformOpTrait</code>, <code>SingleBlockImplicitTerminator<::mlir::transform::YieldOp></code>, <code>SingleBlock</code></p><p>Interfaces: <code>MatchOpInterface</code>, <code>MemoryEffectOpInterface</code>, <code>OpAsmOpInterface</code>, <code>RegionBranchOpInterface</code>, <code>TransformOpInterface</code></p><h4 id=attributes-22>Attributes: <a class=headline-hash href=#attributes-22>¶</a></h4><table><tr><th>Attribute</th><th>MLIR Type</th><th>Description</th></tr><tr><td><code>failure_propagation_mode</code></td><td>::mlir::transform::FailurePropagationModeAttr</td><td><details><summary>Silenceable error propagation policy</summary><p>Enum cases:</p><ul><li>propagate (<code>Propagate</code>)</li><li>suppress (<code>Suppress</code>)</li></ul></details></td></tr></table><h4 id=operands-28>Operands: <a class=headline-hash href=#operands-28>¶</a></h4><table><thead><tr><th style=text-align:center>Operand</th><th>Description</th></tr></thead><tbody><tr><td style=text-align:center><code>root</code></td><td>TransformHandleTypeInterface instance</td></tr><tr><td style=text-align:center><code>extra_bindings</code></td><td>variadic of any transform handle or parameter</td></tr></tbody></table><h4 id=results-19>Results: <a class=headline-hash href=#results-19>¶</a></h4><table><thead><tr><th style=text-align:center>Result</th><th>Description</th></tr></thead><tbody><tr><td style=text-align:center><code>results</code></td><td>variadic of TransformHandleTypeInterface instance</td></tr></tbody></table><h3 id=transformsplit_handle-transformsplithandleop><code>transform.split_handle</code> 
(transform::SplitHandleOp) <a class=headline-hash href=#transformsplit_handle-transformsplithandleop>¶</a></h3><p><em>Splits a handle of payload ops into handles with a single op</em></p><p>Syntax:</p><pre tabindex=0><code>operation ::= `transform.split_handle` $handle attr-dict `:` functional-type(operands, results) </code></pre><p>Splits <code>handle</code> into one or multiple handles, as specified by the number of results of this operation. <code>handle</code> should be mapped to as many payload ops as there are results. Otherwise, this transform produces a silenceable failure by default. Each result handle is mapped to exactly one payload op. The order of the payload ops is preserved, i.e., the i-th payload op is mapped to the i-th result handle.</p><p>This operation is useful for ensuring a statically known number of operations are tracked by the source <code>handle</code> and for extracting them into individual handles that can be further manipulated in isolation.</p><p>If there are more payload ops than results, the remaining ops are mapped to the result with index <code>overflow_result</code>. If no <code>overflow_result</code> is specified, the transform produces a silenceable failure.</p><p>If there are fewer payload ops than results, the transform produces a silenceable failure if <code>fail_on_payload_too_small</code> is set to “true”. Otherwise, it succeeds and the remaining result handles are not mapped to any op. 
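</p><p>For example, splitting a handle known to hold exactly two payload ops (illustrative sketch; handle names are hypothetical):</p><pre tabindex=0><code>// %first is mapped to the first payload op of %pair, %second to the second.
%first, %second = transform.split_handle %pair : (!transform.any_op) -> (!transform.any_op, !transform.any_op)
</code></pre><p>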
It also succeeds if <code>handle</code> is empty and <code>pass_through_empty_handle</code> is set to “true”, regardless of <code>fail_on_payload_too_small</code>.</p><p>Traits: <code>FunctionalStyleTransformOpTrait</code></p><p>Interfaces: <code>MemoryEffectOpInterface</code>, <code>TransformOpInterface</code></p><h4 id=attributes-23>Attributes: <a class=headline-hash href=#attributes-23>¶</a></h4><table><tr><th>Attribute</th><th>MLIR Type</th><th>Description</th></tr><tr><td><code>pass_through_empty_handle</code></td><td>::mlir::BoolAttr</td><td>bool attribute</td></tr><tr><td><code>fail_on_payload_too_small</code></td><td>::mlir::BoolAttr</td><td>bool attribute</td></tr><tr><td><code>overflow_result</code></td><td>::mlir::IntegerAttr</td><td>64-bit signless integer attribute</td></tr></table><h4 id=operands-29>Operands: <a class=headline-hash href=#operands-29>¶</a></h4><table><thead><tr><th style=text-align:center>Operand</th><th>Description</th></tr></thead><tbody><tr><td style=text-align:center><code>handle</code></td><td>TransformHandleTypeInterface instance</td></tr></tbody></table><h4 id=results-20>Results: <a class=headline-hash href=#results-20>¶</a></h4><table><thead><tr><th style=text-align:center>Result</th><th>Description</th></tr></thead><tbody><tr><td style=text-align:center><code>results</code></td><td>variadic of TransformHandleTypeInterface instance</td></tr></tbody></table><h3 id=transformverify-transformverifyop><code>transform.verify</code> (transform::VerifyOp) <a class=headline-hash href=#transformverify-transformverifyop>¶</a></h3><p><em>Verifies the targeted ops</em></p><p>Syntax:</p><pre tabindex=0><code>operation ::= `transform.verify` $target attr-dict `:` type($target) </code></pre><p>This transform verifies the targeted ops. If at least one op fails to verify, the transform produces a definite failure.</p><p>Note: This op was designed for debugging purposes and should be used like an assertion. 
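</p><p>For example (illustrative sketch; the handle name is hypothetical):</p><pre tabindex=0><code>// Definite failure if any op associated with %tiled does not verify.
transform.verify %tiled : !transform.any_op
</code></pre><p>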
It is intentional that this op produces a definite failure and not a silenceable one. Correctness of the program should not depend on this op.</p><p>This transform reads the target handle.</p><p>Traits: <code>ReportTrackingListenerFailuresOpTrait</code>, <code>TransformEachOpTrait</code></p><p>Interfaces: <code>MemoryEffectOpInterface</code>, <code>TransformOpInterface</code></p><h4 id=operands-30>Operands: <a class=headline-hash href=#operands-30>¶</a></h4><table><thead><tr><th style=text-align:center>Operand</th><th>Description</th></tr></thead><tbody><tr><td style=text-align:center><code>target</code></td><td>TransformHandleTypeInterface instance</td></tr></tbody></table><h3 id=transformyield-transformyieldop><code>transform.yield</code> (transform::YieldOp) <a class=headline-hash href=#transformyield-transformyieldop>¶</a></h3><p><em>Yields operation handles from a transform IR region</em></p><p>Syntax:</p><pre tabindex=0><code>operation ::= `transform.yield` operands attr-dict (`:` type($operands)^)? </code></pre><p>This terminator operation yields operation handles from regions of the transform IR ops back to the containing op. 
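</p><p>For example, returning a handle from the enclosing region (illustrative sketch; the handle name is hypothetical):</p><pre tabindex=0><code>// %matched becomes a result of the containing op.
transform.yield %matched : !transform.any_op
</code></pre><p>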
It is not itself associated with any transformation on the payload IR and is used for flow purposes only.</p><p>Traits: <code>Terminator</code></p><p>Interfaces: <code>MemoryEffectOpInterface</code></p><h4 id=operands-31>Operands: <a class=headline-hash href=#operands-31>¶</a></h4><table><thead><tr><th style=text-align:center>Operand</th><th>Description</th></tr></thead><tbody><tr><td style=text-align:center><code>operands</code></td><td>variadic of any transform handle or parameter</td></tr></tbody></table><h2 id=affine-transform-operations>Affine Transform Operations <a class=headline-hash href=#affine-transform-operations>¶</a></h2><p><a href=https://github.com/llvm/llvm-project/blob/main/mlir/include/mlir/Dialect/Affine/TransformOps/AffineTransformOps.td>source</a></p><h3 id=transformaffinesimplify_bounded_affine_ops-transformsimplifyboundedaffineopsop><code>transform.affine.simplify_bounded_affine_ops</code> (transform::SimplifyBoundedAffineOpsOp) <a class=headline-hash href=#transformaffinesimplify_bounded_affine_ops-transformsimplifyboundedaffineopsop>¶</a></h3><p>Syntax:</p><pre tabindex=0><code>operation ::= `transform.affine.simplify_bounded_affine_ops` $target `with` `[` ($bounded_values^ `:` type($bounded_values))? `]` `within` $lower_bounds `and` $upper_bounds attr-dict `:` type($target) </code></pre><p>Simplify the targeted affine.min / affine.max ops given the supplied lower and upper bounds for values that may be used as target op operands.</p><p>Example:</p><pre tabindex=0><code>%0 = transform.structured.match ops{["affine.min", "affine.max"]} in %arg1
%1 = transform.structured.match ops{["gpu.lane_id"]} in %arg1
transform.affine.simplify_bounded_affine_ops %0 with [%1] within [0] and [32]

// Multiple bounds can be specified.
transform.affine.simplify_bounded_affine_ops %0 with [%1, %2] within [0, 5] and [32, 50]
</code></pre><p>Bounded op handles (<code>%1</code> and <code>%2</code>) must be mapped to ops that have a single result of index type. 
The sets of target ops and bounded ops must not overlap.</p><h4 id=return-modes-1>Return modes <a class=headline-hash href=#return-modes-1>¶</a></h4><p>Target ops must be affine.min or affine.max ops. This transform consumes the target handle and does not produce any handle. It reads the bounded op handles.</p><p>TODO: Support affine.apply targets. TODO: Allow mixed PDL_Operation/int64_t for lower_bounds and upper_bounds.</p><p>Interfaces: <code>MemoryEffectOpInterface</code>, <code>TransformOpInterface</code></p><h4 id=attributes-24>Attributes: <a class=headline-hash href=#attributes-24>¶</a></h4><table><tr><th>Attribute</th><th>MLIR Type</th><th>Description</th></tr><tr><td><code>lower_bounds</code></td><td>::mlir::DenseI64ArrayAttr</td><td>i64 dense array attribute</td></tr><tr><td><code>upper_bounds</code></td><td>::mlir::DenseI64ArrayAttr</td><td>i64 dense array attribute</td></tr></table><h4 id=operands-32>Operands: <a class=headline-hash href=#operands-32>¶</a></h4><table><thead><tr><th style=text-align:center>Operand</th><th>Description</th></tr></thead><tbody><tr><td style=text-align:center><code>target</code></td><td>TransformHandleTypeInterface instance</td></tr><tr><td style=text-align:center><code>bounded_values</code></td><td>variadic of TransformHandleTypeInterface instance</td></tr></tbody></table><h2 id=bufferization-transform-operations>Bufferization Transform Operations <a class=headline-hash href=#bufferization-transform-operations>¶</a></h2><p><a href=https://github.com/llvm/llvm-project/blob/main/mlir/include/mlir/Dialect/Bufferization/TransformOps/BufferizationTransformOps.td>source</a></p><h3 id=transformbufferizationbuffer_loop_hoisting-transformbufferloophoistingop><code>transform.bufferization.buffer_loop_hoisting</code> (transform::BufferLoopHoistingOp) <a class=headline-hash href=#transformbufferizationbuffer_loop_hoisting-transformbufferloophoistingop>¶</a></h3><p>Syntax:</p><pre tabindex=0><code>operation ::= 
`transform.bufferization.buffer_loop_hoisting` $target attr-dict `:` type($target) </code></pre><p>Hoist buffer allocations (“memref.alloc” and “memref.alloca”) from loops within the targeted op. This transform assumes that there are no buffer deallocation ops in the IR.</p><p>This transform reads the <code>target</code> handle and modifies the payload.</p><p>Traits: <code>TransformEachOpTrait</code></p><p>Interfaces: <code>MemoryEffectOpInterface</code>, <code>TransformOpInterface</code></p><h4 id=operands-33>Operands: <a class=headline-hash href=#operands-33>¶</a></h4><table><thead><tr><th style=text-align:center>Operand</th><th>Description</th></tr></thead><tbody><tr><td style=text-align:center><code>target</code></td><td>TransformHandleTypeInterface instance</td></tr></tbody></table><h3 id=transformbufferizationeliminate_empty_tensors-transformeliminateemptytensorsop><code>transform.bufferization.eliminate_empty_tensors</code> (transform::EliminateEmptyTensorsOp) <a class=headline-hash href=#transformbufferizationeliminate_empty_tensors-transformeliminateemptytensorsop>¶</a></h3><p>Syntax:</p><pre tabindex=0><code>operation ::= `transform.bufferization.eliminate_empty_tensors` $target attr-dict `:` type($target) </code></pre><p>Try to eliminate all <code>tensor.empty</code> ops within the targeted op by replacing them with another destination tensor.</p><p>“tensor.empty” ops cannot be bufferized. They can either be converted to “bufferization.alloc_tensor” or replaced with another tensor (via this transform). “tensor.empty” does not specify the contents of the returned tensor so their results can be replaced with arbitrary tensor values as long as the dimensions match.</p><p>This transformation looks for subset ops that insert a tensor that originates from a “tensor.empty” (as per the reverse use-def chain). 
Such “tensor.empty” ops are replaced with the destination subset.</p><p>Example:</p><pre tabindex=0><code>%0 = tensor.empty() : tensor<5xf32>
%1 = linalg.fill ... outs(%0)
%2 = tensor.insert_slice %1 into %t[1][5][1]
</code></pre><p>Is rewritten with:</p><pre tabindex=0><code>%0 = tensor.extract_slice %t[1][5][1]
%1 = linalg.fill ... outs(%0)
%2 = tensor.insert_slice %1 into %t[1][5][1]
</code></pre><p>In the above example, the subset op is “tensor.insert_slice”. When tracing back the reverse use-def chain of the source, we end up at a “tensor.empty” op.</p><p>The above example can bufferize without an allocation (in the absence of other conflicts) because there is no longer a <code>tensor.empty</code> op.</p><p>See <code>-eliminate-empty-tensors</code> for more details.</p><h4 id=return-modes-2>Return modes <a class=headline-hash href=#return-modes-2>¶</a></h4><p>This transform reads the target handle and modifies the payload. It does not produce any handle.</p><p>Interfaces: <code>MemoryEffectOpInterface</code>, <code>TransformOpInterface</code></p><h4 id=operands-34>Operands: <a class=headline-hash href=#operands-34>¶</a></h4><table><thead><tr><th style=text-align:center>Operand</th><th>Description</th></tr></thead><tbody><tr><td style=text-align:center><code>target</code></td><td>TransformHandleTypeInterface instance</td></tr></tbody></table><h3 id=transformbufferizationempty_tensor_to_alloc_tensor-transformemptytensortoalloctensorop><code>transform.bufferization.empty_tensor_to_alloc_tensor</code> (transform::EmptyTensorToAllocTensorOp) <a class=headline-hash href=#transformbufferizationempty_tensor_to_alloc_tensor-transformemptytensortoalloctensorop>¶</a></h3><p>Syntax:</p><pre tabindex=0><code>operation ::= `transform.bufferization.empty_tensor_to_alloc_tensor` $target attr-dict `:` functional-type(operands, results) </code></pre><p>Replace a tensor.empty with a bufferization.alloc_tensor.</p><h4 id=return-modes-3>Return modes <a class=headline-hash 
href=#return-modes-3>¶</a></h4><p>This operation consumes the <code>target</code> handle and produces the <code>transformed</code> handle. <code>target</code> is expected to be a <code>tensor.empty</code> operation. The transform always succeeds.</p><p>Traits: <code>FunctionalStyleTransformOpTrait</code>, <code>TransformEachOpTrait</code></p><p>Interfaces: <code>MemoryEffectsOpInterface</code>, <code>TransformOpInterface</code></p><h4 id=operands-35>Operands: <a class=headline-hash href=#operands-35>¶</a></h4><table><thead><tr><th style=text-align:center>Operand</th><th>Description</th></tr></thead><tbody><tr><td style=text-align:center><code>target</code></td><td>Transform IR handle to tensor.empty operations</td></tr></tbody></table><h4 id=results-21>Results: <a class=headline-hash href=#results-21>¶</a></h4><table><thead><tr><th style=text-align:center>Result</th><th>Description</th></tr></thead><tbody><tr><td style=text-align:center><code>transformed</code></td><td>Transform IR handle to bufferization.alloc_tensor operations</td></tr></tbody></table><h3 id=transformbufferizationone_shot_bufferize-transformoneshotbufferizeop><code>transform.bufferization.one_shot_bufferize</code> (transform::OneShotBufferizeOp) <a class=headline-hash href=#transformbufferizationone_shot_bufferize-transformoneshotbufferizeop>¶</a></h3><p>Syntax:</p><pre tabindex=0><code>operation ::= `transform.bufferization.one_shot_bufferize` (`layout` `{` $function_boundary_type_conversion^ `}`)? $target attr-dict `:` functional-type($target, results) </code></pre><p>Indicates that the given <code>target</code> op should be bufferized with One-Shot Bufferize. The bufferization can be configured with various attributes that correspond to options in <code>BufferizationOptions</code> and the <code>one-shot-bufferize</code> pass. More information can be found in the pass documentation.</p><p>The targeted ops must be modules or functions.
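</p><p>For example, a transform script can bufferize a whole module through a module handle (a sketch; the named sequence wrapper, handle names, and attribute choice are illustrative):</p><pre tabindex=0><code>transform.named_sequence @__transform_main(%module: !transform.any_op {transform.consumed}) {
  %bufferized = transform.bufferization.one_shot_bufferize %module
      {bufferize_function_boundaries = true}
      : (!transform.any_op) -> !transform.any_op
  transform.yield
}
</code></pre><p>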
This restriction exists because there is always a single, bufferized replacement op for such targets.</p><p>Note: Only ops that implement <code>BufferizableOpInterface</code> are bufferized. All other ops are ignored if <code>allow_unknown_ops</code> is set. If <code>allow_unknown_ops</code> is unset, this transform fails when an unknown/non-bufferizable op is found. Many ops implement <code>BufferizableOpInterface</code> via an external model. These external models must be registered when applying this transform op; otherwise, said ops would be considered non-bufferizable.</p><h4 id=return-modes-4>Return modes <a class=headline-hash href=#return-modes-4>¶</a></h4><p>This operation consumes the <code>target</code> handle and produces the <code>transformed</code> handle.</p><p>Traits: <code>FunctionalStyleTransformOpTrait</code></p><p>Interfaces: <code>MemoryEffectsOpInterface</code>, <code>TransformOpInterface</code></p><h4 id=attributes-25>Attributes: <a class=headline-hash href=#attributes-25>¶</a></h4><table><tr><th>Attribute</th><th>MLIR Type</th><th>Description</th></tr><tr><td><code>function_boundary_type_conversion</code></td><td>::mlir::bufferization::LayoutMapOptionAttr</td><td><details><summary>option for map layout</summary><p>Enum cases:</p><ul><li>InferLayoutMap (<code>InferLayoutMap</code>)</li><li>IdentityLayoutMap (<code>IdentityLayoutMap</code>)</li><li>FullyDynamicLayoutMap (<code>FullyDynamicLayoutMap</code>)</li></ul></details></td></tr><tr><td><code>allow_return_allocs_from_loops</code></td><td>::mlir::BoolAttr</td><td>bool attribute</td></tr><tr><td><code>allow_unknown_ops</code></td><td>::mlir::BoolAttr</td><td>bool attribute</td></tr><tr><td><code>bufferize_function_boundaries</code></td><td>::mlir::BoolAttr</td><td>bool attribute</td></tr><tr><td><code>dump_alias_sets</code></td><td>::mlir::BoolAttr</td><td>bool attribute</td></tr><tr><td><code>test_analysis_only</code></td><td>::mlir::BoolAttr</td><td>bool
attribute</td></tr><tr><td><code>print_conflicts</code></td><td>::mlir::BoolAttr</td><td>bool attribute</td></tr><tr><td><code>check_parallel_regions</code></td><td>::mlir::BoolAttr</td><td>bool attribute</td></tr><tr><td><code>memcpy_op</code></td><td>::mlir::StringAttr</td><td>string attribute</td></tr></table><h4 id=operands-36>Operands: <a class=headline-hash href=#operands-36>¶</a></h4><table><thead><tr><th style=text-align:center>Operand</th><th>Description</th></tr></thead><tbody><tr><td style=text-align:center><code>target</code></td><td>TransformHandleTypeInterface instance</td></tr></tbody></table><h4 id=results-22>Results: <a class=headline-hash href=#results-22>¶</a></h4><table><thead><tr><th style=text-align:center>Result</th><th>Description</th></tr></thead><tbody><tr><td style=text-align:center><code>transformed</code></td><td>TransformHandleTypeInterface instance</td></tr></tbody></table><h2 id=debug-transform-operations>Debug Transform Operations <a class=headline-hash href=#debug-transform-operations>¶</a></h2><p><a href=https://github.com/llvm/llvm-project/blob/main/mlir/include/mlir/Dialect/Transform/DebugExtension/DebugExtensionOps.td>source</a></p><h3 id=transformdebugemit_param_as_remark-transformdebugemitparamasremarkop><code>transform.debug.emit_param_as_remark</code> (transform::DebugEmitParamAsRemarkOp) <a class=headline-hash href=#transformdebugemit_param_as_remark-transformdebugemitparamasremarkop>¶</a></h3><p><em>Prints the parameter as a diagnostic remark</em></p><p>Syntax:</p><pre tabindex=0><code>operation ::= `transform.debug.emit_param_as_remark` $param (`,` $message^)? (`at` $anchor^)? attr-dict `:` type($param) (`,` type($anchor)^)? </code></pre><p>This operation emits a diagnostic remark containing the string form of the attributes associated with the parameter provided as an argument.
It takes as optional arguments:</p><ul><li>an additional message text to prepend;</li><li>a handle pointing to operations whose locations will be used to emit the diagnostic; if multiple operations are associated, the diagnostic is emitted for all of their respective locations.</li></ul><p>This operation always succeeds.</p><p>Traits: <code>NavigationTransformOpTrait</code></p><p>Interfaces: <code>MatchOpInterface</code>, <code>MemoryEffectsOpInterface</code>, <code>TransformOpInterface</code></p><h4 id=attributes-26>Attributes: <a class=headline-hash href=#attributes-26>¶</a></h4><table><tr><th>Attribute</th><th>MLIR Type</th><th>Description</th></tr><tr><td><code>message</code></td><td>::mlir::StringAttr</td><td>string attribute</td></tr></table><h4 id=operands-37>Operands: <a class=headline-hash href=#operands-37>¶</a></h4><table><thead><tr><th style=text-align:center>Operand</th><th>Description</th></tr></thead><tbody><tr><td style=text-align:center><code>param</code></td><td>TransformParamTypeInterface instance</td></tr><tr><td style=text-align:center><code>anchor</code></td><td>TransformHandleTypeInterface instance</td></tr></tbody></table><h3 id=transformdebugemit_remark_at-transformdebugemitremarkatop><code>transform.debug.emit_remark_at</code> (transform::DebugEmitRemarkAtOp) <a class=headline-hash href=#transformdebugemit_remark_at-transformdebugemitremarkatop>¶</a></h3><p><em>Print a message as diagnostic remark attached to payload</em></p><p>Syntax:</p><pre tabindex=0><code>operation ::= `transform.debug.emit_remark_at` $at `,` $message attr-dict `:` type($at) </code></pre><p>This operation emits a diagnostic remark with the given message at the location of each payload object associated with the argument.
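</p><p>For example, the following sketch emits a remark at every <code>linalg.matmul</code> found in a function (the handle names are illustrative):</p><pre tabindex=0><code>%matmuls = transform.structured.match ops{["linalg.matmul"]} in %func
    : (!transform.any_op) -> !transform.any_op
transform.debug.emit_remark_at %matmuls, "found a matmul" : !transform.any_op
</code></pre><p>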
The argument may be an operation or a value handle.</p><p>This operation always succeeds.</p><p>Traits: <code>NavigationTransformOpTrait</code></p><p>Interfaces: <code>MatchOpInterface</code>, <code>MemoryEffectsOpInterface</code>, <code>TransformOpInterface</code></p><h4 id=attributes-27>Attributes: <a class=headline-hash href=#attributes-27>¶</a></h4><table><tr><th>Attribute</th><th>MLIR Type</th><th>Description</th></tr><tr><td><code>message</code></td><td>::mlir::StringAttr</td><td>string attribute</td></tr></table><h4 id=operands-38>Operands: <a class=headline-hash href=#operands-38>¶</a></h4><table><thead><tr><th style=text-align:center>Operand</th><th>Description</th></tr></thead><tbody><tr><td style=text-align:center><code>at</code></td><td>any transform handle</td></tr></tbody></table><h2 id=irdl-extension-transform-operations>IRDL (extension) Transform Operations <a class=headline-hash href=#irdl-extension-transform-operations>¶</a></h2><p><a href=https://github.com/llvm/llvm-project/blob/main/mlir/include/mlir/Dialect/Transform/IRDLExtension/IRDLExtensionOps.td>source</a></p><h3 id=transformirdlcollect_matching-transformirdlcollectmatchingop><code>transform.irdl.collect_matching</code> (transform::IRDLCollectMatchingOp) <a class=headline-hash href=#transformirdlcollect_matching-transformirdlcollectmatchingop>¶</a></h3><p><em>Finds ops that match the IRDL definition without registering them.</em></p><p>Syntax:</p><pre tabindex=0><code>operation ::= `transform.irdl.collect_matching` `in` $root `:` functional-type(operands, results) attr-dict-with-keyword regions </code></pre><p>Traits: <code>NoTerminator</code>, <code>SymbolTable</code></p><p>Interfaces: <code>MemoryEffectOpInterface</code>, <code>TransformOpInterface</code></p><h4 id=operands-39>Operands: <a class=headline-hash href=#operands-39>¶</a></h4><table><thead><tr><th style=text-align:center>Operand</th><th>Description</th></tr></thead><tbody><tr><td 
style=text-align:center><code>root</code></td><td>TransformHandleTypeInterface instance</td></tr></tbody></table><h4 id=results-23>Results: <a class=headline-hash href=#results-23>¶</a></h4><table><thead><tr><th style=text-align:center>Result</th><th>Description</th></tr></thead><tbody><tr><td style=text-align:center><code>matched</code></td><td>TransformHandleTypeInterface instance</td></tr></tbody></table><h2 id=func-transform-operations>Func Transform Operations <a class=headline-hash href=#func-transform-operations>¶</a></h2><p><a href=https://github.com/llvm/llvm-project/blob/main/mlir/include/mlir/Dialect/Func/TransformOps/FuncTransformOps.td>source</a></p><h3 id=transformapply_conversion_patternsfuncfunc_to_llvm-transformapplyfunctollvmconversionpatternsop><code>transform.apply_conversion_patterns.func.func_to_llvm</code> (transform::ApplyFuncToLLVMConversionPatternsOp) <a class=headline-hash href=#transformapply_conversion_patternsfuncfunc_to_llvm-transformapplyfunctollvmconversionpatternsop>¶</a></h3><p>Syntax:</p><pre tabindex=0><code>operation ::= `transform.apply_conversion_patterns.func.func_to_llvm` attr-dict </code></pre><p>Collects patterns that convert Func dialect ops to LLVM dialect ops. These patterns require an “LLVMTypeConverter”.</p><p>Interfaces: <code>ConversionPatternDescriptorOpInterface</code></p><h3 id=transformfunccast_and_call-transformcastandcallop><code>transform.func.cast_and_call</code> (transform::CastAndCallOp) <a class=headline-hash href=#transformfunccast_and_call-transformcastandcallop>¶</a></h3><p><em>Casts values to the signature of a function and replaces them with a call</em></p><p>Syntax:</p><pre tabindex=0><code>operation ::= `transform.func.cast_and_call` ($function_name^)? ($function^)? ( `(` $inputs^ `)` )? ( `->` $outputs^ )? (`after` $insert_after^):(`before`)? $insertion_point ($conversions^)? 
attr-dict `:` functional-type(operands, results) </code></pre><p>This transform takes value handles to a set of <code>inputs</code> and <code>outputs</code> and attempts to cast them to the function signature of the attached function op, then builds a call to the function and replaces the users of the outputs. It is the responsibility of the user to ensure that the slice of the program replaced by this operation makes sense, i.e. there is no verification that the inputs to this operation have any relation to the outputs outside of basic dominance requirements needed for the call.</p><p>The casting materialization functions are specified in the graph region of this op. They must implement the <code>TypeConverterBuilderOpInterface</code>. The order of ops within the region is irrelevant.</p><p>The target function can be specified by a symbol name or by a handle to the operation.</p><p>This transform only reads the operand handles and only replaces the users of the outputs with the results of the call. No handles are consumed and no operations are removed. Users are expected to run cleanup separately if desired.</p><p>Warning: The replacement of the uses of the outputs could invalidate certain restricted value handle types (e.g. <code>transform.block_arg</code> if it existed, by replacing the use with something not coming from a block argument). The value will still exist in such cases but wouldn’t verify against the type. 
See the discussion here for more information: <a href=https://github.com/llvm/llvm-project/pull/78398#discussion_r1455070087>https://github.com/llvm/llvm-project/pull/78398#discussion_r1455070087</a></p><p>This transform will emit a silenceable failure if:</p><ul><li>The set of outputs isn’t unique</li><li>The handle for the insertion point does not include exactly one operation</li><li>The insertion point op does not dominate any of the output users</li><li>The insertion point op is not dominated by any of the inputs</li><li>The function signature does not match the number of inputs/outputs</li></ul><p>This transform will emit a definite failure if it fails to resolve the target function, or if it fails to materialize the conversion casts of either the inputs to the function argument types, or the call results to the output types.</p><p>Traits: <code>AttrSizedOperandSegments</code>, <code>HasOnlyGraphRegion</code>, <code>NoTerminator</code>, <code>ReportTrackingListenerFailuresOpTrait</code>, <code>SingleBlock</code></p><p>Interfaces: <code>MemoryEffectOpInterface</code>, <code>RegionKindInterface</code>, <code>TransformOpInterface</code></p><h4 id=attributes-28>Attributes: <a class=headline-hash href=#attributes-28>¶</a></h4><table><tr><th>Attribute</th><th>MLIR Type</th><th>Description</th></tr><tr><td><code>insert_after</code></td><td>::mlir::UnitAttr</td><td>unit attribute</td></tr><tr><td><code>function_name</code></td><td>::mlir::SymbolRefAttr</td><td>symbol reference attribute</td></tr></table><h4 id=operands-40>Operands: <a class=headline-hash href=#operands-40>¶</a></h4><table><thead><tr><th style=text-align:center>Operand</th><th>Description</th></tr></thead><tbody><tr><td style=text-align:center><code>insertion_point</code></td><td>TransformHandleTypeInterface instance</td></tr><tr><td style=text-align:center><code>inputs</code></td><td>TransformValueHandleTypeInterface instance</td></tr><tr><td 
style=text-align:center><code>outputs</code></td><td>TransformValueHandleTypeInterface instance</td></tr><tr><td style=text-align:center><code>function</code></td><td>TransformHandleTypeInterface instance</td></tr></tbody></table><h4 id=results-24>Results: <a class=headline-hash href=#results-24>¶</a></h4><table><thead><tr><th style=text-align:center>Result</th><th>Description</th></tr></thead><tbody><tr><td style=text-align:center><code>result</code></td><td>TransformHandleTypeInterface instance</td></tr></tbody></table><h2 id=gpu-transform-operations>GPU Transform Operations <a class=headline-hash href=#gpu-transform-operations>¶</a></h2><p><a href=https://github.com/llvm/llvm-project/blob/main/mlir/include/mlir/Dialect/GPU/TransformOps/GPUTransformOps.td>source</a></p><h3 id=transformapply_patternsgpugpu_rewrite_patterns-transformapplygpurewritepatternsop><code>transform.apply_patterns.gpu.gpu_rewrite_patterns</code> (transform::ApplyGPURewritePatternsOp) <a class=headline-hash href=#transformapply_patternsgpugpu_rewrite_patterns-transformapplygpurewritepatternsop>¶</a></h3><p>Syntax:</p><pre tabindex=0><code>operation ::= `transform.apply_patterns.gpu.gpu_rewrite_patterns` attr-dict </code></pre><p>Collects GPU rewrite patterns comprising:</p><ol><li>GpuAllReduceRewrite patterns</li><li>GpuGlobalIdRewriter patterns</li><li>GpuShuffleRewriter patterns</li></ol><p>Interfaces: <code>PatternDescriptorOpInterface</code></p><h3 id=transformapply_conversion_patternsgpugpu_subgroup_reduce_to_nvvm-transformapplygpusubgroupreducetonvvmconversionpatternsop><code>transform.apply_conversion_patterns.gpu.gpu_subgroup_reduce_to_nvvm</code> (transform::ApplyGPUSubgroupReduceToNVVMConversionPatternsOp) <a class=headline-hash href=#transformapply_conversion_patternsgpugpu_subgroup_reduce_to_nvvm-transformapplygpusubgroupreducetonvvmconversionpatternsop>¶</a></h3><p>Syntax:</p><pre tabindex=0><code>operation ::= 
`transform.apply_conversion_patterns.gpu.gpu_subgroup_reduce_to_nvvm` attr-dict </code></pre><p>Collects patterns that convert GPU dialect subgroup-reduce ops to NVVM dialect ops. These patterns require an “LLVMTypeConverter”.</p><p>Interfaces: <code>ConversionPatternDescriptorOpInterface</code></p><h3 id=transformapply_conversion_patternsgpugpu_to_nvvm-transformapplygputonvvmconversionpatternsop><code>transform.apply_conversion_patterns.gpu.gpu_to_nvvm</code> (transform::ApplyGPUToNVVMConversionPatternsOp) <a class=headline-hash href=#transformapply_conversion_patternsgpugpu_to_nvvm-transformapplygputonvvmconversionpatternsop>¶</a></h3><p>Syntax:</p><pre tabindex=0><code>operation ::= `transform.apply_conversion_patterns.gpu.gpu_to_nvvm` attr-dict </code></pre><p>Collects patterns that convert GPU dialect ops to NVVM dialect ops. These patterns require an “LLVMTypeConverter”.</p><p>Interfaces: <code>ConversionPatternDescriptorOpInterface</code></p><h3 id=transformapply_conversion_patternsgpugpu_wmma_to_nvvm-transformapplygpuwwmatonvvmconversionpatternsop><code>transform.apply_conversion_patterns.gpu.gpu_wmma_to_nvvm</code> (transform::ApplyGPUWwmaToNVVMConversionPatternsOp) <a class=headline-hash href=#transformapply_conversion_patternsgpugpu_wmma_to_nvvm-transformapplygpuwwmatonvvmconversionpatternsop>¶</a></h3><p>Syntax:</p><pre tabindex=0><code>operation ::= `transform.apply_conversion_patterns.gpu.gpu_wmma_to_nvvm` attr-dict </code></pre><p>Collects patterns that convert GPU dialect ops related to wmma ops to NVVM dialect ops.
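</p><p>Conversion pattern descriptors such as this one are listed inside a <code>transform.apply_conversion_patterns</code> op, which also supplies the type converter and legality configuration (a sketch; the handle name and the chosen legal dialects are illustrative):</p><pre tabindex=0><code>transform.apply_conversion_patterns to %module {
  transform.apply_conversion_patterns.gpu.gpu_wmma_to_nvvm
} with type_converter {
  transform.apply_conversion_patterns.memref.memref_to_llvm_type_converter
} {legal_dialects = ["llvm", "nvvm"]} : !transform.any_op
</code></pre><p>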
These patterns require an “LLVMTypeConverter”.</p><p>Interfaces: <code>ConversionPatternDescriptorOpInterface</code></p><h3 id=transformapply_patternsgpuunroll_vectors_subgroup_mma-transformapplyunrollvectorssubgroupmmaop><code>transform.apply_patterns.gpu.unroll_vectors_subgroup_mma</code> (transform::ApplyUnrollVectorsSubgroupMmaOp) <a class=headline-hash href=#transformapply_patternsgpuunroll_vectors_subgroup_mma-transformapplyunrollvectorssubgroupmmaop>¶</a></h3><p>Syntax:</p><pre tabindex=0><code>operation ::= `transform.apply_patterns.gpu.unroll_vectors_subgroup_mma` `[` $m `,` $n `,` $k `]` attr-dict </code></pre><p>Unrolls contractions to the target <code>m</code>, <code>n</code>, and <code>k</code> native vector size, along with other vector operations based on expected usage. <code>transfer_read</code> ops unroll based on the extract slice shape introduced by unrolling the contractions, while elementwise and <code>transfer_write</code> ops unroll to the shape of the C matrix (<code>m x n</code>).</p><p>This operation applies to pure vector operations and should be applied before lowering to subgroup_mma ops.</p><p>Interfaces: <code>PatternDescriptorOpInterface</code></p><h4 id=attributes-29>Attributes: <a class=headline-hash href=#attributes-29>¶</a></h4><table><tr><th>Attribute</th><th>MLIR Type</th><th>Description</th></tr><tr><td><code>m</code></td><td>::mlir::IntegerAttr</td><td>64-bit signless integer attribute</td></tr><tr><td><code>n</code></td><td>::mlir::IntegerAttr</td><td>64-bit signless integer attribute</td></tr><tr><td><code>k</code></td><td>::mlir::IntegerAttr</td><td>64-bit signless integer attribute</td></tr></table><h3 id=transformapply_patternsgpueliminate_barriers-transformeliminatebarriersop><code>transform.apply_patterns.gpu.eliminate_barriers</code> (transform::EliminateBarriersOp) <a class=headline-hash href=#transformapply_patternsgpueliminate_barriers-transformeliminatebarriersop>¶</a></h3><p>Syntax:</p><pre 
tabindex=0><code>operation ::= `transform.apply_patterns.gpu.eliminate_barriers` attr-dict </code></pre><p>Removes unnecessary GPU barriers from the function. A barrier is unnecessary, and can be removed, if it enforces no conflicting pair of memory effects that is not already enforced by another barrier.</p><p>The approach is based on “High-Performance GPU-to-CPU Transpilation and Optimization via High-Level Parallel Constructs” by Moses, Ivanov, Domke, Endo, Doerfert, and Zinenko in PPoPP 2023. Specifically, it analyzes the memory effects of the operations before and after the given barrier and checks if the barrier enforces any of the memory effect-induced dependencies that aren’t already enforced by another barrier.</p><p>For example, in the following code</p><div class=highlight><pre tabindex=0 class=chroma><code class=language-mlir data-lang=mlir><span class=line><span class=cl> store <span class=nv>%A</span> </span></span><span class=line><span class=cl> barrier <span class=c>// enforces load-after-store </span></span></span><span class=line><span class=cl><span class=c></span> load <span class=nv>%A</span> </span></span><span class=line><span class=cl> barrier <span class=c>// load-after-store already enforced by the previous barrier </span></span></span><span class=line><span class=cl><span class=c></span> load <span class=nv>%A</span> </span></span></code></pre></div><p>the second barrier can be removed.</p><p>Interfaces: <code>PatternDescriptorOpInterface</code></p><h3 id=transformgpumap_forall_to_blocks-transformmapforalltoblocks><code>transform.gpu.map_forall_to_blocks</code> (transform::MapForallToBlocks) <a class=headline-hash href=#transformgpumap_forall_to_blocks-transformmapforalltoblocks>¶</a></h3><p>Syntax:</p><pre tabindex=0><code>operation ::= `transform.gpu.map_forall_to_blocks` $target (`generate_gpu_launch` $generate_gpu_launch^)? (`grid_dims` `=` $grid_dims^)?
attr-dict `:` functional-type($target, $result) </code></pre><p>Targets the gpu_launch op and rewrites the top-level <code>scf.forall</code> ops to be distributed over gpu.block_id. If the <code>generate_gpu_launch</code> attribute is set, a <code>gpu_launch</code> op is generated first and the top-level <code>scf.forall</code> is moved inside it.</p><p>The operation searches top-level <code>scf.forall</code> ops under <code>gpu_launch</code> and maps each such op to GPU blocks. Mapping is one-to-one and the induction variables of <code>scf.forall</code> are rewritten to gpu.block_id according to the <code>thread_dim_mapping</code> attribute.</p><p>Dynamic <code>scf.forall</code> trip counts are currently not supported. Dynamic block dim sizes are currently not supported.</p><p>Only <strong>bufferized</strong> scf.forall are currently supported. Only scf.forall distributed to <strong>at most 3 dimensions</strong> are currently supported.</p><p>The operation alters the grid size of the given gpu_launch using the grid_dims argument.</p><h4 id=return-modes-5>Return modes: <a class=headline-hash href=#return-modes-5>¶</a></h4><p>This operation ignores non-gpu_launch ops and drops them in the return.</p><p>If any scf.forall with tensors is found, the transform definitely fails.</p><p>If all the scf.forall operations contained within the LaunchOp referred to by the <code>target</code> handle lower to GPU properly, the transform succeeds.
Otherwise the transform definitely fails.</p><p>The returned handle points to the same LaunchOp; the operand is consumed and a new handle is produced to satisfy the chaining and linearity properties of the transform IR.</p><p>Traits: <code>FunctionalStyleTransformOpTrait</code>, <code>TransformEachOpTrait</code></p><p>Interfaces: <code>MemoryEffectsOpInterface</code>, <code>TransformOpInterface</code></p><h4 id=attributes-30>Attributes: <a class=headline-hash href=#attributes-30>¶</a></h4><table><tr><th>Attribute</th><th>MLIR Type</th><th>Description</th></tr><tr><td><code>grid_dims</code></td><td>::mlir::DenseI64ArrayAttr</td><td>i64 dense array attribute</td></tr><tr><td><code>generate_gpu_launch</code></td><td>::mlir::UnitAttr</td><td>unit attribute</td></tr></table><h4 id=operands-41>Operands: <a class=headline-hash href=#operands-41>¶</a></h4><table><thead><tr><th style=text-align:center>Operand</th><th>Description</th></tr></thead><tbody><tr><td style=text-align:center><code>target</code></td><td>TransformHandleTypeInterface instance</td></tr></tbody></table><h4 id=results-25>Results: <a class=headline-hash href=#results-25>¶</a></h4><table><thead><tr><th style=text-align:center>Result</th><th>Description</th></tr></thead><tbody><tr><td style=text-align:center><code>result</code></td><td>TransformHandleTypeInterface instance</td></tr></tbody></table><h3 id=transformgpumap_nested_forall_to_threads-transformmapnestedforalltothreads><code>transform.gpu.map_nested_forall_to_threads</code> (transform::MapNestedForallToThreads) <a class=headline-hash href=#transformgpumap_nested_forall_to_threads-transformmapnestedforalltothreads>¶</a></h3><p>Syntax:</p><pre tabindex=0><code>operation ::= `transform.gpu.map_nested_forall_to_threads` $target `block_dims` `=` $block_dims (`sync_after_distribute` `=` $sync_after_distribute^)? (`warp_size` `=` $warp_size^)?
attr-dict `:` functional-type($target, $result) </code></pre><p>Targets the <code>gpu.launch</code> op and rewrites all <code>scf.forall</code> ops nested in it to be distributed over <code>gpu.thread_id</code>.</p><p>The operation searches for <code>scf.forall</code> ops nested under <code>target</code> and maps each such op to GPU threads.</p><p><code>scf.forall</code> induction variables are rewritten to <code>gpu.thread_id</code> according to the <code>mapping</code> attribute.</p><p>Different types of mapping attributes are supported:</p><ul><li><code>block_dims</code> is a list of integers that specifies the number of threads in each dimension. This is a mandatory attribute that is used to constrain the number of threads in each dimension. If an <code>scf.forall</code> op is mapped to fewer threads, predication occurs.</li><li><code>warp_dims</code> is a list of integers that specifies the number of warps in each dimension. This is an optional attribute that is used to constrain the number of warps in each dimension. When present, this attribute must be specified in a way that is compatible with the block_dims attribute. If an <code>scf.forall</code> op is mapped to fewer warps, predication occurs.</li></ul><p>Dynamic <code>scf.forall</code> trip counts are currently not supported. Dynamic block dim sizes are currently not supported.</p><p>Only <strong>bufferized</strong> <code>scf.forall</code> are currently supported. Only <code>scf.forall</code> distributed to <strong>at most 3 dimensions</strong> are currently supported.</p><p>The <code>sync_after_distribute</code> attribute controls whether a <code>gpu.barrier</code> is inserted after each scf.forall op. At this time, this is an all-or-nothing choice.
This will need to be tightened in the future.</p><p>The operation alters the block size of the given gpu_launch using the mandatory block_dims argument.</p><h4 id=return-modes-6>Return modes: <a class=headline-hash href=#return-modes-6>¶</a></h4><p>This operation ignores non-gpu_launch ops and drops them in the return.</p><p>If any scf.forall with tensors is found, the transform definitely fails.</p><p>If all the scf.forall operations with gpu.thread mapping contained within the LaunchOp referred to by the <code>target</code> handle lower to GPU properly, the transform succeeds. Otherwise the transform definitely fails.</p><p>scf.forall operations with mappings other than gpu.thread are ignored.</p><p>The returned handle points to the same LaunchOp; the operand is consumed and a new handle is produced to satisfy the chaining and linearity properties of the transform IR.</p><h4 id=example>Example: <a class=headline-hash href=#example>¶</a></h4><pre tabindex=0><code>gpu.launch blocks(%bx, %by, %bz) in (%x = %0, %y = %1, %z = %2) threads(%tx, %ty, %tz) in (%tx = %3, %ty = %4, %tz = %5) { scf.forall (%i, %j) in (7, 9) { ... // body 1 } {mapping = [#gpu.thread<x>, #gpu.thread<y>]} scf.forall (%i) in (12) { ... // body 2 } {mapping = [#gpu.thread<x>]} gpu.terminator } </code></pre><p>is translated to:</p><pre tabindex=0><code>%bdimX = arith.constant 12 : index %bdimY = arith.constant 9 : index gpu.launch blocks(%bx, %by, %bz) in (%x = %0, %y = %1, %z = %2) threads(%tx, %ty, %tz) in (%tx = %bdimX, %ty = %bdimY, %tz = %5) { if (threadIdx.x < 7 && threadIdx.y < 9) { ... // body 1 } gpu.barrier if (threadIdx.y < 1) { ...
// body 2 } gpu.barrier gpu.terminator } </code></pre><p>Traits: <code>FunctionalStyleTransformOpTrait</code>, <code>TransformEachOpTrait</code></p><p>Interfaces: <code>MemoryEffectsOpInterface</code>, <code>TransformOpInterface</code></p><h4 id=attributes-31>Attributes: <a class=headline-hash href=#attributes-31>¶</a></h4><table><tr><th>Attribute</th><th>MLIR Type</th><th>Description</th></tr><tr><td><code>block_dims</code></td><td>::mlir::DenseI64ArrayAttr</td><td>i64 dense array attribute</td></tr><tr><td><code>sync_after_distribute</code></td><td>::mlir::BoolAttr</td><td>bool attribute</td></tr><tr><td><code>warp_size</code></td><td>::mlir::IntegerAttr</td><td>64-bit signless integer attribute</td></tr></table><h4 id=operands-42>Operands: <a class=headline-hash href=#operands-42>¶</a></h4><table><thead><tr><th style=text-align:center>Operand</th><th>Description</th></tr></thead><tbody><tr><td style=text-align:center><code>target</code></td><td>TransformHandleTypeInterface instance</td></tr></tbody></table><h4 id=results-26>Results: <a class=headline-hash href=#results-26>¶</a></h4><table><thead><tr><th style=text-align:center>Result</th><th>Description</th></tr></thead><tbody><tr><td style=text-align:center><code>result</code></td><td>TransformHandleTypeInterface instance</td></tr></tbody></table><h2 id=loop-extension-transform-operations>Loop (extension) Transform Operations <a class=headline-hash href=#loop-extension-transform-operations>¶</a></h2><p><a href=https://github.com/llvm/llvm-project/blob/main/mlir/include/mlir/Dialect/Transform/LoopExtension/LoopExtensionOps.td>source</a></p><h3 id=transformloophoist_loop_invariant_subsets-transformhoistloopinvariantsubsetsop><code>transform.loop.hoist_loop_invariant_subsets</code> (transform::HoistLoopInvariantSubsetsOp) <a class=headline-hash href=#transformloophoist_loop_invariant_subsets-transformhoistloopinvariantsubsetsop>¶</a></h3><p><em>Hoist loop invariant subset ops</em></p><p>Syntax:</p><pre 
tabindex=0><code>operation ::= `transform.loop.hoist_loop_invariant_subsets` $target attr-dict `:` type($target) </code></pre><p>This transform hoists loop-invariant subset ops out of the targeted loop-like op. It looks for matching subset extraction/insertion op pairs and hoists them. The loop body operates on a newly introduced region iter_arg.</p><p>Subset ops are hoisted only from the targeted op. If subset ops should be hoisted from an entire loop nest, this transformation must be applied to each loop-like op of the loop nest, starting with the innermost loop and ending with the outermost loop.</p><p>Example:</p><pre tabindex=0><code>%r = scf.for ... iter_args(%t = %a) -> (tensor<?xf32>) { %0 = tensor.extract_slice %t[0][5][1] : tensor<?xf32> to tensor<5xf32> %1 = "test.foo"(%0) : (tensor<5xf32>) -> (tensor<5xf32>) %2 = tensor.insert_slice %1 into %t[0][5][1] : tensor<5xf32> into tensor<?xf32> scf.yield %2 : tensor<?xf32> } </code></pre><p>Is transformed to:</p><pre tabindex=0><code>%0 = tensor.extract_slice %a[0][5][1] : tensor<?xf32> to tensor<5xf32> %new_loop:2 = scf.for ... iter_args(%t = %a, %h = %0) -> (tensor<?xf32>, tensor<5xf32>) { %1 = "test.foo"(%h) : (tensor<5xf32>) -> (tensor<5xf32>) scf.yield %t, %1 : tensor<?xf32>, tensor<5xf32> } %r = tensor.insert_slice %new_loop#1 into %new_loop#0[0][5][1] : tensor<5xf32> into tensor<?xf32> </code></pre><p>Subset ops are hoisted only if there are no conflicting subset ops. E.g., if there were a second overlapping extraction in the above example, no ops could be hoisted safely.</p><p>This transform reads the target handle and modifies the payload. This transform does not invalidate any handles, but loop-like ops are replaced with new loop-like ops when a subset op is hoisted.
The transform rewriter updates all handles accordingly.</p><p>Traits: <code>ReportTrackingListenerFailuresOpTrait</code>, <code>TransformEachOpTrait</code></p><p>Interfaces: <code>MemoryEffectOpInterface</code>, <code>TransformOpInterface</code></p><h4 id=operands-43>Operands: <a class=headline-hash href=#operands-43>¶</a></h4><table><thead><tr><th style=text-align:center>Operand</th><th>Description</th></tr></thead><tbody><tr><td style=text-align:center><code>target</code></td><td>TransformHandleTypeInterface instance</td></tr></tbody></table><h2 id=loop-scf-transform-operations>Loop (SCF) Transform Operations <a class=headline-hash href=#loop-scf-transform-operations>¶</a></h2><p><a href=https://github.com/llvm/llvm-project/blob/main/mlir/include/mlir/Dialect/SCF/TransformOps/SCFTransformOps.td>source</a></p><h3 id=transformapply_patternsscffor_loop_canonicalization-transformapplyforloopcanonicalizationpatternsop><code>transform.apply_patterns.scf.for_loop_canonicalization</code> (transform::ApplyForLoopCanonicalizationPatternsOp) <a class=headline-hash href=#transformapply_patternsscffor_loop_canonicalization-transformapplyforloopcanonicalizationpatternsop>¶</a></h3><p>Syntax:</p><pre tabindex=0><code>operation ::= `transform.apply_patterns.scf.for_loop_canonicalization` attr-dict </code></pre><p>Collects patterns for canonicalizing operations inside SCF loop bodies. 
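</p><p>Like other pattern descriptor ops, it is meant to be listed in the body of a <code>transform.apply_patterns</code> op, which applies the collected patterns greedily to the payload; a sketch, where the <code>%func</code> handle is an assumption:</p><pre tabindex=0><code>transform.apply_patterns to %func {
  transform.apply_patterns.scf.for_loop_canonicalization
} : !transform.any_op
</code></pre><p>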
At the moment, only affine.min/max computations with iteration variables, loop bounds and loop steps are canonicalized.</p><p>Interfaces: <code>PatternDescriptorOpInterface</code></p><h3 id=transformapply_conversion_patternsscfstructural_conversions-transformapplyscfstructuralconversionpatternsop><code>transform.apply_conversion_patterns.scf.structural_conversions</code> (transform::ApplySCFStructuralConversionPatternsOp) <a class=headline-hash href=#transformapply_conversion_patternsscfstructural_conversions-transformapplyscfstructuralconversionpatternsop>¶</a></h3><p>Syntax:</p><pre tabindex=0><code>operation ::= `transform.apply_conversion_patterns.scf.structural_conversions` attr-dict </code></pre><p>Collects patterns for performing structural conversions of SCF operations.</p><p>Interfaces: <code>ConversionPatternDescriptorOpInterface</code></p><h3 id=transformapply_conversion_patternsscfscf_to_control_flow-transformapplyscftocontrolflowpatternsop><code>transform.apply_conversion_patterns.scf.scf_to_control_flow</code> (transform::ApplySCFToControlFlowPatternsOp) <a class=headline-hash href=#transformapply_conversion_patternsscfscf_to_control_flow-transformapplyscftocontrolflowpatternsop>¶</a></h3><p>Syntax:</p><pre tabindex=0><code>operation ::= `transform.apply_conversion_patterns.scf.scf_to_control_flow` attr-dict </code></pre><p>Collects patterns that lower structured control flow ops to unstructured control flow.</p><p>Interfaces: <code>ConversionPatternDescriptorOpInterface</code></p><h3 id=transformloopforall_to_for-transformforalltoforop><code>transform.loop.forall_to_for</code> (transform::ForallToForOp) <a class=headline-hash href=#transformloopforall_to_for-transformforalltoforop>¶</a></h3><p><em>Converts scf.forall into a nest of scf.for operations</em></p><p>Syntax:</p><pre tabindex=0><code>operation ::= `transform.loop.forall_to_for` $target attr-dict `:` functional-type(operands, results) </code></pre><p>Converts the <code>scf.forall</code> 
operation pointed to by the given handle into a set of nested <code>scf.for</code> operations. Each new operation corresponds to one induction variable of the original “multifor” loop.</p><p>The operand handle must be associated with exactly one payload operation.</p><p>Loops with shared outputs are currently not supported.</p><h4 id=return-modes-7>Return Modes <a class=headline-hash href=#return-modes-7>¶</a></h4><p>Consumes the operand handle. Produces a silenceable failure if the operand is not associated with a single <code>scf.forall</code> payload operation. Returns as many handles as the given <code>forall</code> op has induction variables that are associated with the generated <code>scf.for</code> loops. Produces a silenceable failure if another number of resulting handles is requested.</p><p>Traits: <code>FunctionalStyleTransformOpTrait</code></p><p>Interfaces: <code>MemoryEffectsOpInterface</code>, <code>TransformOpInterface</code></p><h4 id=operands-44>Operands: <a class=headline-hash href=#operands-44>¶</a></h4><table><thead><tr><th style=text-align:center>Operand</th><th>Description</th></tr></thead><tbody><tr><td style=text-align:center><code>target</code></td><td>TransformHandleTypeInterface instance</td></tr></tbody></table><h4 id=results-27>Results: <a class=headline-hash href=#results-27>¶</a></h4><table><thead><tr><th style=text-align:center>Result</th><th>Description</th></tr></thead><tbody><tr><td style=text-align:center><code>transformed</code></td><td>variadic of TransformHandleTypeInterface instance</td></tr></tbody></table><h3 id=transformloopforall_to_parallel-transformforalltoparallelop><code>transform.loop.forall_to_parallel</code> (transform::ForallToParallelOp) <a class=headline-hash href=#transformloopforall_to_parallel-transformforalltoparallelop>¶</a></h3><p><em>Converts scf.forall into an scf.parallel operation</em></p><p>Syntax:</p><pre tabindex=0><code>operation ::= `transform.loop.forall_to_parallel` $target attr-dict `:` 
functional-type(operands, results) </code></pre><p>Converts the <code>scf.forall</code> operation pointed to by the given handle into an <code>scf.parallel</code> operation.</p><p>The operand handle must be associated with exactly one payload operation.</p><p>Loops with outputs are not supported.</p><h4 id=return-modes-8>Return Modes <a class=headline-hash href=#return-modes-8>¶</a></h4><p>Consumes the operand handle. Produces a silenceable failure if the operand is not associated with a single <code>scf.forall</code> payload operation. Returns a handle to the new <code>scf.parallel</code> operation. Produces a silenceable failure if another number of resulting handles is requested.</p><p>Traits: <code>FunctionalStyleTransformOpTrait</code></p><p>Interfaces: <code>MemoryEffectsOpInterface</code>, <code>TransformOpInterface</code></p><h4 id=operands-45>Operands: <a class=headline-hash href=#operands-45>¶</a></h4><table><thead><tr><th style=text-align:center>Operand</th><th>Description</th></tr></thead><tbody><tr><td style=text-align:center><code>target</code></td><td>TransformHandleTypeInterface instance</td></tr></tbody></table><h4 id=results-28>Results: <a class=headline-hash href=#results-28>¶</a></h4><table><thead><tr><th style=text-align:center>Result</th><th>Description</th></tr></thead><tbody><tr><td style=text-align:center><code>transformed</code></td><td>variadic of TransformHandleTypeInterface instance</td></tr></tbody></table><h3 id=transformloopcoalesce-transformloopcoalesceop><code>transform.loop.coalesce</code> (transform::LoopCoalesceOp) <a class=headline-hash href=#transformloopcoalesce-transformloopcoalesceop>¶</a></h3><p><em>Coalesces the perfect loop nest enclosed by a given loop</em></p><p>Syntax:</p><pre tabindex=0><code>operation ::= `transform.loop.coalesce` $target attr-dict `:` functional-type($target, $transformed) </code></pre><p>Given a perfect loop nest identified by the outermost loop, perform loop coalescing in a bottom-up one-by-one 
manner.</p><h4 id=return-modes-9>Return modes <a class=headline-hash href=#return-modes-9>¶</a></h4><p>The return handle points to the coalesced loop if coalescing happens, or the given input loop if coalescing does not happen.</p><p>Traits: <code>FunctionalStyleTransformOpTrait</code>, <code>TransformEachOpTrait</code></p><p>Interfaces: <code>MemoryEffectsOpInterface</code>, <code>TransformOpInterface</code></p><h4 id=operands-46>Operands: <a class=headline-hash href=#operands-46>¶</a></h4><table><thead><tr><th style=text-align:center>Operand</th><th>Description</th></tr></thead><tbody><tr><td style=text-align:center><code>target</code></td><td>TransformHandleTypeInterface instance</td></tr></tbody></table><h4 id=results-29>Results: <a class=headline-hash href=#results-29>¶</a></h4><table><thead><tr><th style=text-align:center>Result</th><th>Description</th></tr></thead><tbody><tr><td style=text-align:center><code>transformed</code></td><td>TransformHandleTypeInterface instance</td></tr></tbody></table><h3 id=transformloopfuse_sibling-transformloopfusesiblingop><code>transform.loop.fuse_sibling</code> (transform::LoopFuseSiblingOp) <a class=headline-hash href=#transformloopfuse_sibling-transformloopfusesiblingop>¶</a></h3><p><em>Fuse a loop into another loop, assuming the fusion is legal.</em></p><p>Syntax:</p><pre tabindex=0><code>operation ::= `transform.loop.fuse_sibling` $target `into` $source attr-dict `:` functional-type(operands, results) </code></pre><p>Fuses the <code>target</code> loop into the <code>source</code> loop assuming they are independent of each other. In the fused loop, the arguments, body and results of <code>target</code> are placed <em>before</em> those of <code>source</code>.</p><p>For fusion of two <code>scf.for</code> loops, the bounds and step size must match. For fusion of two <code>scf.forall</code> loops, the bounds and the mapping must match. 
Otherwise a silenceable failure is produced.</p><p>The <code>target</code> and <code>source</code> handles must refer to exactly one operation; otherwise a definite failure is produced. It is the responsibility of the user to ensure that the <code>target</code> and <code>source</code> loops are independent of each other; this op will only perform rudimentary legality checks.</p><h4 id=return-modes-10>Return modes <a class=headline-hash href=#return-modes-10>¶</a></h4><p>This operation consumes the <code>target</code> and <code>source</code> handles and produces the <code>fused_loop</code> handle, which points to the fused loop.</p><p>Traits: <code>FunctionalStyleTransformOpTrait</code></p><p>Interfaces: <code>MemoryEffectsOpInterface</code>, <code>TransformOpInterface</code></p><h4 id=operands-47>Operands: <a class=headline-hash href=#operands-47>¶</a></h4><table><thead><tr><th style=text-align:center>Operand</th><th>Description</th></tr></thead><tbody><tr><td style=text-align:center><code>target</code></td><td>TransformHandleTypeInterface instance</td></tr><tr><td style=text-align:center><code>source</code></td><td>TransformHandleTypeInterface instance</td></tr></tbody></table><h4 id=results-30>Results: <a class=headline-hash href=#results-30>¶</a></h4><table><thead><tr><th style=text-align:center>Result</th><th>Description</th></tr></thead><tbody><tr><td style=text-align:center><code>fused_loop</code></td><td>TransformHandleTypeInterface instance</td></tr></tbody></table><h3 id=transformloopoutline-transformloopoutlineop><code>transform.loop.outline</code> (transform::LoopOutlineOp) <a class=headline-hash href=#transformloopoutline-transformloopoutlineop>¶</a></h3><p><em>Outlines a loop into a named function</em></p><p>Syntax:</p><pre tabindex=0><code>operation ::= `transform.loop.outline` $target attr-dict `:` functional-type(operands, results) </code></pre><p>Moves the loop into a separate function with the specified name and replaces the loop in the Payload IR 
with a call to that function. Takes care of forwarding values that are used in the loop as function arguments. If the operand is associated with more than one loop, each loop will be outlined into a separate function. The provided name is used as a <em>base</em> for forming actual function names following <code>SymbolTable</code> auto-renaming scheme to avoid duplicate symbols. Expects that all ops in the Payload IR have a <code>SymbolTable</code> ancestor (typically true because of the top-level module).</p><h4 id=return-modes-11>Return Modes <a class=headline-hash href=#return-modes-11>¶</a></h4><p>Returns a handle to the list of outlined functions and a handle to the corresponding function call operations in the same order as the operand handle.</p><p>Produces a definite failure if outlining failed for any of the targets.</p><p>Traits: <code>FunctionalStyleTransformOpTrait</code></p><p>Interfaces: <code>MemoryEffectsOpInterface</code>, <code>TransformOpInterface</code></p><h4 id=attributes-32>Attributes: <a class=headline-hash href=#attributes-32>¶</a></h4><table><tr><th>Attribute</th><th>MLIR Type</th><th>Description</th></tr><tr><td><code>func_name</code></td><td>::mlir::StringAttr</td><td>string attribute</td></tr></table><h4 id=operands-48>Operands: <a class=headline-hash href=#operands-48>¶</a></h4><table><thead><tr><th style=text-align:center>Operand</th><th>Description</th></tr></thead><tbody><tr><td style=text-align:center><code>target</code></td><td>TransformHandleTypeInterface instance</td></tr></tbody></table><h4 id=results-31>Results: <a class=headline-hash href=#results-31>¶</a></h4><table><thead><tr><th style=text-align:center>Result</th><th>Description</th></tr></thead><tbody><tr><td style=text-align:center><code>function</code></td><td>TransformHandleTypeInterface instance</td></tr><tr><td style=text-align:center><code>call</code></td><td>TransformHandleTypeInterface instance</td></tr></tbody></table><h3 
id=transformlooppeel-transformlooppeelop><code>transform.loop.peel</code> (transform::LoopPeelOp) <a class=headline-hash href=#transformlooppeel-transformlooppeelop>¶</a></h3><p><em>Peels the first or last iteration of the loop</em></p><p>Syntax:</p><pre tabindex=0><code>operation ::= `transform.loop.peel` $target attr-dict `:` functional-type(operands, results) </code></pre><p>Rewrite the given loop with a main loop and a partial (first or last) loop. When the <code>peelFront</code> option is set to true, the first iteration is peeled off. Otherwise, updates the given loop so that its step evenly divides its range and puts the remaining iteration into a separate loop or a conditional.</p><p>In the absence of sufficient static information, this op may peel a loop, even if the step always divides the range evenly at runtime.</p><h4 id=return-modes-12>Return modes <a class=headline-hash href=#return-modes-12>¶</a></h4><p>This operation ignores non-scf::ForOp ops and drops them in the return. The op returns two loops, the peeled loop which has trip count divisible by the step, and the remainder loop.</p><p>When <code>peelFront</code> is true, the first result (remainder loop) executes all but the first iteration of the target loop. The second result (peeled loop) corresponds to the first iteration of the loop which can be canonicalized away in the following optimizations.</p><p>When <code>peelFront</code> is false, the first result (peeled loop) is the portion of the target loop with the highest upper bound that is divisible by the step. 
The second result (remainder loop) contains the remaining iterations.</p><p>Note that even though the Payload IR modification may be performed in-place, this operation consumes the operand handle and produces a new one.</p><h4 id=return-modes-13>Return Modes <a class=headline-hash href=#return-modes-13>¶</a></h4><p>Produces a definite failure if peeling fails.</p><p>Traits: <code>FunctionalStyleTransformOpTrait</code>, <code>TransformEachOpTrait</code></p><p>Interfaces: <code>MemoryEffectsOpInterface</code>, <code>TransformOpInterface</code></p><h4 id=attributes-33>Attributes: <a class=headline-hash href=#attributes-33>¶</a></h4><table><tr><th>Attribute</th><th>MLIR Type</th><th>Description</th></tr><tr><td><code>peel_front</code></td><td>::mlir::BoolAttr</td><td>bool attribute</td></tr><tr><td><code>fail_if_already_divisible</code></td><td>::mlir::BoolAttr</td><td>bool attribute</td></tr></table><h4 id=operands-49>Operands: <a class=headline-hash href=#operands-49>¶</a></h4><table><thead><tr><th style=text-align:center>Operand</th><th>Description</th></tr></thead><tbody><tr><td style=text-align:center><code>target</code></td><td>Transform IR handle to scf.for operations</td></tr></tbody></table><h4 id=results-32>Results: <a class=headline-hash href=#results-32>¶</a></h4><table><thead><tr><th style=text-align:center>Result</th><th>Description</th></tr></thead><tbody><tr><td style=text-align:center><code>peeled_loop</code></td><td>TransformHandleTypeInterface instance</td></tr><tr><td style=text-align:center><code>remainder_loop</code></td><td>TransformHandleTypeInterface instance</td></tr></tbody></table><h3 id=transformlooppipeline-transformlooppipelineop><code>transform.loop.pipeline</code> (transform::LoopPipelineOp) <a class=headline-hash href=#transformlooppipeline-transformlooppipelineop>¶</a></h3><p><em>Applies software pipelining to the loop</em></p><p>Syntax:</p><pre tabindex=0><code>operation ::= `transform.loop.pipeline` $target attr-dict `:` 
functional-type(operands, results) </code></pre><p>Transforms the given loops one by one to achieve software pipelining for each of them. That is, performs some amount of reads from memory before the loop rather than inside the loop, the same amount of writes into memory after the loop, and updates each iteration to read the data for a following iteration rather than the current one.</p><p>The amount is controlled by the <code>iteration_interval</code> and <code>read_latency</code> attributes.</p><p>The values read and about to be stored are transferred as loop iteration arguments. Currently supports memref and vector transfer operations as memory reads/writes.</p><h4 id=return-modes-14>Return modes <a class=headline-hash href=#return-modes-14>¶</a></h4><p>This operation ignores non-<code>scf.for</code> ops and drops them in the return. If all the operations referred to by the <code>target</code> handle pipeline properly, the transform succeeds. Otherwise the transform produces a silenceable failure. The return handle points to only the subset of successfully produced pipelined loops, which can be empty.</p><p>Traits: <code>FunctionalStyleTransformOpTrait</code>, <code>TransformEachOpTrait</code></p><p>Interfaces: <code>MemoryEffectsOpInterface</code>, <code>TransformOpInterface</code></p><h4 id=attributes-34>Attributes: <a class=headline-hash href=#attributes-34>¶</a></h4><table><tr><th>Attribute</th><th>MLIR Type</th><th>Description</th></tr><tr><td><code>iteration_interval</code></td><td>::mlir::IntegerAttr</td><td>64-bit signless integer attribute</td></tr><tr><td><code>read_latency</code></td><td>::mlir::IntegerAttr</td><td>64-bit signless integer attribute</td></tr></table><h4 id=operands-50>Operands: <a class=headline-hash href=#operands-50>¶</a></h4><table><thead><tr><th style=text-align:center>Operand</th><th>Description</th></tr></thead><tbody><tr><td style=text-align:center><code>target</code></td><td>Transform IR handle to scf.for operations</td></tr></tbody></table><h4 id=results-33>Results: <a class=headline-hash 
href=#results-33>¶</a></h4><table><thead><tr><th style=text-align:center>Result</th><th>Description</th></tr></thead><tbody><tr><td style=text-align:center><code>transformed</code></td><td>TransformHandleTypeInterface instance</td></tr></tbody></table><h3 id=transformlooppromote_if_one_iteration-transformlooppromoteifoneiterationop><code>transform.loop.promote_if_one_iteration</code> (transform::LoopPromoteIfOneIterationOp) <a class=headline-hash href=#transformlooppromote_if_one_iteration-transformlooppromoteifoneiterationop>¶</a></h3><p><em>Promote loop if it has one iteration</em></p><p>Syntax:</p><pre tabindex=0><code>operation ::= `transform.loop.promote_if_one_iteration` $target attr-dict `:` type($target) </code></pre><p>Promotes the given target loop op if it has a single iteration. I.e., the loop op is removed and only the body remains.</p><h4 id=return-modes-15>Return modes <a class=headline-hash href=#return-modes-15>¶</a></h4><p>This transform fails if the target is mapped to ops that are not loops. Ops are considered loops if they implement the <code>LoopLikeOpInterface</code>. Otherwise, this transform always succeeds. 
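</p><p>Usage sketch, where <code>%loop</code> is an assumed handle to loop-like payload ops:</p><pre tabindex=0><code>transform.loop.promote_if_one_iteration %loop : !transform.any_op
</code></pre><p>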
The transform consumes the target handle and modifies the payload.</p><p>Traits: <code>TransformEachOpTrait</code></p><p>Interfaces: <code>MemoryEffectOpInterface</code>, <code>TransformOpInterface</code></p><h4 id=operands-51>Operands: <a class=headline-hash href=#operands-51>¶</a></h4><table><thead><tr><th style=text-align:center>Operand</th><th>Description</th></tr></thead><tbody><tr><td style=text-align:center><code>target</code></td><td>TransformHandleTypeInterface instance</td></tr></tbody></table><h3 id=transformloopunroll_and_jam-transformloopunrollandjamop><code>transform.loop.unroll_and_jam</code> (transform::LoopUnrollAndJamOp) <a class=headline-hash href=#transformloopunroll_and_jam-transformloopunrollandjamop>¶</a></h3><p><em>Unrolls and jams the given loop with the given unroll factor</em></p><p>Syntax:</p><pre tabindex=0><code>operation ::= `transform.loop.unroll_and_jam` $target attr-dict `:` type($target) </code></pre><p>Unrolls & jams each loop associated with the given handle to have up to the given number of loop body copies per iteration. If the unroll factor is larger than the loop trip count, the latter is used as the unroll factor instead.</p><h4 id=return-modes-16>Return modes <a class=headline-hash href=#return-modes-16>¶</a></h4><p>This operation ignores non-<code>scf.for</code>, non-<code>affine.for</code> ops and drops them in the return. If all the operations referred to by the <code>target</code> operand unroll properly, the transform succeeds. 
Otherwise the transform produces a silenceable failure.</p><p>Does not return handles as the operation may result in the loop being removed after a full unrolling.</p><p>Traits: <code>FunctionalStyleTransformOpTrait</code>, <code>TransformEachOpTrait</code></p><p>Interfaces: <code>MemoryEffectsOpInterface</code>, <code>TransformOpInterface</code></p><h4 id=attributes-35>Attributes: <a class=headline-hash href=#attributes-35>¶</a></h4><table><tr><th>Attribute</th><th>MLIR Type</th><th>Description</th></tr><tr><td><code>factor</code></td><td>::mlir::IntegerAttr</td><td>64-bit signless integer attribute whose value is positive</td></tr></table><h4 id=operands-52>Operands: <a class=headline-hash href=#operands-52>¶</a></h4><table><thead><tr><th style=text-align:center>Operand</th><th>Description</th></tr></thead><tbody><tr><td style=text-align:center><code>target</code></td><td>TransformHandleTypeInterface instance</td></tr></tbody></table><h3 id=transformloopunroll-transformloopunrollop><code>transform.loop.unroll</code> (transform::LoopUnrollOp) <a class=headline-hash href=#transformloopunroll-transformloopunrollop>¶</a></h3><p><em>Unrolls the given loop with the given unroll factor</em></p><p>Syntax:</p><pre tabindex=0><code>operation ::= `transform.loop.unroll` $target attr-dict `:` type($target) </code></pre><p>Unrolls each loop associated with the given handle to have up to the given number of loop body copies per iteration. If the unroll factor is larger than the loop trip count, the latter is used as the unroll factor instead.</p><h4 id=return-modes-17>Return modes <a class=headline-hash href=#return-modes-17>¶</a></h4><p>This operation ignores non-<code>scf.for</code>, non-<code>affine.for</code> ops and drops them in the return. If all the operations referred to by the <code>target</code> operand unroll properly, the transform succeeds. 
Otherwise the transform produces a silenceable failure.</p><p>Does not return handles as the operation may result in the loop being removed after a full unrolling.</p><p>Traits: <code>FunctionalStyleTransformOpTrait</code>, <code>TransformEachOpTrait</code></p><p>Interfaces: <code>MemoryEffectsOpInterface</code>, <code>TransformOpInterface</code></p><h4 id=attributes-36>Attributes: <a class=headline-hash href=#attributes-36>¶</a></h4><table><tr><th>Attribute</th><th>MLIR Type</th><th>Description</th></tr><tr><td><code>factor</code></td><td>::mlir::IntegerAttr</td><td>64-bit signless integer attribute whose value is positive</td></tr></table><h4 id=operands-53>Operands: <a class=headline-hash href=#operands-53>¶</a></h4><table><thead><tr><th style=text-align:center>Operand</th><th>Description</th></tr></thead><tbody><tr><td style=text-align:center><code>target</code></td><td>TransformHandleTypeInterface instance</td></tr></tbody></table><h3 id=transformscftake_assumed_branch-transformtakeassumedbranchop><code>transform.scf.take_assumed_branch</code> (transform::TakeAssumedBranchOp) <a class=headline-hash href=#transformscftake_assumed_branch-transformtakeassumedbranchop>¶</a></h3><p>Syntax:</p><pre tabindex=0><code>operation ::= `transform.scf.take_assumed_branch` $target (`take_else_branch` $take_else_branch^)? attr-dict `:` functional-type(operands, results) </code></pre><p>Given an scf.if conditional, inject user-defined information that it is always safe to execute only the if or else branch.</p><p>This is achieved by just replacing the scf.if by the content of one of its branches.</p><p>This is particularly useful for user-controlled rewriting of conditionals that exist solely to guard against out-of-bounds behavior.</p><p>At the moment, no assume or assert operation is emitted as it is not always desirable. 
In the future, this may be controlled by a dedicated attribute.</p><h4 id=return-modes-18>Return modes <a class=headline-hash href=#return-modes-18>¶</a></h4><p>The transform only consumes its operand and does not produce any result. The transform definitely fails if <code>take_else_branch</code> is specified and the <code>else</code> region is empty.</p><p>Traits: <code>TransformEachOpTrait</code></p><p>Interfaces: <code>MemoryEffectOpInterface</code>, <code>TransformOpInterface</code></p><h4 id=attributes-37>Attributes: <a class=headline-hash href=#attributes-37>¶</a></h4><table><tr><th>Attribute</th><th>MLIR Type</th><th>Description</th></tr><tr><td><code>take_else_branch</code></td><td>::mlir::UnitAttr</td><td>unit attribute</td></tr></table><h4 id=operands-54>Operands: <a class=headline-hash href=#operands-54>¶</a></h4><table><thead><tr><th style=text-align:center>Operand</th><th>Description</th></tr></thead><tbody><tr><td style=text-align:center><code>target</code></td><td>TransformHandleTypeInterface instance</td></tr></tbody></table><h2 id=memref-transform-operations>MemRef Transform Operations <a class=headline-hash href=#memref-transform-operations>¶</a></h2><p><a href=https://github.com/llvm/llvm-project/blob/main/mlir/include/mlir/Dialect/MemRef/TransformOps/MemRefTransformOps.td>source</a></p><h3 id=transformapply_patternsmemrefalloc_to_alloca-transformapplyalloctoallocaop><code>transform.apply_patterns.memref.alloc_to_alloca</code> (transform::ApplyAllocToAllocaOp) <a class=headline-hash href=#transformapply_patternsmemrefalloc_to_alloca-transformapplyalloctoallocaop>¶</a></h3><p>Syntax:</p><pre tabindex=0><code>operation ::= `transform.apply_patterns.memref.alloc_to_alloca` (`size_limit` `(` $size_limit^ `)`)? 
attr-dict </code></pre><p>Collects patterns to rewrite scoped dynamic allocation (<code>alloc</code>/<code>dealloc</code> pairs) into automatic allocation (<code>alloca</code>) in the same scope, for memrefs of static shape.</p><p>The <code>size_limit</code> attribute controls the maximum allocated memory (in bytes, subject to data layout) for which the pattern applies.</p><p>Interfaces: <code>PatternDescriptorOpInterface</code></p><h4 id=attributes-38>Attributes: <a class=headline-hash href=#attributes-38>¶</a></h4><table><tr><th>Attribute</th><th>MLIR Type</th><th>Description</th></tr><tr><td><code>size_limit</code></td><td>::mlir::IntegerAttr</td><td>64-bit signless integer attribute</td></tr></table><h3 id=transformapply_patternsmemrefexpand_ops-transformapplyexpandopspatternsop><code>transform.apply_patterns.memref.expand_ops</code> (transform::ApplyExpandOpsPatternsOp) <a class=headline-hash href=#transformapply_patternsmemrefexpand_ops-transformapplyexpandopspatternsop>¶</a></h3><p>Syntax:</p><pre tabindex=0><code>operation ::= `transform.apply_patterns.memref.expand_ops` attr-dict </code></pre><p>Collects patterns to rewrite ops within the memref dialect.</p><ul><li>Converts <code>atomic_rmw</code> that cannot be lowered to a simple atomic op with AtomicRMWOpLowering pattern, e.g. 
with “minf” or “maxf” attributes, to <code>memref.generic_atomic_rmw</code> with the expanded code.</li><li>Converts <code>memref.reshape</code> that has a target shape of a statically-known size to <code>memref.reinterpret_cast</code>.</li></ul><p>Interfaces: <code>PatternDescriptorOpInterface</code></p><h3 id=transformapply_patternsmemrefexpand_strided_metadata-transformapplyexpandstridedmetadatapatternsop><code>transform.apply_patterns.memref.expand_strided_metadata</code> (transform::ApplyExpandStridedMetadataPatternsOp) <a class=headline-hash href=#transformapply_patternsmemrefexpand_strided_metadata-transformapplyexpandstridedmetadatapatternsop>¶</a></h3><p>Syntax:</p><pre tabindex=0><code>operation ::= `transform.apply_patterns.memref.expand_strided_metadata` attr-dict </code></pre><p>Collects patterns for expanding memref operations that modify the metadata (sizes, offset, strides) of a memref into easier to analyze constructs.</p><p>Interfaces: <code>PatternDescriptorOpInterface</code></p><h3 id=transformapply_patternsmemrefextract_address_computations-transformapplyextractaddresscomputationspatternsop><code>transform.apply_patterns.memref.extract_address_computations</code> (transform::ApplyExtractAddressComputationsPatternsOp) <a class=headline-hash href=#transformapply_patternsmemrefextract_address_computations-transformapplyextractaddresscomputationspatternsop>¶</a></h3><p>Syntax:</p><pre tabindex=0><code>operation ::= `transform.apply_patterns.memref.extract_address_computations` attr-dict </code></pre><p>Collects patterns for extracting address computations from operations with memory accesses such that these memory accesses use only a base pointer.</p><p>For instance,</p><div class=highlight><pre tabindex=0 class=chroma><code class=language-mlir data-lang=mlir><span class=line><span class=cl><span class=kt>memref</span><span class=p>.</span>load <span class=nv>%base</span><span class=p>[</span><span class=nv>%off0</span><span class=p>,</span> <span 
class=p>...]</span> </span></span></code></pre></div><p>Will be rewritten in:</p><div class=highlight><pre tabindex=0 class=chroma><code class=language-mlir data-lang=mlir><span class=line><span class=cl><span class=nv>%new_base</span> <span class=p>=</span> <span class=kt>memref</span><span class=p>.</span>subview <span class=nv>%base</span><span class=p>[</span><span class=nv>%off0</span><span class=p>,...][</span><span class=m>1</span><span class=p>,...][</span><span class=m>1</span><span class=p>,...]</span> </span></span><span class=line><span class=cl><span class=kt>memref</span><span class=p>.</span>load <span class=nv>%new_base</span><span class=p>[</span><span class=nv>%c0</span><span class=p>,...]</span> </span></span></code></pre></div><p>Interfaces: <code>PatternDescriptorOpInterface</code></p><h3 id=transformapply_patternsmemreffold_memref_alias_ops-transformapplyfoldmemrefaliasopspatternsop><code>transform.apply_patterns.memref.fold_memref_alias_ops</code> (transform::ApplyFoldMemrefAliasOpsPatternsOp) <a class=headline-hash href=#transformapply_patternsmemreffold_memref_alias_ops-transformapplyfoldmemrefaliasopspatternsop>¶</a></h3><p>Syntax:</p><pre tabindex=0><code>operation ::= `transform.apply_patterns.memref.fold_memref_alias_ops` attr-dict </code></pre><p>Collects patterns for folding memref aliasing ops (memref.subview) into consumer load/store ops (affine.load, memref.load, nvgpu.ldmatrix, vector.load, vector.transfer_read, affine.store, memref.store, etc.) 
and other ops (e.g., memref.subview).</p><p>Interfaces: <code>PatternDescriptorOpInterface</code></p><h3 id=transformapply_patternsmemrefresolve_ranked_shaped_type_result_dims-transformapplyresolverankedshapedtyperesultdimspatternsop><code>transform.apply_patterns.memref.resolve_ranked_shaped_type_result_dims</code> (transform::ApplyResolveRankedShapedTypeResultDimsPatternsOp) <a class=headline-hash href=#transformapply_patternsmemrefresolve_ranked_shaped_type_result_dims-transformapplyresolverankedshapedtyperesultdimspatternsop>¶</a></h3><p>Syntax:</p><pre tabindex=0><code>operation ::= `transform.apply_patterns.memref.resolve_ranked_shaped_type_result_dims` attr-dict </code></pre><p>Collects patterns that resolve <code>memref.dim</code> operations with values that are defined by operations that implement the <code>ReifyRankedShapedTypeOpInterface</code>, in terms of shapes of its input operands.</p><p>Interfaces: <code>PatternDescriptorOpInterface</code></p><h3 id=transformmemrefalloca_to_global-transformmemrefallocatoglobalop><code>transform.memref.alloca_to_global</code> (transform::MemRefAllocaToGlobalOp) <a class=headline-hash href=#transformmemrefalloca_to_global-transformmemrefallocatoglobalop>¶</a></h3><p>Syntax:</p><pre tabindex=0><code>operation ::= `transform.memref.alloca_to_global` $alloca attr-dict `:` functional-type(operands, results) </code></pre><p>Inserts a new <code>memref.global</code> for each provided <code>memref.alloca</code> into the nearest symbol table (e.g., a <code>builtin.module</code>) and replaces it with a <code>memref.get_global</code>. 
This is useful, for example, for allocations that should reside in the shared memory of a GPU, which have to be declared as globals.</p><h4 id=example-1>Example <a class=headline-hash href=#example-1>¶</a></h4><p>Consider the following transform op:</p><div class=highlight><pre tabindex=0 class=chroma><code class=language-mlir data-lang=mlir><span class=line><span class=cl><span class=nv>%get_global</span><span class=p>,</span> <span class=nv>%global</span> <span class=p>=</span> </span></span><span class=line><span class=cl> transform<span class=p>.</span><span class=kt>memref</span><span class=p>.</span>alloca_to_global <span class=nv>%alloca</span> </span></span><span class=line><span class=cl> <span class=p>:</span> <span class=p>(!</span>transform<span class=p>.</span>op<span class=p><</span><span class=s>"memref.alloca"</span><span class=p>>)</span> </span></span><span class=line><span class=cl> <span class=p>-></span> <span class=p>(!</span>transform<span class=p>.</span>any_op<span class=p>,</span> <span class=p>!</span>transform<span class=p>.</span>any_op<span class=p>)</span> </span></span></code></pre></div><p>and the following input payload:</p><div class=highlight><pre tabindex=0 class=chroma><code class=language-mlir data-lang=mlir><span class=line><span class=cl>module <span class=p>{</span> </span></span><span class=line><span class=cl> <span class=kt>func</span><span class=p>.</span><span class=kt>func</span> <span class=nf>@func</span><span class=p>()</span> <span class=p>{</span> </span></span><span class=line><span class=cl> <span class=nv>%alloca</span> <span class=p>=</span> <span class=kt>memref</span><span class=p>.</span>alloca<span class=p>()</span> <span class=p>:</span> <span class=kt>memref</span><span class=p><</span><span class=m>2x32x</span><span class=k>f32</span><span class=p>></span> </span></span><span class=line><span class=cl> <span class=c>// usages of %alloca... 
</span></span></span><span class=line><span class=cl><span class=c></span> <span class=p>}</span> </span></span><span class=line><span class=cl><span class=p>}</span> </span></span></code></pre></div><p>then applying the transform op to the payload would result in the following output IR:</p><div class=highlight><pre tabindex=0 class=chroma><code class=language-mlir data-lang=mlir><span class=line><span class=cl>module <span class=p>{</span> </span></span><span class=line><span class=cl> <span class=kt>memref</span><span class=p>.</span>global <span class=s>"private"</span> <span class=nf>@alloc</span> <span class=p>:</span> <span class=kt>memref</span><span class=p><</span><span class=m>2x32x</span><span class=k>f32</span><span class=p>></span> </span></span><span class=line><span class=cl> <span class=kt>func</span><span class=p>.</span><span class=kt>func</span> <span class=nf>@func</span><span class=p>()</span> <span class=p>{</span> </span></span><span class=line><span class=cl> <span class=nv>%alloca</span> <span class=p>=</span> <span class=kt>memref</span><span class=p>.</span>get_global <span class=nf>@alloc</span> <span class=p>:</span> <span class=kt>memref</span><span class=p><</span><span class=m>2x32x</span><span class=k>f32</span><span class=p>></span> </span></span><span class=line><span class=cl> <span class=c>// usages of %alloca... </span></span></span><span class=line><span class=cl><span class=c></span> <span class=p>}</span> </span></span><span class=line><span class=cl><span class=p>}</span> </span></span></code></pre></div><h4 id=return-modes-19>Return modes <a class=headline-hash href=#return-modes-19>¶</a></h4><p>Succeeds always. 
The returned handles refer to the <code>memref.get_global</code> and <code>memref.global</code> ops that were inserted by the transformation.</p><p>Interfaces: <code>MemoryEffectOpInterface</code>, <code>TransformOpInterface</code></p><h4 id=operands-55>Operands: <a class=headline-hash href=#operands-55>¶</a></h4><table><thead><tr><th style=text-align:center>Operand</th><th>Description</th></tr></thead><tbody><tr><td style=text-align:center><code>alloca</code></td><td>Transform IR handle to memref.alloca operations</td></tr></tbody></table><h4 id=results-34>Results: <a class=headline-hash href=#results-34>¶</a></h4><table><thead><tr><th style=text-align:center>Result</th><th>Description</th></tr></thead><tbody><tr><td style=text-align:center><code>getGlobal</code></td><td>TransformHandleTypeInterface instance</td></tr><tr><td style=text-align:center><code>global</code></td><td>TransformHandleTypeInterface instance</td></tr></tbody></table><h3 id=transformmemreferase_dead_alloc_and_stores-transformmemreferasedeadallocandstoresop><code>transform.memref.erase_dead_alloc_and_stores</code> (transform::MemRefEraseDeadAllocAndStoresOp) <a class=headline-hash href=#transformmemreferase_dead_alloc_and_stores-transformmemreferasedeadallocandstoresop>¶</a></h3><p>Syntax:</p><pre tabindex=0><code>operation ::= `transform.memref.erase_dead_alloc_and_stores` $target attr-dict `:` functional-type($target, results) </code></pre><p>This applies memory optimizations on memrefs. In particular, it performs store-to-load forwarding, dead-store elimination and dead-alloc elimination.</p><h4 id=return-modes-20>Return modes <a class=headline-hash href=#return-modes-20>¶</a></h4><p>This operation applies a set of memory optimizations to the whole region of the operand.</p><p>The transformation does not consume the target handle. It modifies the payload. 
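</p><p>For illustration, a hypothetical payload snippet that this transform can simplify (names are illustrative; the stored value is forwarded to the load, after which the store and the allocation become dead and are erased):</p><pre tabindex=0><code>%buf = memref.alloc() : memref<4xf32>
memref.store %v, %buf[%c0] : memref<4xf32>
%r = memref.load %buf[%c0] : memref<4xf32>
// After the transform, uses of %r are replaced by %v and the
// alloc, store and load are removed.
</code></pre><p>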
Dead allocations, loads and stores are silently dropped from all mappings.</p><p>Traits: <code>ReportTrackingListenerFailuresOpTrait</code>, <code>TransformEachOpTrait</code></p><p>Interfaces: <code>MemoryEffectOpInterface</code>, <code>TransformOpInterface</code></p><h4 id=operands-56>Operands: <a class=headline-hash href=#operands-56>¶</a></h4><table><thead><tr><th style=text-align:center>Operand</th><th>Description</th></tr></thead><tbody><tr><td style=text-align:center><code>target</code></td><td>TransformHandleTypeInterface instance</td></tr></tbody></table><h3 id=transformmemrefmake_loop_independent-transformmemrefmakeloopindependentop><code>transform.memref.make_loop_independent</code> (transform::MemRefMakeLoopIndependentOp) <a class=headline-hash href=#transformmemrefmake_loop_independent-transformmemrefmakeloopindependentop>¶</a></h3><p>Syntax:</p><pre tabindex=0><code>operation ::= `transform.memref.make_loop_independent` $target attr-dict `:` functional-type($target, $transformed) </code></pre><p>Rewrite the targeted ops such that their index-typed operands no longer depend on any loop induction variable of the <code>num_loops</code> enclosing <code>scf.for</code> loops. I.e., compute an upper bound that is independent of any such loop IV for every tensor dimension. The transformed op could then be hoisted from the <code>num_loops</code> enclosing loops. To preserve the original semantics, a <code>memref.subview</code> is placed inside the loop.</p><p>Currently supported operations are:</p><ul><li>memref.alloca: Replaced with a new memref.alloca with upper bound sizes, followed by a memref.subview.</li></ul><h4 id=return-modes-21>Return modes <a class=headline-hash href=#return-modes-21>¶</a></h4><p>This operation fails if at least one induction variable could not be eliminated. 
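</p><p>As an illustrative sketch (hypothetical payload, simplified types), an alloca whose size depends on the induction variable of an enclosing loop with upper bound %ub could be rewritten along these lines:</p><pre tabindex=0><code>// Before: the allocation size depends on the IV %i.
scf.for %i = %c0 to %ub step %c1 {
  %sz = affine.apply affine_map<(d0) -> (d0 + 1)>(%i)
  %a = memref.alloca(%sz) : memref<?xf32>
  // ...
}
// After: the new allocation uses the loop-independent upper bound %ub,
// and a memref.subview restores the original shape (types elided).
// The alloca no longer depends on %i and could then be hoisted.
scf.for %i = %c0 to %ub step %c1 {
  %a_new = memref.alloca(%ub) : memref<?xf32>
  %sz = affine.apply affine_map<(d0) -> (d0 + 1)>(%i)
  %a = memref.subview %a_new[0] [%sz] [1] // ...
  // ...
}
</code></pre><p>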
In case the targeted op is already independent of induction variables, this transform succeeds and returns the unmodified target op.</p><p>Otherwise, the returned handle points to a subset of the produced ops:</p><ul><li>memref.alloca: The returned handle points to the memref.subview op.</li></ul><p>This transform op consumes the target handle and produces a result handle.</p><p>Traits: <code>FunctionalStyleTransformOpTrait</code>, <code>TransformEachOpTrait</code></p><p>Interfaces: <code>MemoryEffectsOpInterface</code>, <code>TransformOpInterface</code></p><h4 id=attributes-39>Attributes: <a class=headline-hash href=#attributes-39>¶</a></h4><table><tr><th>Attribute</th><th>MLIR Type</th><th>Description</th></tr><tr><td><code>num_loops</code></td><td>::mlir::IntegerAttr</td><td>64-bit signless integer attribute</td></tr></table><h4 id=operands-57>Operands: <a class=headline-hash href=#operands-57>¶</a></h4><table><thead><tr><th style=text-align:center>Operand</th><th>Description</th></tr></thead><tbody><tr><td style=text-align:center><code>target</code></td><td>TransformHandleTypeInterface instance</td></tr></tbody></table><h4 id=results-35>Results: <a class=headline-hash href=#results-35>¶</a></h4><table><thead><tr><th style=text-align:center>Result</th><th>Description</th></tr></thead><tbody><tr><td style=text-align:center><code>transformed</code></td><td>TransformHandleTypeInterface instance</td></tr></tbody></table><h3 id=transformmemrefmultibuffer-transformmemrefmultibufferop><code>transform.memref.multibuffer</code> (transform::MemRefMultiBufferOp) <a class=headline-hash href=#transformmemrefmultibuffer-transformmemrefmultibufferop>¶</a></h3><p><em>Multibuffers an allocation</em></p><p>Syntax:</p><pre tabindex=0><code>operation ::= `transform.memref.multibuffer` $target attr-dict `:` functional-type(operands, results) </code></pre><p>Transformation to do multi-buffering/array expansion to remove dependencies on the temporary allocation between consecutive 
loop iterations. This transform expands the size of an allocation by a given multiplicative factor and fixes up any users of the multibuffered allocation. If <code>skip_analysis</code> is not set, the transformation applies only if it can prove that there is no data being carried across loop iterations.</p><h4 id=return-modes-22>Return modes <a class=headline-hash href=#return-modes-22>¶</a></h4><p>This operation returns the new allocation if multi-buffering succeeds, and failure otherwise.</p><p>Traits: <code>FunctionalStyleTransformOpTrait</code></p><p>Interfaces: <code>MemoryEffectsOpInterface</code>, <code>TransformOpInterface</code></p><h4 id=attributes-40>Attributes: <a class=headline-hash href=#attributes-40>¶</a></h4><table><tr><th>Attribute</th><th>MLIR Type</th><th>Description</th></tr><tr><td><code>factor</code></td><td>::mlir::IntegerAttr</td><td>64-bit signless integer attribute whose value is positive</td></tr><tr><td><code>skip_analysis</code></td><td>::mlir::UnitAttr</td><td>unit attribute</td></tr></table><h4 id=operands-58>Operands: <a class=headline-hash href=#operands-58>¶</a></h4><table><thead><tr><th style=text-align:center>Operand</th><th>Description</th></tr></thead><tbody><tr><td style=text-align:center><code>target</code></td><td>Transform IR handle to memref.alloc operations</td></tr></tbody></table><h4 id=results-36>Results: <a class=headline-hash href=#results-36>¶</a></h4><table><thead><tr><th style=text-align:center>Result</th><th>Description</th></tr></thead><tbody><tr><td style=text-align:center><code>transformed</code></td><td>TransformHandleTypeInterface instance</td></tr></tbody></table><h3 id=transformapply_conversion_patternsmemrefmemref_to_llvm_type_converter-transformmemreftollvmtypeconverterop><code>transform.apply_conversion_patterns.memref.memref_to_llvm_type_converter</code> (transform::MemrefToLLVMTypeConverterOp) <a class=headline-hash 
href=#transformapply_conversion_patternsmemrefmemref_to_llvm_type_converter-transformmemreftollvmtypeconverterop>¶</a></h3><p>Syntax:</p><pre tabindex=0><code>operation ::= `transform.apply_conversion_patterns.memref.memref_to_llvm_type_converter` attr-dict </code></pre><p>This operation provides an “LLVMTypeConverter” that lowers memref types to LLVM types.</p><p>The type converter can be customized as follows:</p><ul><li><code>use_aligned_alloc</code>: Use aligned_alloc in place of malloc for heap allocations.</li><li><code>index_bitwidth</code>: Bitwidth of the index type; “0” indicates the size of a machine word.</li><li><code>use_generic_functions</code>: Use generic allocation and deallocation functions instead of the classic “malloc”, “aligned_alloc” and “free” functions.</li><li><code>use_bare_ptr_call_conv</code>: Replace FuncOp’s MemRef arguments with bare pointers to the MemRef element types.</li><li><code>data_layout</code>: String description (LLVM format) of the data layout that is expected on the produced module.</li></ul><p>Interfaces: <code>TypeConverterBuilderOpInterface</code></p><h4 id=attributes-41>Attributes: <a class=headline-hash href=#attributes-41>¶</a></h4><table><tr><th>Attribute</th><th>MLIR Type</th><th>Description</th></tr><tr><td><code>use_aligned_alloc</code></td><td>::mlir::BoolAttr</td><td>bool attribute</td></tr><tr><td><code>index_bitwidth</code></td><td>::mlir::IntegerAttr</td><td>64-bit signless integer attribute</td></tr><tr><td><code>use_generic_functions</code></td><td>::mlir::BoolAttr</td><td>bool attribute</td></tr><tr><td><code>use_bare_ptr_call_conv</code></td><td>::mlir::BoolAttr</td><td>bool attribute</td></tr><tr><td><code>data_layout</code></td><td>::mlir::StringAttr</td><td>string attribute</td></tr></table><h2 id=pdl-extension-transform-operations>PDL 
(extension) Transform Operations <a class=headline-hash href=#pdl-extension-transform-operations>¶</a></h2><p><a href=https://github.com/llvm/llvm-project/blob/main/mlir/include/mlir/Dialect/Transform/PDLExtension/PDLExtensionOps.td>source</a></p><h3 id=transformpdl_match-transformpdlmatchop><code>transform.pdl_match</code> (transform::PDLMatchOp) <a class=headline-hash href=#transformpdl_match-transformpdlmatchop>¶</a></h3><p><em>Finds ops that match the named PDL pattern</em></p><p>Syntax:</p><pre tabindex=0><code>operation ::= `transform.pdl_match` $pattern_name `in` $root attr-dict `:` functional-type(operands, results) </code></pre><p>Find Payload IR ops nested within the Payload IR op associated with the operand that match the PDL pattern identified by its name. The pattern is expected to be defined in the closest surrounding <code>WithPDLPatternsOp</code>.</p><p>Produces a Transform IR value associated with the list of Payload IR ops that matched the pattern. The order of results in the list is that of Operation::walk; however, clients are advised not to rely on a specific order. 
If the operand is associated with multiple Payload IR ops, finds matching ops nested within each of those and produces a single list containing all of the matched ops.</p><p>The transformation is considered successful regardless of whether some Payload IR ops actually matched the pattern and only fails if the pattern could not be looked up or compiled.</p><p>Interfaces: <code>MemoryEffectOpInterface</code>, <code>TransformOpInterface</code></p><h4 id=attributes-42>Attributes: <a class=headline-hash href=#attributes-42>¶</a></h4><table><tr><th>Attribute</th><th>MLIR Type</th><th>Description</th></tr><tr><td><code>pattern_name</code></td><td>::mlir::SymbolRefAttr</td><td>symbol reference attribute</td></tr></table><h4 id=operands-59>Operands: <a class=headline-hash href=#operands-59>¶</a></h4><table><thead><tr><th style=text-align:center>Operand</th><th>Description</th></tr></thead><tbody><tr><td style=text-align:center><code>root</code></td><td>TransformHandleTypeInterface instance</td></tr></tbody></table><h4 id=results-37>Results: <a class=headline-hash href=#results-37>¶</a></h4><table><thead><tr><th style=text-align:center>Result</th><th>Description</th></tr></thead><tbody><tr><td style=text-align:center><code>matched</code></td><td>TransformHandleTypeInterface instance</td></tr></tbody></table><h3 id=transformwith_pdl_patterns-transformwithpdlpatternsop><code>transform.with_pdl_patterns</code> (transform::WithPDLPatternsOp) <a class=headline-hash href=#transformwith_pdl_patterns-transformwithpdlpatternsop>¶</a></h3><p><em>Contains PDL patterns available for use in transforms</em></p><p>Syntax:</p><pre tabindex=0><code>operation ::= `transform.with_pdl_patterns` ($root^ `:` type($root))? attr-dict-with-keyword regions </code></pre><p>This op contains a set of named PDL patterns that are available for the Transform dialect operations to be used for pattern matching. 
For example, PDLMatchOp can be used to produce a Transform IR value associated with all Payload IR operations that match the pattern as follows:</p><div class=highlight><pre tabindex=0 class=chroma><code class=language-mlir data-lang=mlir><span class=line><span class=cl>transform<span class=p>.</span>with_pdl_patterns <span class=p>{</span> </span></span><span class=line><span class=cl><span class=nl>^bb0</span><span class=p>(</span><span class=nv>%arg0</span><span class=p>:</span> <span class=p>!</span>transform<span class=p>.</span>any_op<span class=p>):</span> </span></span><span class=line><span class=cl> pdl<span class=p>.</span>pattern <span class=nf>@my_pattern</span> <span class=p>:</span> benefit<span class=p>(</span><span class=m>1</span><span class=p>)</span> <span class=p>{</span> </span></span><span class=line><span class=cl> <span class=nv>%0</span> <span class=p>=</span> pdl<span class=p>.</span>operation <span class=c>//... </span></span></span><span class=line><span class=cl><span class=c></span> <span class=c>// Regular PDL goes here. 
</span></span></span><span class=line><span class=cl><span class=c></span> pdl<span class=p>.</span>rewrite <span class=nv>%0</span> with <span class=s>"transform.dialect"</span> </span></span><span class=line><span class=cl> <span class=p>}</span> </span></span><span class=line><span class=cl> </span></span><span class=line><span class=cl> sequence <span class=nv>%arg0</span> failures<span class=p>(</span>propagate<span class=p>)</span> <span class=p>{</span> </span></span><span class=line><span class=cl> <span class=nl>^bb0</span><span class=p>(</span><span class=nv>%arg1</span><span class=p>:</span> <span class=p>!</span>transform<span class=p>.</span>any_op<span class=p>):</span> </span></span><span class=line><span class=cl> <span class=nv>%1</span> <span class=p>=</span> pdl_match <span class=nf>@my_pattern</span> in <span class=nv>%arg1</span> </span></span><span class=line><span class=cl> <span class=c>// Use %1 as handle </span></span></span><span class=line><span class=cl><span class=c></span> <span class=p>}</span> </span></span><span class=line><span class=cl><span class=p>}</span> </span></span></code></pre></div><p>Note that the pattern is expected to finish with a <code>pdl.rewrite</code> terminator that points to the custom rewriter named “transform.dialect”. The rewriter actually does nothing, but the transform application will keep track of the operations that matched the pattern.</p><p>This op is expected to contain <code>pdl.pattern</code> operations and exactly one other Transform dialect operation that gets executed with all patterns available. 
This op is a possible top-level Transform IR op; the argument of its entry block corresponds to either the root op of the payload IR or the ops associated with its operand when provided.</p><p>Traits: <code>NoTerminator</code>, <code>PossibleTopLevelTransformOpTrait</code>, <code>SymbolTable</code></p><p>Interfaces: <code>MemoryEffectOpInterface</code>, <code>OpAsmOpInterface</code>, <code>TransformOpInterface</code></p><h4 id=operands-60>Operands: <a class=headline-hash href=#operands-60>¶</a></h4><table><thead><tr><th style=text-align:center>Operand</th><th>Description</th></tr></thead><tbody><tr><td style=text-align:center><code>root</code></td><td>TransformHandleTypeInterface instance</td></tr></tbody></table><h2 id=structured-linalg-match-operations>Structured (Linalg) Match Operations <a class=headline-hash href=#structured-linalg-match-operations>¶</a></h2><p><a href=https://github.com/llvm/llvm-project/blob/main/mlir/include/mlir/Dialect/Linalg/TransformOps/LinalgMatchOps.td>source</a></p><h3 id=transformmatchstructuredbody-transformmatchstructuredbodyop><code>transform.match.structured.body</code> (transform::MatchStructuredBodyOp) <a class=headline-hash href=#transformmatchstructuredbody-transformmatchstructuredbodyop>¶</a></h3><p><em>Checks if the body of the structured op satisfies some criteria</em></p><p>Syntax:</p><pre tabindex=0><code>operation ::= `transform.match.structured.body` $operand_handle attr-dict `:` type($operand_handle) </code></pre><p>Checks if the body of the structured payload op satisfies one of the following mutually exclusive criteria specified by attributes:</p><ul><li><p><code>reduction_position</code>: the body of the structured payload op implements a reduction of the <code>n</code>-th operand (<code>n</code> is the value of the attribute) using a single combiner operation;</p></li><li><p><code>passthrough</code>: the body of the structured payload op only forwards inputs to the outputs (copy or 
broadcast).</p></li><li><p><code>elementwise</code>: the body of the structured payload op represents an elementwise operation.</p></li><li><p><code>contraction</code>: the body of the structured payload op is a contraction of the form <code><red>(<elem>(bbarg0, bbarg1), bbarg2)</code> where <code><elem></code> and <code><red></code> are binary operations whose names are specified in the attribute and operands can be permuted and optionally forwarded through a chain of unary side effect-free operations.</p></li></ul><p>This op can only appear immediately inside a <code>transform.match.structured</code> op and apply to its first block argument because it assumes the payload to have been already checked for being a single structured op.</p><h4 id=return-modes-23>Return modes <a class=headline-hash href=#return-modes-23>¶</a></h4><p>Succeeds if the operation body satisfies the specified criteria, produces a silenceable failure otherwise. Produces a definite failure if the operand is not associated with a single payload op.</p><p>Traits: <code>SingleOpMatcher</code>, <code>StructuredPredicate</code></p><p>Interfaces: <code>MatchOpInterface</code>, <code>MemoryEffectsOpInterface</code>, <code>TransformOpInterface</code></p><h4 id=attributes-43>Attributes: <a class=headline-hash href=#attributes-43>¶</a></h4><table><tr><th>Attribute</th><th>MLIR Type</th><th>Description</th></tr><tr><td><code>reduction_position</code></td><td>::mlir::IntegerAttr</td><td>64-bit signless integer attribute</td></tr><tr><td><code>passthrough</code></td><td>::mlir::UnitAttr</td><td>unit attribute</td></tr><tr><td><code>elementwise</code></td><td>::mlir::UnitAttr</td><td>unit attribute</td></tr><tr><td><code>contraction</code></td><td>::mlir::ArrayAttr</td><td>string array attribute</td></tr></table><h4 id=operands-61>Operands: <a class=headline-hash href=#operands-61>¶</a></h4><table><thead><tr><th style=text-align:center>Operand</th><th>Description</th></tr></thead><tbody><tr><td 
style=text-align:center><code>operand_handle</code></td><td>TransformHandleTypeInterface instance</td></tr></tbody></table><h3 id=transformmatchstructuredclassify_contraction_dims-transformmatchstructuredclassifycontractiondimsop><code>transform.match.structured.classify_contraction_dims</code> (transform::MatchStructuredClassifyContractionDimsOp) <a class=headline-hash href=#transformmatchstructuredclassify_contraction_dims-transformmatchstructuredclassifycontractiondimsop>¶</a></h3><p><em>Checks if an operation has contraction-like dimensions and returns them</em></p><p>Syntax:</p><pre tabindex=0><code>operation ::= `transform.match.structured.classify_contraction_dims` $operand_handle attr-dict `:` functional-type(operands, results) </code></pre><p>Checks if the structured payload op has contraction-like dimensions as follows:</p><p>C(batch, m, n) += A(batch, m, k) * B(batch, k, n)</p><p>That is:</p><ul><li>‘batch’ are parallel dimensions used in inputs and result;</li><li>‘m’ are parallel dimensions used in the LHS and result;</li><li>‘n’ are parallel dimensions used in the RHS and result;</li><li>‘k’ are reduction dimensions present only in LHS and RHS.</li></ul><p>Note that this doesn’t check the operation in the body.</p><p>This op can only appear immediately inside a <code>transform.match.structured</code> op and apply to its first block argument because it assumes the payload to have been already checked for being a single structured op.</p><h4 id=return-modes-24>Return modes <a class=headline-hash href=#return-modes-24>¶</a></h4><p>Succeeds if the operation has the contraction-like dimensions, produces a silenceable failure otherwise.</p><p>Traits: <code>SingleOpMatcher</code>, <code>StructuredPredicate</code></p><p>Interfaces: <code>MatchOpInterface</code>, <code>MemoryEffectsOpInterface</code>, <code>TransformOpInterface</code></p><h4 id=operands-62>Operands: <a class=headline-hash href=#operands-62>¶</a></h4><table><thead><tr><th 
style=text-align:center>Operand</th><th>Description</th></tr></thead><tbody><tr><td style=text-align:center><code>operand_handle</code></td><td>TransformHandleTypeInterface instance</td></tr></tbody></table><h4 id=results-38>Results: <a class=headline-hash href=#results-38>¶</a></h4><table><thead><tr><th style=text-align:center>Result</th><th>Description</th></tr></thead><tbody><tr><td style=text-align:center><code>batch</code></td><td>TransformParamTypeInterface instance</td></tr><tr><td style=text-align:center><code>m</code></td><td>TransformParamTypeInterface instance</td></tr><tr><td style=text-align:center><code>n</code></td><td>TransformParamTypeInterface instance</td></tr><tr><td style=text-align:center><code>k</code></td><td>TransformParamTypeInterface instance</td></tr></tbody></table><h3 id=transformmatchstructuredclassify_convolution_dims-transformmatchstructuredclassifyconvolutiondimsop><code>transform.match.structured.classify_convolution_dims</code> (transform::MatchStructuredClassifyConvolutionDimsOp) <a class=headline-hash href=#transformmatchstructuredclassify_convolution_dims-transformmatchstructuredclassifyconvolutiondimsop>¶</a></h3><p><em>Checks if an operation has convolution-like dimensions and returns them</em></p><p>Syntax:</p><pre tabindex=0><code>operation ::= `transform.match.structured.classify_convolution_dims` $operand_handle attr-dict `:` functional-type(operands, results) </code></pre><p>Checks if the structured payload op has convolution-like dimensions as follows:</p><p>C(batch, depth, oi, oc) += A(batch, depth, oi, ic) * B(fl, depth, ic, oc)</p><p>That is:</p><ul><li>‘batch’ are parallel dimensions used in the input and result;</li><li>‘output_image’ (‘oi’) are parallel dimensions used in the input and result;</li><li>‘output_channel’ (‘oc’) are parallel dimensions used in the filter and result;</li><li>‘filter_loop’ (‘fl’) are reduction dimensions representing the dimensions of the sliding window;</li><li>‘input_channel’ (‘ic’) 
are reduction dimensions present only in the input and filter;</li><li>‘depth’ are parallel dimensions present in the input, filter, and output.</li></ul><p>Additionally, this will match stride and dilation information for the convolution:</p><ul><li>‘strides’ are the static strides per convolution window dimension;</li><li>‘dilations’ are the static dilations per convolution window dimension.</li></ul><p>Note that this doesn’t check the operation in the body.</p><p>This op can only appear immediately inside a <code>transform.match.structured</code> op and apply to its first block argument because it assumes the payload to have been already checked for being a single structured op.</p><h4 id=return-modes-25>Return modes <a class=headline-hash href=#return-modes-25>¶</a></h4><p>Succeeds if the operation has the convolution-like dimensions, produces a silenceable failure otherwise.</p><p>Traits: <code>SingleOpMatcher</code>, <code>StructuredPredicate</code></p><p>Interfaces: <code>MatchOpInterface</code>, <code>MemoryEffectsOpInterface</code>, <code>TransformOpInterface</code></p><h4 id=operands-63>Operands: <a class=headline-hash href=#operands-63>¶</a></h4><table><thead><tr><th style=text-align:center>Operand</th><th>Description</th></tr></thead><tbody><tr><td style=text-align:center><code>operand_handle</code></td><td>TransformHandleTypeInterface instance</td></tr></tbody></table><h4 id=results-39>Results: <a class=headline-hash href=#results-39>¶</a></h4><table><thead><tr><th style=text-align:center>Result</th><th>Description</th></tr></thead><tbody><tr><td style=text-align:center><code>batch</code></td><td>TransformParamTypeInterface instance</td></tr><tr><td style=text-align:center><code>output_image</code></td><td>TransformParamTypeInterface instance</td></tr><tr><td style=text-align:center><code>output_channel</code></td><td>TransformParamTypeInterface instance</td></tr><tr><td 
style=text-align:center><code>filter_loop</code></td><td>TransformParamTypeInterface instance</td></tr><tr><td style=text-align:center><code>input_channel</code></td><td>TransformParamTypeInterface instance</td></tr><tr><td style=text-align:center><code>depth</code></td><td>TransformParamTypeInterface instance</td></tr><tr><td style=text-align:center><code>strides</code></td><td>TransformParamTypeInterface instance</td></tr><tr><td style=text-align:center><code>dilations</code></td><td>TransformParamTypeInterface instance</td></tr></tbody></table><h3 id=transformmatchstructureddim-transformmatchstructureddimop><code>transform.match.structured.dim</code> (transform::MatchStructuredDimOp) <a class=headline-hash href=#transformmatchstructureddim-transformmatchstructureddimop>¶</a></h3><p><em>Checks if the dimensions of the structured op satisfy some criteria</em></p><p>Syntax:</p><pre tabindex=0><code>operation ::= `transform.match.structured.dim` $operand_handle `[`custom<TransformMatchDims>($raw_dim_list, $is_inverted, $is_all)`]` attr-dict `:` custom<SemiFunctionType>(type($operand_handle), type($result)) </code></pre><p>Checks if the dimensions (loop ranges) of the structured payload op satisfy the criteria specified as attributes. May capture the numeric value of the dimension into a parameter that it returns.</p><p>The following dimension specifications are supported:</p><ul><li><code>all</code>: all dimensions are checked and captured;</li><li>list of integers: the listed dimensions are checked and captured;</li><li><code>except(</code> list of integers <code>)</code>: all dimensions except the specified ones are checked and captured.</li></ul><p>Negative indexes are interpreted by counting values from the last one (similarly to Python). For example, <code>-1</code> means the last dimension and <code>except(-1)</code> means all dimensions but the last. 
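</p><p>For example, a hypothetical matcher (names are illustrative and the exact printed syntax may differ) that checks that all dimensions except the last one are parallel and captures their sizes as an i64 parameter:</p><pre tabindex=0><code>%sizes = transform.match.structured.dim %candidate[except(-1)] {parallel}
    : !transform.any_op -> !transform.param<i64>
</code></pre><p>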
Indexes must be unique, including after interpretation of negative ones.</p><p>Produces a silenceable failure in case of index overflow, including backward counting.</p><p>The following mutually exclusive conditions are available as unit attributes:</p><ul><li><code>parallel</code>: the dimension corresponds to a parallel loop;</li><li><code>reduction</code>: the dimension corresponds to a reduction loop.</li></ul><p>If the result type is specified, associates the parameter with the (static) values of dimensions in the same order as listed and preserving the natural order for <code>all</code> and <code>except</code>. Specifically, if <code>-1, -2</code> are specified, the parameter will be associated with the value of the second-to-last dimension followed by the last dimension. If the dimension is dynamic, the parameter will contain a negative value corresponding to kDynamic in C++.</p><p>This op can only appear immediately inside a <code>transform.match.structured</code> op and apply to its first block argument because it assumes the payload to have been already checked for being a single structured op.</p><h4 id=return-modes-26>Return modes <a class=headline-hash href=#return-modes-26>¶</a></h4><p>Succeeds if the specified dimensions satisfy the specified criteria, produces a silenceable failure otherwise. 
Produces a definite failure if the operand is not associated with a single payload op.</p><p>Traits: <code>SingleOpMatcher</code>, <code>StructuredPredicate</code></p><p>Interfaces: <code>MatchOpInterface</code>, <code>MemoryEffectsOpInterface</code>, <code>TransformOpInterface</code></p><h4 id=attributes-44>Attributes: <a class=headline-hash href=#attributes-44>¶</a></h4><table><tr><th>Attribute</th><th>MLIR Type</th><th>Description</th></tr><tr><td><code>raw_dim_list</code></td><td>::mlir::DenseI64ArrayAttr</td><td>i64 dense array attribute</td></tr><tr><td><code>is_inverted</code></td><td>::mlir::UnitAttr</td><td>unit attribute</td></tr><tr><td><code>is_all</code></td><td>::mlir::UnitAttr</td><td>unit attribute</td></tr><tr><td><code>parallel</code></td><td>::mlir::UnitAttr</td><td>unit attribute</td></tr><tr><td><code>reduction</code></td><td>::mlir::UnitAttr</td><td>unit attribute</td></tr></table><h4 id=operands-64>Operands: <a class=headline-hash href=#operands-64>¶</a></h4><table><thead><tr><th style=text-align:center>Operand</th><th>Description</th></tr></thead><tbody><tr><td style=text-align:center><code>operand_handle</code></td><td>TransformHandleTypeInterface instance</td></tr></tbody></table><h4 id=results-40>Results: <a class=headline-hash href=#results-40>¶</a></h4><table><thead><tr><th style=text-align:center>Result</th><th>Description</th></tr></thead><tbody><tr><td style=text-align:center><code>result</code></td><td>TransformParamTypeInterface instance</td></tr></tbody></table><h3 id=transformmatchstructuredelemental_bitwidth-transformmatchstructuredelementalbitwidthop><code>transform.match.structured.elemental_bitwidth</code> (transform::MatchStructuredElementalBitwidthOp) <a class=headline-hash href=#transformmatchstructuredelemental_bitwidth-transformmatchstructuredelementalbitwidthop>¶</a></h3><p><em>Captures the bitwidth of the value’s elemental type as a parameter</em></p><p>Syntax:</p><pre tabindex=0><code>operation ::= 
`transform.match.structured.elemental_bitwidth` $operand_handle attr-dict `:` functional-type(operands, results) </code></pre><p>Produces a transform dialect parameter associated with the bitwidth of the elemental type of the payload value passed as the operand. This op can only appear immediately inside a <code>transform.match.structured</code> op and apply to its first block argument because it assumes the payload to have been already checked for being a single structured op.</p><h4 id=return-modes-27>Return modes <a class=headline-hash href=#return-modes-27>¶</a></h4><p>Succeeds if the operand is associated with exactly one payload value of <code>ShapedType</code>. Produces a silenceable failure otherwise.</p><p>Traits: <code>SingleValueMatcher</code></p><p>Interfaces: <code>MatchOpInterface</code>, <code>MemoryEffectsOpInterface</code>, <code>TransformOpInterface</code></p><h4 id=operands-65>Operands: <a class=headline-hash href=#operands-65>¶</a></h4><table><thead><tr><th style=text-align:center>Operand</th><th>Description</th></tr></thead><tbody><tr><td style=text-align:center><code>operand_handle</code></td><td>TransformValueHandleTypeInterface instance</td></tr></tbody></table><h4 id=results-41>Results: <a class=headline-hash href=#results-41>¶</a></h4><table><thead><tr><th style=text-align:center>Result</th><th>Description</th></tr></thead><tbody><tr><td style=text-align:center><code>result</code></td><td>TransformParamTypeInterface instance</td></tr></tbody></table><h3 id=transformmatchstructuredinit-transformmatchstructuredinitop><code>transform.match.structured.init</code> (transform::MatchStructuredInitOp) <a class=headline-hash href=#transformmatchstructuredinit-transformmatchstructuredinitop>¶</a></h3><p><em>Captures init operand(s) of a structured operation</em></p><p>Syntax:</p><pre tabindex=0><code>operation ::= `transform.match.structured.init` $operand_handle `[`custom<TransformMatchDims>($raw_position_list, $is_inverted, $is_all)`]` attr-dict 
`:` custom<SemiFunctionType>(type($operand_handle), type($result)) </code></pre><p>Produces a transform dialect value depending on the result type:</p><ul><li>If the result type is a value handle, it will be associated with the init operand(s) of the payload operation associated with the operand handle.</li><li>If the result type is an operation handle, it will be associated with the operation defining the init operand(s) of the payload operation associated with the operand handle.</li><li>If the result type is an affine map parameter type, it will be associated with the indexing map that corresponds to the init operand(s) of the payload operation associated with the operand handle.</li></ul><p>For example, given the following operation:</p><div class=highlight><pre tabindex=0 class=chroma><code class=language-mlir data-lang=mlir><span class=line><span class=cl><span class=nv>%arg3</span> <span class=p>=</span> linalg<span class=p>.</span>fill </span></span><span class=line><span class=cl>linalg<span class=p>.</span>matmul ins<span class=p>(</span><span class=nv>%arg1</span><span class=p>,</span> <span class=nv>%arg2</span> <span class=p>:</span> <span class=p>...)</span> outs<span class=p>(</span><span class=nv>%arg3</span> <span class=p>:</span> <span class=p>...)</span> </span></span></code></pre></div><p>in case of a successful match for init operand 0 this operation will return, for each of the respective cases above:</p><ul><li>A handle to <code>%arg3</code> if the result is a value handle.</li><li>A handle to <code>linalg.fill</code> if the result is an operation handle.</li><li>A parameter containing the result map of the matrix multiplication, i.e. 
<code>affine_map<(d0, d1, d2) -> (d0, d1)></code> if the result is an affine map parameter.</li></ul><p>The match succeeds if the conditions specified as attributes succeed.</p><p>The following init specifications are supported:</p><ul><li><code>all</code>: all inits are checked and captured;</li><li>list of integers: the listed inits are checked and captured;</li><li><code>except(</code> list of integers <code>)</code>: all inits except the specified ones are checked and captured.</li></ul><p>Negative indexes are interpreted by counting values from the last one (similarly to Python). For example, <code>-1</code> means the last init and <code>except(-1)</code> means all inits but the last. Indexes must be unique, including after interpretation of negative ones.</p><p>Produces a silenceable failure in case of index overflow, including backward counting.</p><p>This op can only appear immediately inside a <code>transform.match.structured</code> op and apply to its first block argument because it assumes the payload to have been already checked for being a single structured op.</p><h4 id=return-modes-28>Return modes <a class=headline-hash href=#return-modes-28>¶</a></h4><p>Succeeds if all init(outs) indexes are in bounds, produces a silenceable failure otherwise. 
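</p><p>For illustration, capturing the init operand as a value handle, or the operation defining it as an operation handle, could be written as follows (a sketch; SSA value names are illustrative and the exact types may differ):</p><pre tabindex=0><code>%out = transform.match.structured.init %candidate[0] : (!transform.any_op) -> !transform.any_value
%fill = transform.match.structured.init %candidate[0] : (!transform.any_op) -> !transform.any_op
</code></pre><p>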
Additionally, when the result is an operation handle, produces a silenceable failure if the init(outs) specification defines more than one init(outs) or if the operand is not an operation result.</p><p>Traits: <code>SingleOpMatcher</code>, <code>StructuredPredicate</code></p><p>Interfaces: <code>MatchOpInterface</code>, <code>MemoryEffectsOpInterface</code>, <code>TransformOpInterface</code></p><h4 id=attributes-45>Attributes: <a class=headline-hash href=#attributes-45>¶</a></h4><table><tr><th>Attribute</th><th>MLIR Type</th><th>Description</th></tr><tr><td><code>raw_position_list</code></td><td>::mlir::DenseI64ArrayAttr</td><td>i64 dense array attribute</td></tr><tr><td><code>is_inverted</code></td><td>::mlir::UnitAttr</td><td>unit attribute</td></tr><tr><td><code>is_all</code></td><td>::mlir::UnitAttr</td><td>unit attribute</td></tr><tr><td><code>permutation</code></td><td>::mlir::UnitAttr</td><td>unit attribute</td></tr><tr><td><code>projected_permutation</code></td><td>::mlir::UnitAttr</td><td>unit attribute</td></tr></table><h4 id=operands-66>Operands: <a class=headline-hash href=#operands-66>¶</a></h4><table><thead><tr><th style=text-align:center>Operand</th><th>Description</th></tr></thead><tbody><tr><td style=text-align:center><code>operand_handle</code></td><td>TransformHandleTypeInterface instance</td></tr></tbody></table><h4 id=results-42>Results: <a class=headline-hash href=#results-42>¶</a></h4><table><thead><tr><th style=text-align:center>Result</th><th>Description</th></tr></thead><tbody><tr><td style=text-align:center><code>result</code></td><td>transform operation or value handle or</td></tr></tbody></table><h3 id=transformmatchstructuredinput-transformmatchstructuredinputop><code>transform.match.structured.input</code> (transform::MatchStructuredInputOp) <a class=headline-hash href=#transformmatchstructuredinput-transformmatchstructuredinputop>¶</a></h3><p><em>Captures input operand(s) of a structured operation</em></p><p>Syntax:</p><pre 
tabindex=0><code>operation ::= `transform.match.structured.input` $operand_handle `[`custom<TransformMatchDims>($raw_position_list, $is_inverted, $is_all)`]` attr-dict `:` custom<SemiFunctionType>(type($operand_handle), type($result)) </code></pre><p>Produces a transform dialect value depending on the result type:</p><ul><li>If the result type is a value handle, it will be associated with the input operand(s) of the payload operation associated with the operand handle.</li><li>If the result type is an operation handle, it will be associated with the operation defining the input operand(s) of the payload operation associated with the operand handle.</li><li>If the result type is an affine map parameter type, it will be associated with the indexing map that corresponds to the input operand(s) of the payload operation associated with the operand handle.</li></ul><p>For example, given the following operation:</p><div class=highlight><pre tabindex=0 class=chroma><code class=language-mlir data-lang=mlir><span class=line><span class=cl><span class=nv>%arg1</span> <span class=p>=</span> some<span class=p>.</span>op </span></span><span class=line><span class=cl>linalg<span class=p>.</span>matmul ins<span class=p>(</span><span class=nv>%arg1</span><span class=p>,</span> <span class=nv>%arg2</span> <span class=p>:</span> <span class=p>...)</span> outs<span class=p>(</span><span class=nv>%arg3</span> <span class=p>:</span> <span class=p>...)</span> </span></span></code></pre></div><p>in case of a successful match for operand 0 this operation will return, for each of the respective cases above:</p><ul><li>A handle to <code>%arg1</code> if the result is a value handle.</li><li>A handle to <code>some.op</code> if the result is an operation handle.</li><li>A parameter containing the LHS map of the matrix multiplication, i.e. 
<code>affine_map<(d0, d1, d2) -> (d0, d2)></code> if the result is an affine map parameter.</li></ul><p>The match succeeds if the conditions specified as attributes succeed.</p><p>The following input specifications are supported:</p><ul><li><code>all</code>: all inputs are checked and captured;</li><li>list of integers: the listed inputs are checked and captured;</li><li><code>except(</code> list of integers <code>)</code>: all inputs except the specified ones are checked and captured.</li></ul><p>Negative indexes are interpreted by counting values from the last one (similarly to Python). For example, <code>-1</code> means the last input and <code>except(-1)</code> means all inputs but the last. Indexes must be unique, including after interpretation of negative ones.</p><p>Produces a silenceable failure in case of index overflow, including backward counting.</p><p>This op can only appear immediately inside a <code>transform.match.structured</code> op and apply to its first block argument because it assumes the payload to have been already checked for being a single structured op.</p><h4 id=return-modes-29>Return modes <a class=headline-hash href=#return-modes-29>¶</a></h4><p>Succeeds if all input indexes are in bounds, produces a silenceable failure otherwise. 
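</p><p>For illustration, capturing the first input operand as a value handle, or the operation defining the last input as an operation handle, could be written as follows (a sketch; SSA value names are illustrative and the exact types may differ):</p><pre tabindex=0><code>%lhs = transform.match.structured.input %candidate[0] : (!transform.any_op) -> !transform.any_value
%producer = transform.match.structured.input %candidate[-1] : (!transform.any_op) -> !transform.any_op
</code></pre><p>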
Additionally, when the result is an operation handle, produces a silenceable failure if the input specification defines more than one input or if the operand is not an operation result.</p><p>Traits: <code>SingleOpMatcher</code>, <code>StructuredPredicate</code></p><p>Interfaces: <code>MatchOpInterface</code>, <code>MemoryEffectsOpInterface</code>, <code>TransformOpInterface</code></p><h4 id=attributes-46>Attributes: <a class=headline-hash href=#attributes-46>¶</a></h4><table><tr><th>Attribute</th><th>MLIR Type</th><th>Description</th></tr><tr><td><code>raw_position_list</code></td><td>::mlir::DenseI64ArrayAttr</td><td>i64 dense array attribute</td></tr><tr><td><code>is_inverted</code></td><td>::mlir::UnitAttr</td><td>unit attribute</td></tr><tr><td><code>is_all</code></td><td>::mlir::UnitAttr</td><td>unit attribute</td></tr><tr><td><code>permutation</code></td><td>::mlir::UnitAttr</td><td>unit attribute</td></tr><tr><td><code>projected_permutation</code></td><td>::mlir::UnitAttr</td><td>unit attribute</td></tr></table><h4 id=operands-67>Operands: <a class=headline-hash href=#operands-67>¶</a></h4><table><thead><tr><th style=text-align:center>Operand</th><th>Description</th></tr></thead><tbody><tr><td style=text-align:center><code>operand_handle</code></td><td>TransformHandleTypeInterface instance</td></tr></tbody></table><h4 id=results-43>Results: <a class=headline-hash href=#results-43>¶</a></h4><table><thead><tr><th style=text-align:center>Result</th><th>Description</th></tr></thead><tbody><tr><td style=text-align:center><code>result</code></td><td>transform operation or value handle or</td></tr></tbody></table><h3 id=transformmatchstructurednum_inits-transformmatchstructurednuminitsop><code>transform.match.structured.num_inits</code> (transform::MatchStructuredNumInitsOp) <a class=headline-hash href=#transformmatchstructurednum_inits-transformmatchstructurednuminitsop>¶</a></h3><p><em>Captures the number of init(outs) operands of a structured operation as 
parameter</em></p><p>Syntax:</p><pre tabindex=0><code>operation ::= `transform.match.structured.num_inits` $operand_handle attr-dict `:` functional-type(operands, results) </code></pre><p>Produces a transform dialect parameter value associated with an integer attribute containing the number of init(outs) operands of the payload operation associated with the operand handle.</p><p>This op can only appear immediately inside a <code>transform.match.structured</code> op and apply to its first block argument because it assumes the payload to have been already checked for being a single structured op.</p><h4 id=return-modes-30>Return modes <a class=headline-hash href=#return-modes-30>¶</a></h4><p>Succeeds if the operand is associated with exactly one structured payload operation. Produces a silenceable failure otherwise.</p><p>Traits: <code>SingleOpMatcher</code>, <code>StructuredPredicate</code></p><p>Interfaces: <code>MatchOpInterface</code>, <code>MemoryEffectsOpInterface</code>, <code>TransformOpInterface</code></p><h4 id=operands-68>Operands: <a class=headline-hash href=#operands-68>¶</a></h4><table><thead><tr><th style=text-align:center>Operand</th><th>Description</th></tr></thead><tbody><tr><td style=text-align:center><code>operand_handle</code></td><td>TransformHandleTypeInterface instance</td></tr></tbody></table><h4 id=results-44>Results: <a class=headline-hash href=#results-44>¶</a></h4><table><thead><tr><th style=text-align:center>Result</th><th>Description</th></tr></thead><tbody><tr><td style=text-align:center><code>result</code></td><td>TransformParamTypeInterface instance</td></tr></tbody></table><h3 id=transformmatchstructurednum_inputs-transformmatchstructurednuminputsop><code>transform.match.structured.num_inputs</code> (transform::MatchStructuredNumInputsOp) <a class=headline-hash href=#transformmatchstructurednum_inputs-transformmatchstructurednuminputsop>¶</a></h3><p><em>Captures the number of input operands of a structured operation as 
parameter</em></p><p>Syntax:</p><pre tabindex=0><code>operation ::= `transform.match.structured.num_inputs` $operand_handle attr-dict `:` functional-type(operands, results) </code></pre><p>Produces a transform dialect parameter value associated with an integer attribute containing the number of input operands of the payload operation associated with the operand handle.</p><p>This op can only appear immediately inside a <code>transform.match.structured</code> op and apply to its first block argument because it assumes the payload to have been already checked for being a single structured op.</p><h4 id=return-modes-31>Return modes <a class=headline-hash href=#return-modes-31>¶</a></h4><p>Succeeds if the operand is associated with exactly one structured payload operation. Produces a silenceable failure otherwise.</p><p>Traits: <code>SingleOpMatcher</code>, <code>StructuredPredicate</code></p><p>Interfaces: <code>MatchOpInterface</code>, <code>MemoryEffectsOpInterface</code>, <code>TransformOpInterface</code></p><h4 id=operands-69>Operands: <a class=headline-hash href=#operands-69>¶</a></h4><table><thead><tr><th style=text-align:center>Operand</th><th>Description</th></tr></thead><tbody><tr><td style=text-align:center><code>operand_handle</code></td><td>TransformHandleTypeInterface instance</td></tr></tbody></table><h4 id=results-45>Results: <a class=headline-hash href=#results-45>¶</a></h4><table><thead><tr><th style=text-align:center>Result</th><th>Description</th></tr></thead><tbody><tr><td style=text-align:center><code>result</code></td><td>TransformParamTypeInterface instance</td></tr></tbody></table><h3 id=transformmatchstructured-transformmatchstructuredop><code>transform.match.structured</code> (transform::MatchStructuredOp) <a class=headline-hash href=#transformmatchstructured-transformmatchstructuredop>¶</a></h3><p><em>Matches a structured (linalg) operation with additional conditions</em></p><p>Syntax:</p><pre tabindex=0><code>operation ::= 
`transform.match.structured` (`failures` `(` $failure_propagation_mode^ `)`)?$current `:` custom<SemiFunctionType>(type($current), type($outputs))attr-dict-with-keyword regions </code></pre><p>Checks if the payload operation associated with the operand handle is a structured operation, that is, an operation that implements <code>LinalgOpInterface</code>, and that all conditions listed in the body of this operation are satisfied. Produces a silenceable failure if the payload operation is not structured.</p><p>The transform operations nested in the body region are applied one by one. If any of them produces a failure, silenceable or definite, the following operations are not applied. If the failure propagation mode is “propagate”, silenceable failures are forwarded as the result of this operation. If it is “suppress”, they are ignored and this operation immediately succeeds. Definite failures are always propagated immediately.</p><p>In case of success, the transform values produced by this operation are associated with the same payload as the operands of the block terminator. If any of the nested operations produced a silenceable failure, regardless of the failure propagation mode, the transform values produced by this operation that correspond to the already defined terminator operands are associated with the same payload as the already defined terminator operands. Other values produced by this operation are associated with empty payloads.</p><p>If the failure propagation mode is not specified, it is considered “propagate” by default. The “suppress” mode can be used to specify optional matches.</p><h4 id=return-modes-32>Return modes <a class=headline-hash href=#return-modes-32>¶</a></h4><p>This operation only reads all operand handles and produces all resulting handles. It succeeds in “propagate” mode if the payload operation is a structured operation and if all the nested operations succeed. 
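</p><p>For illustration, a matcher that accepts only structured ops whose dimensions are all parallel and forwards the matched op on success could be sketched as follows (SSA value names are illustrative and the exact types may differ):</p><pre tabindex=0><code>%matched = transform.match.structured failures(propagate) %root : (!transform.any_op) -> !transform.any_op {
^bb0(%struct: !transform.any_op):
  transform.match.structured.dim %struct[all] {parallel} : !transform.any_op
  transform.match.structured.yield %struct : !transform.any_op
}
</code></pre><p>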
It succeeds in “suppress” mode as long as the operand handle is associated with exactly one payload operation. It produces a definite failure when the handle is not associated with exactly one payload operation.</p><p>Traits: <code>SingleBlockImplicitTerminator<::mlir::transform::MatchStructuredYieldOp></code>, <code>SingleBlock</code>, <code>SingleOpMatcher</code></p><p>Interfaces: <code>MatchOpInterface</code>, <code>MemoryEffectOpInterface</code>, <code>TransformOpInterface</code></p><h4 id=attributes-47>Attributes: <a class=headline-hash href=#attributes-47>¶</a></h4><table><tr><th>Attribute</th><th>MLIR Type</th><th>Description</th></tr><tr><td><code>failure_propagation_mode</code></td><td>::mlir::transform::FailurePropagationModeAttr</td><td><details><summary>Silenceable error propagation policy</summary><p>Enum cases:</p><ul><li>propagate (<code>Propagate</code>)</li><li>suppress (<code>Suppress</code>)</li></ul></details></td></tr></table><h4 id=operands-70>Operands: <a class=headline-hash href=#operands-70>¶</a></h4><table><thead><tr><th style=text-align:center>Operand</th><th>Description</th></tr></thead><tbody><tr><td style=text-align:center><code>current</code></td><td>TransformHandleTypeInterface instance</td></tr></tbody></table><h4 id=results-46>Results: <a class=headline-hash href=#results-46>¶</a></h4><table><thead><tr><th style=text-align:center>Result</th><th>Description</th></tr></thead><tbody><tr><td style=text-align:center><code>outputs</code></td><td>variadic of any transform handle or parameter</td></tr></tbody></table><h3 id=transformmatchstructuredrank-transformmatchstructuredrankop><code>transform.match.structured.rank</code> (transform::MatchStructuredRankOp) <a class=headline-hash href=#transformmatchstructuredrank-transformmatchstructuredrankop>¶</a></h3><p><em>Captures the rank of a structured operation as parameter</em></p><p>Syntax:</p><pre tabindex=0><code>operation ::= `transform.match.structured.rank` $operand_handle attr-dict 
`:` custom<SemiFunctionType>(type($operand_handle), type($rank), "false") </code></pre><p>Produces a transform dialect parameter value associated with an integer attribute containing the rank of the structured payload operation associated with the operand handle.</p><p>This op can only appear immediately inside a <code>transform.match.structured</code> op and apply to its first block argument because it assumes the payload to have been already checked for being a single structured op.</p><h4 id=return-modes-33>Return modes <a class=headline-hash href=#return-modes-33>¶</a></h4><p>Succeeds if the operand is associated with exactly one structured payload operation. Produces a silenceable failure otherwise.</p><p>Traits: <code>SingleOpMatcher</code>, <code>StructuredPredicate</code></p><p>Interfaces: <code>MatchOpInterface</code>, <code>MemoryEffectsOpInterface</code>, <code>TransformOpInterface</code></p><h4 id=operands-71>Operands: <a class=headline-hash href=#operands-71>¶</a></h4><table><thead><tr><th style=text-align:center>Operand</th><th>Description</th></tr></thead><tbody><tr><td style=text-align:center><code>operand_handle</code></td><td>TransformHandleTypeInterface instance</td></tr></tbody></table><h4 id=results-47>Results: <a class=headline-hash href=#results-47>¶</a></h4><table><thead><tr><th style=text-align:center>Result</th><th>Description</th></tr></thead><tbody><tr><td style=text-align:center><code>rank</code></td><td>TransformParamTypeInterface instance</td></tr></tbody></table><h3 id=transformmatchstructuredresult-transformmatchstructuredresultop><code>transform.match.structured.result</code> (transform::MatchStructuredResultOp) <a class=headline-hash href=#transformmatchstructuredresult-transformmatchstructuredresultop>¶</a></h3><p><em>Captures the result of a structured payload operation in an op or value handle</em></p><p>Syntax:</p><pre tabindex=0><code>operation ::= `transform.match.structured.result` $operand_handle `[` $position `]` (`any` 
$any^)? (`single` $single^)?attr-dict `:` functional-type(operands, results) </code></pre><p>Produces a transform dialect value handle associated with the payload value defined as a result of the payload operation associated with the operand handle, or an operation handle to an operation using the produced result with additional constraints specified by the attributes as follows.</p><ul><li>If <code>any</code> is specified, binds the resulting handle to any operation using the result and succeeds.</li><li>If <code>single</code> is specified, binds the resulting handle to the only operation using the result or fails if there is more than one (or no) such operation.</li></ul><p>The number of the result is specified as <code>position</code> attribute. It may take positive and negative values. Negative values are interpreted as counting results from backwards, e.g., <code>-1</code> means the last result and <code>-2</code> means the second-to-last result. In any case, the position must be in bounds for the given payload operation. A silenceable failure is produced for out-of-bounds positions.</p><p>This op can only appear immediately inside a <code>transform.match.structured</code> op and apply to its first block argument because it assumes the payload to have been already checked for being a single structured op.</p><h4 id=return-modes-34>Return modes <a class=headline-hash href=#return-modes-34>¶</a></h4><p>Succeeds if the position is in bounds and if the user operation could be found when requested. 
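</p><p>For illustration, capturing a result value, or the single operation using it, could be written as follows (a sketch; SSA value names are illustrative and the exact types may differ):</p><pre tabindex=0><code>%value = transform.match.structured.result %candidate[0] : (!transform.any_op) -> !transform.any_value
%user = transform.match.structured.result %candidate[-1] single : (!transform.any_op) -> !transform.any_op
</code></pre><p>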
Produces a silenceable failure otherwise.</p><p>Traits: <code>SingleOpMatcher</code>, <code>StructuredPredicate</code></p><p>Interfaces: <code>MatchOpInterface</code>, <code>MemoryEffectsOpInterface</code>, <code>TransformOpInterface</code></p><h4 id=attributes-48>Attributes: <a class=headline-hash href=#attributes-48>¶</a></h4><table><tr><th>Attribute</th><th>MLIR Type</th><th>Description</th></tr><tr><td><code>position</code></td><td>::mlir::IntegerAttr</td><td>64-bit signless integer attribute</td></tr><tr><td><code>any</code></td><td>::mlir::UnitAttr</td><td>unit attribute</td></tr><tr><td><code>single</code></td><td>::mlir::UnitAttr</td><td>unit attribute</td></tr></table><h4 id=operands-72>Operands: <a class=headline-hash href=#operands-72>¶</a></h4><table><thead><tr><th style=text-align:center>Operand</th><th>Description</th></tr></thead><tbody><tr><td style=text-align:center><code>operand_handle</code></td><td>TransformHandleTypeInterface instance</td></tr></tbody></table><h4 id=results-48>Results: <a class=headline-hash href=#results-48>¶</a></h4><table><thead><tr><th style=text-align:center>Result</th><th>Description</th></tr></thead><tbody><tr><td style=text-align:center><code>result</code></td><td>transform operation or value handle</td></tr></tbody></table><h3 id=transformmatchstructuredyield-transformmatchstructuredyieldop><code>transform.match.structured.yield</code> (transform::MatchStructuredYieldOp) <a class=headline-hash href=#transformmatchstructuredyield-transformmatchstructuredyieldop>¶</a></h3><p><em>Terminator for transform.match.structured blocks</em></p><p>Syntax:</p><pre tabindex=0><code>operation ::= `transform.match.structured.yield` $handles attr-dict (`:` type($handles)^)? </code></pre><p>Forwards the payload association from the operands to the results of the parent op. 
Always succeeds.</p><p>Traits: <code>Terminator</code></p><p>Interfaces: <code>MemoryEffectOpInterface</code></p><h4 id=operands-73>Operands: <a class=headline-hash href=#operands-73>¶</a></h4><table><thead><tr><th style=text-align:center>Operand</th><th>Description</th></tr></thead><tbody><tr><td style=text-align:center><code>handles</code></td><td>variadic of any transform handle or parameter</td></tr></tbody></table><h2 id=structured-linalg-transform-operations>Structured (Linalg) Transform Operations <a class=headline-hash href=#structured-linalg-transform-operations>¶</a></h2><p><a href=https://github.com/llvm/llvm-project/blob/main/mlir/include/mlir/Dialect/Linalg/TransformOps/LinalgTransformOps.td>source</a></p><h3 id=transformapply_patternslinalgdecompose_pack_unpack-transformapplydecomposetensorpackunpackpatternsop><code>transform.apply_patterns.linalg.decompose_pack_unpack</code> (transform::ApplyDecomposeTensorPackUnpackPatternsOp) <a class=headline-hash href=#transformapply_patternslinalgdecompose_pack_unpack-transformapplydecomposetensorpackunpackpatternsop>¶</a></h3><p>Syntax:</p><pre tabindex=0><code>operation ::= `transform.apply_patterns.linalg.decompose_pack_unpack` attr-dict </code></pre><p>Collects patterns to decompose tensor.pack and tensor.unpack into simpler ops, e.g. tensor::PadOp and linalg::TransposeOp. 
Requires all outer dims to be unit.</p><p>Interfaces: <code>PatternDescriptorOpInterface</code></p><h3 id=transformapply_patternslinalgerase_unnecessary_inputs-transformapplyeraseunnecessaryinputspatternsop><code>transform.apply_patterns.linalg.erase_unnecessary_inputs</code> (transform::ApplyEraseUnnecessaryInputsPatternsOp) <a class=headline-hash href=#transformapply_patternslinalgerase_unnecessary_inputs-transformapplyeraseunnecessaryinputspatternsop>¶</a></h3><p>Syntax:</p><pre tabindex=0><code>operation ::= `transform.apply_patterns.linalg.erase_unnecessary_inputs` attr-dict </code></pre><p>Collects patterns that promote inputs to outputs and remove unused inputs of <code>linalg.generic</code> ops.</p><p>Interfaces: <code>PatternDescriptorOpInterface</code></p><h3 id=transformapply_patternslinalgfold_add_into_dest-transformapplyfoldaddintodestpatternsop><code>transform.apply_patterns.linalg.fold_add_into_dest</code> (transform::ApplyFoldAddIntoDestPatternsOp) <a class=headline-hash href=#transformapply_patternslinalgfold_add_into_dest-transformapplyfoldaddintodestpatternsop>¶</a></h3><p>Syntax:</p><pre tabindex=0><code>operation ::= `transform.apply_patterns.linalg.fold_add_into_dest` attr-dict </code></pre><p>Collects patterns to replace linalg.add when destination passing suffices for achieving the sum.</p><p>Interfaces: <code>PatternDescriptorOpInterface</code></p><h3 id=transformapply_patternslinalgfold_unit_extent_dims_via_reshapes-transformapplyfoldunitextentdimsviareshapespatternsop><code>transform.apply_patterns.linalg.fold_unit_extent_dims_via_reshapes</code> (transform::ApplyFoldUnitExtentDimsViaReshapesPatternsOp) <a class=headline-hash href=#transformapply_patternslinalgfold_unit_extent_dims_via_reshapes-transformapplyfoldunitextentdimsviareshapespatternsop>¶</a></h3><p>Syntax:</p><pre tabindex=0><code>operation ::= `transform.apply_patterns.linalg.fold_unit_extent_dims_via_reshapes` attr-dict </code></pre><p>Collects patterns to fold unit-extent 
dimensions in operands/results of linalg ops on tensors via reassociative reshape ops.</p><p>Interfaces: <code>PatternDescriptorOpInterface</code></p><h3 id=transformapply_patternslinalgfold_unit_extent_dims_via_slices-transformapplyfoldunitextentdimsviaslicespatternsop><code>transform.apply_patterns.linalg.fold_unit_extent_dims_via_slices</code> (transform::ApplyFoldUnitExtentDimsViaSlicesPatternsOp) <a class=headline-hash href=#transformapply_patternslinalgfold_unit_extent_dims_via_slices-transformapplyfoldunitextentdimsviaslicespatternsop>¶</a></h3><p>Syntax:</p><pre tabindex=0><code>operation ::= `transform.apply_patterns.linalg.fold_unit_extent_dims_via_slices` attr-dict </code></pre><p>Collects patterns to fold unit-extent dimensions in operands/results of linalg ops on tensors via rank-reducing slices.</p><p>Interfaces: <code>PatternDescriptorOpInterface</code></p><h3 id=transformapply_patternslinalgpad_vectorization-transformapplypadvectorizationpatternsop><code>transform.apply_patterns.linalg.pad_vectorization</code> (transform::ApplyPadVectorizationPatternsOp) <a class=headline-hash href=#transformapply_patternslinalgpad_vectorization-transformapplypadvectorizationpatternsop>¶</a></h3><p>Syntax:</p><pre tabindex=0><code>operation ::= `transform.apply_patterns.linalg.pad_vectorization` attr-dict </code></pre><p>Apply patterns that vectorize tensor.pad.</p><p>These patterns rewrite tensor.pad Ops using vector.transfer_read and vector.transfer_write operations. 
This is done either by:</p><ol><li>Folding tensor.pad with an existing vector.transfer_read / vector.transfer_write Op (generated prior to running these patterns).</li><li>Rewriting it (when matched together with a tensor.insert_slice consumer Op) as a vector.transfer_read + vector.transfer_write pair.</li></ol><p>In both cases, these patterns look at producers and consumers for the matched tensor.pad Op to find opportunities for vectorization.</p><p>Interfaces: <code>PatternDescriptorOpInterface</code></p><h3 id=transformapply_patternslinalgtiling_canonicalization-transformapplytilingcanonicalizationpatternsop><code>transform.apply_patterns.linalg.tiling_canonicalization</code> (transform::ApplyTilingCanonicalizationPatternsOp) <a class=headline-hash href=#transformapply_patternslinalgtiling_canonicalization-transformapplytilingcanonicalizationpatternsop>¶</a></h3><p>Syntax:</p><pre tabindex=0><code>operation ::= `transform.apply_patterns.linalg.tiling_canonicalization` attr-dict </code></pre><p>Collects canonicalization patterns relevant to apply after tiling patterns.</p><p>Interfaces: <code>PatternDescriptorOpInterface</code></p><h3 id=transformstructuredbufferize_to_allocation-transformbufferizetoallocationop><code>transform.structured.bufferize_to_allocation</code> (transform::BufferizeToAllocationOp) <a class=headline-hash href=#transformstructuredbufferize_to_allocation-transformbufferizetoallocationop>¶</a></h3><p>Syntax:</p><pre tabindex=0><code>operation ::= `transform.structured.bufferize_to_allocation` $target attr-dict `:` type($target) </code></pre><p>This transform bufferizes the targeted operation and materializes the result in a new allocation. It replaces all original uses of the target result with the newly allocated buffer, wrapped in a <code>bufferization.to_tensor</code> op. It returns a handle to the newly allocated buffer. 
Furthermore, it returns a handle that is mapped to all newly created ops.</p><p>Only bufferizable ops that bufferize to a memory write or have an aliasing OpOperand (and do not themselves bufferize to an allocation) are supported. They are bufferized using their BufferizableOpInterface implementation. E.g.:</p><pre tabindex=0><code>%0 = tensor.insert %f into %dest[%pos] : tensor<10xf32>
</code></pre><p>Is bufferized to:</p><pre tabindex=0><code>%alloc = memref.alloc() : memref<10xf32>
bufferization.materialize_in_destination %dest in %alloc
memref.store %f, %alloc[%pos] : memref<10xf32>
%0 = bufferization.to_tensor %alloc restrict writable : memref<10xf32>
</code></pre><p>Selected ops that bufferize to an allocation (or need special handling) are also supported:</p><ul><li><code>tensor.pad</code> is lowered to an allocation, followed by a <code>linalg.fill</code> and a buffer copy (all on memrefs).</li><li><code>vector.mask</code> is bufferized together with its region. The allocation is placed in front of the <code>vector.mask</code> op.</li></ul><p>An optional memory space attribute can be specified for the materialized buffer allocation.</p><p>If a memory copy is needed, a “bufferization.materialize_in_destination” is used when possible. This is an op with tensor semantics that will bufferize to a memory copy later. Which concrete op will be used for the memory copy is up to the bufferization framework. Alternatively, a custom memcpy op can be specified via <code>memcpy_op</code>. Currently supported are “memref.copy” and “linalg.copy”. In that case, the source of each memcpy must not have a custom memory space. Furthermore, because the future buffer layout is unknown for a given tensor, a fully dynamic layout is assumed for best compatibility. Users should use “bufferization.materialize_in_destination” when possible.</p><p>“memref.alloc” is used for new buffer allocations.
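</p><p>For example, a transform script might bufferize a previously matched payload op into a buffer in memory space <code>4</code> (a hypothetical sketch; the handle <code>%target</code> and the memory-space value are illustrative):</p><pre tabindex=0><code>%buffer, %new_ops = transform.structured.bufferize_to_allocation %target
    {memory_space = 4} : !transform.any_op
</code></pre><p>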
The buffer is deallocated at the end of the block if the “emit_dealloc” attribute is present. If this attribute is not present, the allocated memory will be leaked. However, running the <code>-buffer-deallocation-pipeline</code> after all bufferization is done will properly insert the corresponding deallocation(s). Custom allocation ops can be specified via <code>alloc_op</code>. Currently supported are “memref.alloc” and “memref.alloca”. In case of a “memref.alloca”, the buffer is not deallocated.</p><p>If <code>bufferize_destination_only</code> is set, only the destination operands of the op are bufferized to a new memory allocation, but not the op itself.</p><h4 id=return-modes-35>Return modes <a class=headline-hash href=#return-modes-35>¶</a></h4><p>This operation consumes the <code>target</code> handle and produces the <code>allocated_buffer</code> and <code>new_ops</code> handles. It always succeeds.</p><p>Traits: <code>ReportTrackingListenerFailuresOpTrait</code></p><p>Interfaces: <code>MemoryEffectOpInterface</code>, <code>TransformOpInterface</code></p><h4 id=attributes-49>Attributes: <a class=headline-hash href=#attributes-49>¶</a></h4><table><tr><th>Attribute</th><th>MLIR Type</th><th>Description</th></tr><tr><td><code>memory_space</code></td><td>::mlir::Attribute</td><td>any attribute</td></tr><tr><td><code>memcpy_op</code></td><td>::mlir::StringAttr</td><td>string attribute</td></tr><tr><td><code>alloc_op</code></td><td>::mlir::StringAttr</td><td>string attribute</td></tr><tr><td><code>bufferize_destination_only</code></td><td>::mlir::UnitAttr</td><td>unit attribute</td></tr><tr><td><code>emit_dealloc</code></td><td>::mlir::UnitAttr</td><td>unit attribute</td></tr></table><h4 id=operands-74>Operands: <a class=headline-hash href=#operands-74>¶</a></h4><table><thead><tr><th style=text-align:center>Operand</th><th>Description</th></tr></thead><tbody><tr><td style=text-align:center><code>target</code></td><td>TransformHandleTypeInterface 
instance</td></tr></tbody></table><h4 id=results-49>Results: <a class=headline-hash href=#results-49>¶</a></h4><table><thead><tr><th style=text-align:center>Result</th><th>Description</th></tr></thead><tbody><tr><td style=text-align:center><code>allocated_buffer</code></td><td></td></tr><tr><td style=text-align:center><code>new_ops</code></td><td></td></tr></tbody></table><h3 id=transformstructuredcontinuous_tile_sizes-transformcontinuoustilesizesop><code>transform.structured.continuous_tile_sizes</code> (transform::ContinuousTileSizesOp) <a class=headline-hash href=#transformstructuredcontinuous_tile_sizes-transformcontinuoustilesizesop>¶</a></h3><p>Syntax:</p><pre tabindex=0><code>operation ::= `transform.structured.continuous_tile_sizes` $target attr-dict `:` custom<ContinuousTileSizeTypes>(type($target), type($tile_sizes), type($chunk_sizes)) </code></pre><p>This transform emits the IR computing the list of (1) exponentially diminishing tile sizes that are powers of 2; and (2) the corresponding chunk-sizes the target op should be split into along the given dimension.</p><p>For example, for <code>target_size</code> 9, and <code>dimension</code> 0 for the following linalg op as target</p><pre tabindex=0><code> %0 = linalg.matmul ins(%arg0, %arg1: tensor<25x34xf32>, tensor<34x25xf32>) outs(%arg2: tensor<25x25xf32>) </code></pre><p>the first result <code>tile_sizes</code> will be a list of diminishing tile sizes 9, 4, 2, 1; and the second result will be a list of chunk sizes 18, 4, 2, 1 that the corresponding dimension should be split into.</p><p>After the target op has been split along the given dimension (for example using multiway split), each chunk can be tiled with the corresponding tile size in the <code>tile_sizes</code> list generated as a result of this op.</p><p>Specifying the output type as !transform.param<i64> will cause <code>tile_sizes</code> and <code>chunk_sizes</code> to be computed statically and not dynamically.</p><p>Traits: 
<code>ReportTrackingListenerFailuresOpTrait</code></p><p>Interfaces: <code>MemoryEffectOpInterface</code>, <code>TransformOpInterface</code></p><h4 id=attributes-50>Attributes: <a class=headline-hash href=#attributes-50>¶</a></h4><table><tr><th>Attribute</th><th>MLIR Type</th><th>Description</th></tr><tr><td><code>dimension</code></td><td>::mlir::IntegerAttr</td><td>64-bit signless integer attribute whose value is non-negative</td></tr><tr><td><code>target_size</code></td><td>::mlir::IntegerAttr</td><td>64-bit signless integer attribute whose value is non-negative</td></tr></table><h4 id=operands-75>Operands: <a class=headline-hash href=#operands-75>¶</a></h4><table><thead><tr><th style=text-align:center>Operand</th><th>Description</th></tr></thead><tbody><tr><td style=text-align:center><code>target</code></td><td>TransformHandleTypeInterface instance</td></tr></tbody></table><h4 id=results-50>Results: <a class=headline-hash href=#results-50>¶</a></h4><table><thead><tr><th style=text-align:center>Result</th><th>Description</th></tr></thead><tbody><tr><td style=text-align:center><code>tile_sizes</code></td><td>transform any param type or any handle type</td></tr><tr><td style=text-align:center><code>chunk_sizes</code></td><td>transform any param type or any handle type</td></tr></tbody></table><h3 id=transformstructuredconvert_conv2d_to_img2col-transformconvertconv2dtoimg2colop><code>transform.structured.convert_conv2d_to_img2col</code> (transform::ConvertConv2DToImg2ColOp) <a class=headline-hash href=#transformstructuredconvert_conv2d_to_img2col-transformconvertconv2dtoimg2colop>¶</a></h3><p>Syntax:</p><pre tabindex=0><code>operation ::= `transform.structured.convert_conv2d_to_img2col` $target attr-dict `:` functional-type($target, results) </code></pre><p>Convert linalg.conv_2d_xxx into linalg.generic (for img2col packing) and linalg.matmul.</p><p>A convolution operation can be written as a matrix-matrix multiplication by unfolding the cross-correlation between 
input and filter and explicitly copying overlapping sliding-window inputs.</p><p>Consider a 2D input X with a single input and output channel and a 2x2 filter W:</p><pre tabindex=0><code>[x(0, 0)  , x(0, 1)  , ...,   x(0, n)  ]
[x(1, 0)  , x(1, 1)  , ...,   x(1, n)  ]
[.        ,    .     ,.   ,      .     ]        [w(0, 0), w(0, 1)]
[.        ,    .     , .  ,      .     ] (conv) [w(1, 0), w(1, 1)]
[.        ,    .     ,   .,      .     ]
[x(n-1, 0), x(n-1, 1), ..., x(n-1, n-1)]
</code></pre><p>The packed input data (img2col) is a matrix with |rows| = output spatial size, |columns| = filter spatial size. To compute the output Y(i, j) we need to calculate the dot product between the filter window at input X(x, y) and the filter, which will look like the following, where the r.h.s is the img2col matrix and the l.h.s is the flattened filter:</p><pre tabindex=0><code>[x(0,0), x(0,1), x(1,0), x(1,1)]
[x(0,1), x(1,1), x(0,2), x(1,2)] (matmul) [w(0,0), w(0,1), w(1,0), w(1,1)]
[x(0,1), x(1,1), x(0,2), x(1,2)]
[   .  ,    .  ,    .  ,    .  ]
</code></pre><p>In general, for the 2D case with (N, H, W, C) input, (Kh, Kw, C, D) filter and (N, Ho, Wo, D) output, the convolution is the matrix-matrix multiplication (Ho x Wo, Kh x Kw x C) * (Kh x Kw x C, D) for each of the N inputs. For the case where N > 1, it is a batched matrix-matrix multiplication.</p><p>Returns two handles:</p><ul><li>One on the operation that produces the img2col tensor.</li><li>One on the final operation of the sequence that replaces the original convolution.</li></ul><h4 id=return-modes-36>Return modes: <a class=headline-hash href=#return-modes-36>¶</a></h4><p>Returns a definite failure if target is not isolated from above.
Returns a silenceable failure if the pattern application failed.</p><p>Traits: <code>FunctionalStyleTransformOpTrait</code>, <code>ReportTrackingListenerFailuresOpTrait</code>, <code>TransformEachOpTrait</code></p><p>Interfaces: <code>MemoryEffectsOpInterface</code>, <code>TransformOpInterface</code></p><h4 id=operands-76>Operands: <a class=headline-hash href=#operands-76>¶</a></h4><table><thead><tr><th style=text-align:center>Operand</th><th>Description</th></tr></thead><tbody><tr><td style=text-align:center><code>target</code></td><td>TransformHandleTypeInterface instance</td></tr></tbody></table><h4 id=results-51>Results: <a class=headline-hash href=#results-51>¶</a></h4><table><thead><tr><th style=text-align:center>Result</th><th>Description</th></tr></thead><tbody><tr><td style=text-align:center><code>img2col_tensor</code></td><td>TransformHandleTypeInterface instance</td></tr><tr><td style=text-align:center><code>transformed</code></td><td>TransformHandleTypeInterface instance</td></tr></tbody></table><h3 id=transformstructuredconvert_to_loops-transformconverttoloopsop><code>transform.structured.convert_to_loops</code> (transform::ConvertToLoopsOp) <a class=headline-hash href=#transformstructuredconvert_to_loops-transformconverttoloopsop>¶</a></h3><p>Syntax:</p><pre tabindex=0><code>operation ::= `transform.structured.convert_to_loops` $target attr-dict `:` functional-type(operands, results) </code></pre><p>For operations that implement the <code>TilingInterface</code> and provide a <code>generateScalarImplementation</code> method, this transform lowers the operation to loops. The return handle points to all generated loops.
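</p><p>For instance, a matched <code>linalg.matmul</code> (an assumed payload op implementing the required interface) can be lowered to loops from a transform script along these lines (a hypothetical sketch):</p><pre tabindex=0><code>%matmul = transform.structured.match ops{["linalg.matmul"]} in %arg0
    : (!transform.any_op) -> !transform.any_op
%loops = transform.structured.convert_to_loops %matmul
    : (!transform.any_op) -> !transform.any_op
</code></pre><p>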
Fails if the payload ops cannot be lowered to loops.</p><p>Traits: <code>FunctionalStyleTransformOpTrait</code>, <code>ReportTrackingListenerFailuresOpTrait</code></p><p>Interfaces: <code>MemoryEffectsOpInterface</code>, <code>TransformOpInterface</code></p><h4 id=operands-77>Operands: <a class=headline-hash href=#operands-77>¶</a></h4><table><thead><tr><th style=text-align:center>Operand</th><th>Description</th></tr></thead><tbody><tr><td style=text-align:center><code>target</code></td><td>TransformHandleTypeInterface instance</td></tr></tbody></table><h4 id=results-52>Results: <a class=headline-hash href=#results-52>¶</a></h4><table><thead><tr><th style=text-align:center>Result</th><th>Description</th></tr></thead><tbody><tr><td style=text-align:center><code>result</code></td><td>TransformHandleTypeInterface instance</td></tr></tbody></table><h3 id=transformstructureddecompose_interface-transformdecomposeinterfaceop><code>transform.structured.decompose_interface</code> (transform::DecomposeInterfaceOp) <a class=headline-hash href=#transformstructureddecompose_interface-transformdecomposeinterfaceop>¶</a></h3><p>Syntax:</p><pre tabindex=0><code>operation ::= `transform.structured.decompose_interface` $target attr-dict `:` functional-type(operands, results) </code></pre><p>TODO</p><p>Traits: <code>FunctionalStyleTransformOpTrait</code>, <code>ReportTrackingListenerFailuresOpTrait</code>, <code>TransformEachOpTrait</code></p><p>Interfaces: <code>MemoryEffectsOpInterface</code>, <code>TransformOpInterface</code></p><h4 id=operands-78>Operands: <a class=headline-hash href=#operands-78>¶</a></h4><table><thead><tr><th style=text-align:center>Operand</th><th>Description</th></tr></thead><tbody><tr><td style=text-align:center><code>target</code></td><td>TransformHandleTypeInterface instance</td></tr></tbody></table><h4 id=results-53>Results: <a class=headline-hash href=#results-53>¶</a></h4><table><thead><tr><th 
style=text-align:center>Result</th><th>Description</th></tr></thead><tbody><tr><td style=text-align:center><code>transformed</code></td><td>TransformHandleTypeInterface instance</td></tr></tbody></table><h3 id=transformstructureddecompose-transformdecomposeop><code>transform.structured.decompose</code> (transform::DecomposeOp) <a class=headline-hash href=#transformstructureddecompose-transformdecomposeop>¶</a></h3><p>Syntax:</p><pre tabindex=0><code>operation ::= `transform.structured.decompose` $target attr-dict `:` functional-type(operands, results) </code></pre><p>Decomposes named complex operations, such as higher-dimensional (depthwise) convolutions, into combinations of lower-dimensional equivalents when possible.</p><h4 id=return-modes-37>Return modes <a class=headline-hash href=#return-modes-37>¶</a></h4><p>This operation ignores non-Linalg ops and drops them in the return. If all the operations referred to by the <code>target</code> handle decompose properly, the transform succeeds. Otherwise the transform produces a silenceable failure. 
The return handle points to only the subset of successfully produced computational operations, which can be empty.</p><p>Traits: <code>FunctionalStyleTransformOpTrait</code>, <code>ReportTrackingListenerFailuresOpTrait</code>, <code>TransformEachOpTrait</code></p><p>Interfaces: <code>MemoryEffectsOpInterface</code>, <code>TransformOpInterface</code></p><h4 id=operands-79>Operands: <a class=headline-hash href=#operands-79>¶</a></h4><table><thead><tr><th style=text-align:center>Operand</th><th>Description</th></tr></thead><tbody><tr><td style=text-align:center><code>target</code></td><td>TransformHandleTypeInterface instance</td></tr></tbody></table><h4 id=results-54>Results: <a class=headline-hash href=#results-54>¶</a></h4><table><thead><tr><th style=text-align:center>Result</th><th>Description</th></tr></thead><tbody><tr><td style=text-align:center><code>transformed</code></td><td>TransformHandleTypeInterface instance</td></tr></tbody></table><h3 id=transformstructureddecompose_winograd_op-transformdecomposewinogradop><code>transform.structured.decompose_winograd_op</code> (transform::DecomposeWinogradOp) <a class=headline-hash href=#transformstructureddecompose_winograd_op-transformdecomposewinogradop>¶</a></h3><p>Syntax:</p><pre tabindex=0><code>operation ::= `transform.structured.decompose_winograd_op` $target attr-dict `:` functional-type($target, results) </code></pre><p>Decomposes Winograd operations. It converts the filter, input and output transform operations into a combination of equivalent scf, tensor, and linalg operations. Before applying this transform, users need to tile the Winograd transform operations into supported sizes.</p><h4 id=return-modes-38>Return modes: <a class=headline-hash href=#return-modes-38>¶</a></h4><p>This operation fails if <code>target</code> is unsupported.
Otherwise, the operation succeeds and returns a handle of the sequence that replaces the original operations.</p><p>Traits: <code>FunctionalStyleTransformOpTrait</code>, <code>ReportTrackingListenerFailuresOpTrait</code>, <code>TransformEachOpTrait</code></p><p>Interfaces: <code>MemoryEffectsOpInterface</code>, <code>TransformOpInterface</code></p><h4 id=operands-80>Operands: <a class=headline-hash href=#operands-80>¶</a></h4><table><thead><tr><th style=text-align:center>Operand</th><th>Description</th></tr></thead><tbody><tr><td style=text-align:center><code>target</code></td><td>TransformHandleTypeInterface instance</td></tr></tbody></table><h4 id=results-55>Results: <a class=headline-hash href=#results-55>¶</a></h4><table><thead><tr><th style=text-align:center>Result</th><th>Description</th></tr></thead><tbody><tr><td style=text-align:center><code>transformed</code></td><td>TransformHandleTypeInterface instance</td></tr></tbody></table><h3 id=transformstructuredeliminate_empty_tensors-transformeliminatelinalgopanchoredemptytensorsop><code>transform.structured.eliminate_empty_tensors</code> (transform::EliminateLinalgOpAnchoredEmptyTensorsOp) <a class=headline-hash href=#transformstructuredeliminate_empty_tensors-transformeliminatelinalgopanchoredemptytensorsop>¶</a></h3><p>Syntax:</p><pre tabindex=0><code>operation ::= `transform.structured.eliminate_empty_tensors` $target attr-dict `:` type($target) </code></pre><p>Try to eliminate all <code>tensor.empty</code> op uses that are anchored on a LinalgOp within the targeted op.</p><p>This op is similar to <code>bufferization.eliminate_empty_tensors</code>, but specific to LinalgOps.</p><p><code>tensor.empty</code> ops cannot be bufferized. They can either be converted to <code>bufferization.alloc_tensor</code> or replaced with another tensor (via this transform). 
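</p><p>In a transform script, this op is typically applied to a handle of an enclosing function (a hypothetical sketch; the match on <code>func.func</code> is illustrative):</p><pre tabindex=0><code>%func = transform.structured.match ops{["func.func"]} in %arg0
    : (!transform.any_op) -> !transform.any_op
transform.structured.eliminate_empty_tensors %func : !transform.any_op
</code></pre><p>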
<code>tensor.empty</code> ops do not specify the contents of the returned tensor, so their results can be replaced with arbitrary tensor values as long as the dimensions match.</p><p>This transform looks for <code>tensor.empty</code> ops where the SSA use-def chain of the result ends in a supported LinalgOp (always following the aliasing OpOperand/OpResult chain). A LinalgOp is supported if:</p><ul><li>It has only parallel iterator types.</li><li>The use-def chain ends in an input operand of the LinalgOp.</li><li>It has an unused output operand with the same shape and indexing map.</li></ul><p>Example:</p><pre tabindex=0><code>%0 = tensor.empty()
%1 = linalg.matmul ins(...) outs(%0)
%2 = linalg.generic ins(%1) outs(%dest) {
  ^bb0(%in: f32, %out: f32):
    // out not used
}
</code></pre><p>Is rewritten with:</p><pre tabindex=0><code>%0 = tensor.empty()
%1 = linalg.matmul ins(...) outs(%dest)
%2 = linalg.generic ins(%0) outs(%1) {
  ^bb0(%in: f32, %out: f32):
    // Use %out instead of %in
}
</code></pre><p>After this transformation, the “ins” operand has no uses inside the body of the LinalgOp and can be folded away with existing cleanup patterns. Afterwards, the tensor::EmptyOp can also fold away, so that the example can bufferize without an allocation (in the absence of other conflicts).</p><h4 id=return-modes-39>Return modes <a class=headline-hash href=#return-modes-39>¶</a></h4><p>This transform reads the target handle and modifies the payload.
It does not produce any handle.</p><p>Interfaces: <code>MemoryEffectOpInterface</code>, <code>TransformOpInterface</code></p><h4 id=operands-81>Operands: <a class=headline-hash href=#operands-81>¶</a></h4><table><thead><tr><th style=text-align:center>Operand</th><th>Description</th></tr></thead><tbody><tr><td style=text-align:center><code>target</code></td><td>TransformHandleTypeInterface instance</td></tr></tbody></table><h3 id=transformstructuredflatten_elementwise-transformflattenelementwiselinalgop><code>transform.structured.flatten_elementwise</code> (transform::FlattenElementwiseLinalgOp) <a class=headline-hash href=#transformstructuredflatten_elementwise-transformflattenelementwiselinalgop>¶</a></h3><p>Syntax:</p><pre tabindex=0><code>operation ::= `transform.structured.flatten_elementwise` $target attr-dict `:` functional-type($target, results) </code></pre><p>Flattens the iteration space and (applicable) operands of elementwise linalg ops to a single dimension.</p><p>Returns one handle:</p><ul><li>Flattened linalg operation.</li></ul><h4 id=return-modes-40>Return modes: <a class=headline-hash href=#return-modes-40>¶</a></h4><p>Returns a definite failure if target is not isolated from above. 
Returns a silenceable failure if the pattern application failed.</p><p>Traits: <code>FunctionalStyleTransformOpTrait</code>, <code>ReportTrackingListenerFailuresOpTrait</code>, <code>TransformEachOpTrait</code></p><p>Interfaces: <code>MemoryEffectsOpInterface</code>, <code>TransformOpInterface</code></p><h4 id=operands-82>Operands: <a class=headline-hash href=#operands-82>¶</a></h4><table><thead><tr><th style=text-align:center>Operand</th><th>Description</th></tr></thead><tbody><tr><td style=text-align:center><code>target</code></td><td>TransformHandleTypeInterface instance</td></tr></tbody></table><h4 id=results-56>Results: <a class=headline-hash href=#results-56>¶</a></h4><table><thead><tr><th style=text-align:center>Result</th><th>Description</th></tr></thead><tbody><tr><td style=text-align:center><code>transformed</code></td><td>TransformHandleTypeInterface instance</td></tr></tbody></table><h3 id=transformstructuredfuse_into_containing_op-transformfuseintocontainingop><code>transform.structured.fuse_into_containing_op</code> (transform::FuseIntoContainingOp) <a class=headline-hash href=#transformstructuredfuse_into_containing_op-transformfuseintocontainingop>¶</a></h3><p><em>Fuse a producer into a containing operation.</em></p><p>Syntax:</p><pre tabindex=0><code>operation ::= `transform.structured.fuse_into_containing_op` $producer_op `into` $containing_op attr-dict `:` functional-type(operands, results) </code></pre><p>Fuses the <code>producer_op</code> into the <code>containing_op</code>. Returns a handle to the fused ops and the <code>new_containing_op</code>.</p><p>The producer is typically a slice of a tileable op (i.e., implements TilingInterface). In that case, this transform computes the accessed producer slice inside of the containing op (“tile and fuse”) and if required, creates a new containing op with outputs from the fused producer. 
Otherwise, the entire producer is cloned inside the containing op (“clone and fuse”).</p><p>The containing op handle must be associated with exactly one payload op. The producer op handle may be associated with multiple payload ops. This transform fuses producers one-by-one, always picking an unspecified producer that has at least one use inside the containing op among the producers. A producer can be listed multiple times in the handle.</p><p>Note: If a producer has multiple uses inside the containing op, it is currently tiled and/or cloned multiple times into the containing op. TODO: Reuse already fused OpResults instead of tiling/cloning a second time when possible. Fuse producers according to a topological sorting to achieve the largest amount of reuse.</p><h4 id=return-modes-41>Return modes <a class=headline-hash href=#return-modes-41>¶</a></h4><p>If at least one producer could not be fused, this operation produces a silenceable failure. This is the case when tiling fails or when no producer op could be found among the remaining producers that has at least one use within the containing op. I.e., “producers” that are not consumed within the containing op are rejected by this operation.</p><p>This operation consumes the producer handle. 
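</p><p>For example, a producer matched earlier can be fused into an <code>scf.forall</code> loop produced by a previous tiling step (a hypothetical sketch; the handles <code>%producer</code> and <code>%forall</code> are assumed to come from earlier transform ops):</p><pre tabindex=0><code>%fused, %new_containing = transform.structured.fuse_into_containing_op
    %producer into %forall
    : (!transform.any_op, !transform.any_op)
    -> (!transform.any_op, !transform.any_op)
</code></pre><p>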
This operation only reads the containing op handle.</p><p>Traits: <code>ReportTrackingListenerFailuresOpTrait</code></p><p>Interfaces: <code>MemoryEffectOpInterface</code>, <code>TransformOpInterface</code></p><h4 id=operands-83>Operands: <a class=headline-hash href=#operands-83>¶</a></h4><table><thead><tr><th style=text-align:center>Operand</th><th>Description</th></tr></thead><tbody><tr><td style=text-align:center><code>producer_op</code></td><td>TransformHandleTypeInterface instance</td></tr><tr><td style=text-align:center><code>containing_op</code></td><td>TransformHandleTypeInterface instance</td></tr></tbody></table><h4 id=results-57>Results: <a class=headline-hash href=#results-57>¶</a></h4><table><thead><tr><th style=text-align:center>Result</th><th>Description</th></tr></thead><tbody><tr><td style=text-align:center><code>fused_op</code></td><td>TransformHandleTypeInterface instance</td></tr><tr><td style=text-align:center><code>new_containing_op</code></td><td>TransformHandleTypeInterface instance</td></tr></tbody></table><h3 id=transformstructuredfuse-transformfuseop><code>transform.structured.fuse</code> (transform::FuseOp) <a class=headline-hash href=#transformstructuredfuse-transformfuseop>¶</a></h3><p>Syntax:</p><pre tabindex=0><code>operation ::= `transform.structured.fuse` $target ($tile_sizes^)? (`interchange` $tile_interchange^)? (`apply_cleanup` `=` $apply_cleanup^)? 
attr-dict `:` functional-type(operands, results) </code></pre><p>Tiles the operations pointed to by the target handle and fuses their producers greedily using the options provided as attributes.</p><p>If <code>apply_cleanup</code> is true then slice canonicalization is applied between fusion steps.</p><p>Traits: <code>FunctionalStyleTransformOpTrait</code>, <code>ReportTrackingListenerFailuresOpTrait</code></p><p>Interfaces: <code>MemoryEffectsOpInterface</code>, <code>TransformOpInterface</code></p><h4 id=attributes-51>Attributes: <a class=headline-hash href=#attributes-51>¶</a></h4><table><tr><th>Attribute</th><th>MLIR Type</th><th>Description</th></tr><tr><td><code>tile_sizes</code></td><td>::mlir::ArrayAttr</td><td>64-bit integer array attribute</td></tr><tr><td><code>tile_interchange</code></td><td>::mlir::ArrayAttr</td><td>64-bit integer array attribute</td></tr><tr><td><code>apply_cleanup</code></td><td>::mlir::BoolAttr</td><td>bool attribute</td></tr></table><h4 id=operands-84>Operands: <a class=headline-hash href=#operands-84>¶</a></h4><table><thead><tr><th style=text-align:center>Operand</th><th>Description</th></tr></thead><tbody><tr><td style=text-align:center><code>target</code></td><td>TransformHandleTypeInterface instance</td></tr></tbody></table><h4 id=results-58>Results: <a class=headline-hash href=#results-58>¶</a></h4><table><thead><tr><th style=text-align:center>Result</th><th>Description</th></tr></thead><tbody><tr><td style=text-align:center><code>transformed</code></td><td>TransformHandleTypeInterface instance</td></tr><tr><td style=text-align:center><code>loops</code></td><td>variadic of TransformHandleTypeInterface instance</td></tr></tbody></table><h3 id=transformstructuredgeneralize-transformgeneralizeop><code>transform.structured.generalize</code> (transform::GeneralizeOp) <a class=headline-hash href=#transformstructuredgeneralize-transformgeneralizeop>¶</a></h3><p>Syntax:</p><pre tabindex=0><code>operation ::= 
`transform.structured.generalize` $target attr-dict `:` custom<SemiFunctionType>(type($target), type($transformed), "false") </code></pre><p>Transforms a named structured operation into the generic form with the explicit attached region.</p><h4 id=return-modes-42>Return modes <a class=headline-hash href=#return-modes-42>¶</a></h4><p>This operation ignores non-Linalg ops and drops them in the return. If all the operations referred to by the <code>target</code> handle generalize properly, the transform succeeds. Otherwise the transform produces a silenceable failure. The return handle points to only the subset of successfully produced equivalent generic operations, which can be empty or contain the original ops if they were already in generic form.</p><p>Traits: <code>FunctionalStyleTransformOpTrait</code>, <code>ReportTrackingListenerFailuresOpTrait</code>, <code>TransformEachOpTrait</code></p><p>Interfaces: <code>MemoryEffectsOpInterface</code>, <code>TransformOpInterface</code></p><h4 id=operands-85>Operands: <a class=headline-hash href=#operands-85>¶</a></h4><table><thead><tr><th style=text-align:center>Operand</th><th>Description</th></tr></thead><tbody><tr><td style=text-align:center><code>target</code></td><td>TransformHandleTypeInterface instance</td></tr></tbody></table><h4 id=results-59>Results: <a class=headline-hash href=#results-59>¶</a></h4><table><thead><tr><th style=text-align:center>Result</th><th>Description</th></tr></thead><tbody><tr><td style=text-align:center><code>transformed</code></td><td>TransformHandleTypeInterface instance</td></tr></tbody></table><h3 id=transformstructuredhoist_padbuild_packing_loop_nest-transformhoistpadbuildpackingloopnestop><code>transform.structured.hoist_pad.build_packing_loop_nest</code> (transform::HoistPadBuildPackingLoopNestOp) <a class=headline-hash href=#transformstructuredhoist_padbuild_packing_loop_nest-transformhoistpadbuildpackingloopnestop>¶</a></h3><p>Syntax:</p><pre tabindex=0><code>operation ::= 
`transform.structured.hoist_pad.build_packing_loop_nest` $target `above` $loop (`,` `transpose` `by` $transpose^)? attr-dict `:` functional-type(operands, results) </code></pre><p>Helper transform used to hoist a tensor.pad target operation. This operation creates the packing loop nest required by the hoist_pad operation and makes that functionality available independently.</p><p>TODO: In the future, we should consider rewriting as a tensor.pack after hoisting since this abstraction is now available.</p><h4 id=return-modes-43>Return modes <a class=headline-hash href=#return-modes-43>¶</a></h4><p>This operation ignores non-tensor.pad ops and drops them in the result. If any non-tensor.pad is passed, the transform emits a silenceable failure.</p><p>The return handle points to only the subset of successfully created packing loop nests, which can be empty.</p><p>Traits: <code>ReportTrackingListenerFailuresOpTrait</code></p><p>Interfaces: <code>MemoryEffectOpInterface</code>, <code>TransformOpInterface</code></p><h4 id=attributes-52>Attributes: <a class=headline-hash href=#attributes-52>¶</a></h4><table><tr><th>Attribute</th><th>MLIR Type</th><th>Description</th></tr><tr><td><code>transpose</code></td><td>::mlir::DenseI64ArrayAttr</td><td>i64 dense array attribute</td></tr></table><h4 id=operands-86>Operands: <a class=headline-hash href=#operands-86>¶</a></h4><table><thead><tr><th style=text-align:center>Operand</th><th>Description</th></tr></thead><tbody><tr><td style=text-align:center><code>target</code></td><td>TransformHandleTypeInterface instance</td></tr><tr><td style=text-align:center><code>loop</code></td><td>TransformHandleTypeInterface instance</td></tr></tbody></table><h4 id=results-60>Results: <a class=headline-hash href=#results-60>¶</a></h4><table><thead><tr><th style=text-align:center>Result</th><th>Description</th></tr></thead><tbody><tr><td style=text-align:center><code>packing_loop</code></td><td>TransformHandleTypeInterface 
instance</td></tr></tbody></table><h3 id=transformstructuredhoist_pad-transformhoistpadop><code>transform.structured.hoist_pad</code> (transform::HoistPadOp) <a class=headline-hash href=#transformstructuredhoist_pad-transformhoistpadop>¶</a></h3><p>Syntax:</p><pre tabindex=0><code>operation ::= `transform.structured.hoist_pad` $target `by` $num_loops `loops` (`,` `transpose` `by` $transpose^)? attr-dict `:` functional-type(operands, results) </code></pre><p>Hoist the tensor.pad target operation by at most the given number of loops. Optionally apply the transpose attribute to the inner dimensions.</p><p>TODO: In the future, we should consider rewriting as a tensor.pack after hoisting since this abstraction is now available. TODO: Maybe also return the linalg.generic transpose created at some point.</p><h4 id=return-modes-44>Return modes <a class=headline-hash href=#return-modes-44>¶</a></h4><p>This operation ignores non-tensor.pad ops and drops them in the result. If any non-tensor.pad is passed, the transform emits a silenceable failure.</p><p>If all the operations referred to by the <code>target</code> handle hoist properly, the transform succeeds.
Otherwise the transform produces a silenceable failure.</p><p>The return handle points to only the subset of successfully hoisted tensor.pad operations, which can be empty.</p><p>Traits: <code>FunctionalStyleTransformOpTrait</code>, <code>TransformEachOpTrait</code></p><p>Interfaces: <code>MemoryEffectsOpInterface</code>, <code>TransformOpInterface</code></p><h4 id=attributes-53>Attributes: <a class=headline-hash href=#attributes-53>¶</a></h4><table><tr><th>Attribute</th><th>MLIR Type</th><th>Description</th></tr><tr><td><code>num_loops</code></td><td>::mlir::IntegerAttr</td><td>64-bit signless integer attribute</td></tr><tr><td><code>transpose</code></td><td>::mlir::DenseI64ArrayAttr</td><td>i64 dense array attribute</td></tr></table><h4 id=operands-87>Operands: <a class=headline-hash href=#operands-87>¶</a></h4><table><thead><tr><th style=text-align:center>Operand</th><th>Description</th></tr></thead><tbody><tr><td style=text-align:center><code>target</code></td><td>TransformHandleTypeInterface instance</td></tr></tbody></table><h4 id=results-61>Results: <a class=headline-hash href=#results-61>¶</a></h4><table><thead><tr><th style=text-align:center>Result</th><th>Description</th></tr></thead><tbody><tr><td style=text-align:center><code>transformed</code></td><td>TransformHandleTypeInterface instance</td></tr></tbody></table><h3 id=transformstructuredhoist_redundant_vector_broadcasts-transformhoistredundantvectorbroadcastsop><code>transform.structured.hoist_redundant_vector_broadcasts</code> (transform::HoistRedundantVectorBroadcastsOp) <a class=headline-hash href=#transformstructuredhoist_redundant_vector_broadcasts-transformhoistredundantvectorbroadcastsop>¶</a></h3><p>Syntax:</p><pre tabindex=0><code>operation ::= `transform.structured.hoist_redundant_vector_broadcasts` $target attr-dict `:` functional-type(operands, results) </code></pre><p>Hoist vector.extract / vector.broadcasts pairs out of immediately enclosing scf::ForOp iteratively.</p><h4 
id=return-modes-45>Return modes: <a class=headline-hash href=#return-modes-45>¶</a></h4><p>The operation always succeeds and returns a handle to the transformed function op.</p><p>Traits: <code>FunctionalStyleTransformOpTrait</code>, <code>ReportTrackingListenerFailuresOpTrait</code>, <code>TransformEachOpTrait</code></p><p>Interfaces: <code>MemoryEffectsOpInterface</code>, <code>TransformOpInterface</code></p><h4 id=operands-88>Operands: <a class=headline-hash href=#operands-88>¶</a></h4><table><thead><tr><th style=text-align:center>Operand</th><th>Description</th></tr></thead><tbody><tr><td style=text-align:center><code>target</code></td><td>TransformHandleTypeInterface instance</td></tr></tbody></table><h4 id=results-62>Results: <a class=headline-hash href=#results-62>¶</a></h4><table><thead><tr><th style=text-align:center>Result</th><th>Description</th></tr></thead><tbody><tr><td style=text-align:center><code>transformed</code></td><td>TransformHandleTypeInterface instance</td></tr></tbody></table><h3 id=transformstructuredhoist_redundant_vector_transfers-transformhoistredundantvectortransfersop><code>transform.structured.hoist_redundant_vector_transfers</code> (transform::HoistRedundantVectorTransfersOp) <a class=headline-hash href=#transformstructuredhoist_redundant_vector_transfers-transformhoistredundantvectortransfersop>¶</a></h3><p>Syntax:</p><pre tabindex=0><code>operation ::= `transform.structured.hoist_redundant_vector_transfers` $target attr-dict `:` functional-type(operands, results) </code></pre><p>Hoist vector.transfer_read / vector.transfer_write pairs out of immediately enclosing scf::ForOp iteratively, if the following conditions are true:</p><ol><li>The 2 ops access the same memref with the same indices.</li><li>All operands are invariant under the enclosing scf::ForOp.</li><li>No uses of the memref either dominate the transfer_read or are dominated by the transfer_write (i.e. 
no aliasing between the write and the read across the loop)</li></ol><p>WARNING: This hoisting does not model parallelism and is generally incorrect when used on distributed loops with memref semantics! TODO: obsolete and should be retired.</p><h4 id=return-modes-46>Return modes: <a class=headline-hash href=#return-modes-46>¶</a></h4><p>The operation always succeeds and returns a handle to the transformed function op.</p><p>Traits: <code>FunctionalStyleTransformOpTrait</code>, <code>ReportTrackingListenerFailuresOpTrait</code>, <code>TransformEachOpTrait</code></p><p>Interfaces: <code>MemoryEffectsOpInterface</code>, <code>TransformOpInterface</code></p><h4 id=attributes-54>Attributes: <a class=headline-hash href=#attributes-54>¶</a></h4><table><tr><th>Attribute</th><th>MLIR Type</th><th>Description</th></tr><tr><td><code>verify_non_zero_trip</code></td><td>::mlir::UnitAttr</td><td>unit attribute</td></tr></table><h4 id=operands-89>Operands: <a class=headline-hash href=#operands-89>¶</a></h4><table><thead><tr><th style=text-align:center>Operand</th><th>Description</th></tr></thead><tbody><tr><td style=text-align:center><code>target</code></td><td>TransformHandleTypeInterface instance</td></tr></tbody></table><h4 id=results-63>Results: <a class=headline-hash href=#results-63>¶</a></h4><table><thead><tr><th style=text-align:center>Result</th><th>Description</th></tr></thead><tbody><tr><td style=text-align:center><code>transformed</code></td><td>TransformHandleTypeInterface instance</td></tr></tbody></table><h3 id=transformstructuredinsert_slice_to_copy-transforminsertslicetocopyop><code>transform.structured.insert_slice_to_copy</code> (transform::InsertSliceToCopyOp) <a class=headline-hash href=#transformstructuredinsert_slice_to_copy-transforminsertslicetocopyop>¶</a></h3><p>Syntax:</p><pre tabindex=0><code>operation ::= `transform.structured.insert_slice_to_copy` $target attr-dict `:` functional-type(operands, results) </code></pre><p>Targeted rewrite of a
tensor.insert_slice to linalg.copy. This is useful to materialize copies explicitly before bufferization and transform them, avoiding the need to rediscover them after bufferization.</p><p>If the insert_slice source is already a linalg.copy, only return the source op (i.e. do not create an additional linalg.copy op).</p><h4 id=return-modes-47>Return modes: <a class=headline-hash href=#return-modes-47>¶</a></h4><p>The operation always succeeds and returns a handle to the relevant linalg.copy op.</p><p>Traits: <code>FunctionalStyleTransformOpTrait</code>, <code>TransformEachOpTrait</code></p><p>Interfaces: <code>MemoryEffectsOpInterface</code>, <code>TransformOpInterface</code></p><h4 id=operands-90>Operands: <a class=headline-hash href=#operands-90>¶</a></h4><table><thead><tr><th style=text-align:center>Operand</th><th>Description</th></tr></thead><tbody><tr><td style=text-align:center><code>target</code></td><td>TransformHandleTypeInterface instance</td></tr></tbody></table><h4 id=results-64>Results: <a class=headline-hash href=#results-64>¶</a></h4><table><thead><tr><th style=text-align:center>Result</th><th>Description</th></tr></thead><tbody><tr><td style=text-align:center><code>transformed</code></td><td>TransformHandleTypeInterface instance</td></tr></tbody></table><h3 id=transformstructuredinterchange-transforminterchangeop><code>transform.structured.interchange</code> (transform::InterchangeOp) <a class=headline-hash href=#transformstructuredinterchange-transforminterchangeop>¶</a></h3><p>Syntax:</p><pre tabindex=0><code>operation ::= `transform.structured.interchange` $target (`iterator_interchange` `=` $iterator_interchange^)? 
attr-dict `:` custom<SemiFunctionType>(type($target), type($transformed), "false") </code></pre><p>Interchanges the iterators of the operations pointed to by the target handle using the iterator interchange attribute.</p><h4 id=return-modes-48>Return modes <a class=headline-hash href=#return-modes-48>¶</a></h4><p>This operation ignores non-linalg::Generic ops and drops them in the return. This operation fails if the interchange attribute is invalid. If all the operations referred to by the <code>target</code> handle interchange properly, the transform succeeds. If any interchange fails, the transform produces a definite failure. The return handle points to only the subset of successfully produced interchanged operations, which can be empty.</p><p>Traits: <code>FunctionalStyleTransformOpTrait</code>, <code>ReportTrackingListenerFailuresOpTrait</code>, <code>TransformEachOpTrait</code></p><p>Interfaces: <code>MemoryEffectsOpInterface</code>, <code>TransformOpInterface</code></p><h4 id=attributes-55>Attributes: <a class=headline-hash href=#attributes-55>¶</a></h4><table><tr><th>Attribute</th><th>MLIR Type</th><th>Description</th></tr><tr><td><code>iterator_interchange</code></td><td>::mlir::DenseI64ArrayAttr</td><td>i64 dense array attribute whose value is non-negative</td></tr></table><h4 id=operands-91>Operands: <a class=headline-hash href=#operands-91>¶</a></h4><table><thead><tr><th style=text-align:center>Operand</th><th>Description</th></tr></thead><tbody><tr><td style=text-align:center><code>target</code></td><td>TransformHandleTypeInterface instance</td></tr></tbody></table><h4 id=results-65>Results: <a class=headline-hash href=#results-65>¶</a></h4><table><thead><tr><th style=text-align:center>Result</th><th>Description</th></tr></thead><tbody><tr><td style=text-align:center><code>transformed</code></td><td>TransformHandleTypeInterface instance</td></tr></tbody></table><h3 
id=transformstructuredlower_pack-transformlowerpackop><code>transform.structured.lower_pack</code> (transform::LowerPackOp) <a class=headline-hash href=#transformstructuredlower_pack-transformlowerpackop>¶</a></h3><p>Syntax:</p><pre tabindex=0><code>operation ::= `transform.structured.lower_pack` $target attr-dict `:` functional-type(operands, results) </code></pre><p>Rewrite a tensor.pack into tensor.pad + tensor.expand_shape + linalg.transpose.</p><h4 id=return-modes-49>Return modes <a class=headline-hash href=#return-modes-49>¶</a></h4><p>This operation ignores non-pack ops and drops them in the return. This operation produces a silenceable failure if the rewrite fails for any reason. If all the operations referred to by the <code>target</code> are rewritten, the transform succeeds. Return handles to the newly produced pad, expand_shape and transpose ops.</p><p>Traits: <code>FunctionalStyleTransformOpTrait</code>, <code>ReportTrackingListenerFailuresOpTrait</code>, <code>TransformEachOpTrait</code></p><p>Interfaces: <code>MemoryEffectsOpInterface</code>, <code>TransformOpInterface</code></p><h4 id=operands-92>Operands: <a class=headline-hash href=#operands-92>¶</a></h4><table><thead><tr><th style=text-align:center>Operand</th><th>Description</th></tr></thead><tbody><tr><td style=text-align:center><code>target</code></td><td>Transform IR handle to tensor.pack operations</td></tr></tbody></table><h4 id=results-66>Results: <a class=headline-hash href=#results-66>¶</a></h4><table><thead><tr><th style=text-align:center>Result</th><th>Description</th></tr></thead><tbody><tr><td style=text-align:center><code>pad_op</code></td><td>Transform IR handle to tensor.pad operations</td></tr><tr><td style=text-align:center><code>expand_shape_op</code></td><td>Transform IR handle to tensor.expand_shape operations</td></tr><tr><td style=text-align:center><code>transpose_op</code></td><td>Transform IR handle to linalg.transpose operations</td></tr></tbody></table><h3 
id=transformstructuredlower_unpack-transformlowerunpackop><code>transform.structured.lower_unpack</code> (transform::LowerUnPackOp) <a class=headline-hash href=#transformstructuredlower_unpack-transformlowerunpackop>¶</a></h3><p>Syntax:</p><pre tabindex=0><code>operation ::= `transform.structured.lower_unpack` $target attr-dict `:` functional-type(operands, results) </code></pre><p>Lower a tensor.unpack into empty + linalg.transpose + tensor.collapse_shape + tensor.extract_slice.</p><h4 id=return-modes-50>Return modes <a class=headline-hash href=#return-modes-50>¶</a></h4><p>This operation ignores non-unpack ops and drops them in the return. This operation produces a silenceable failure if the rewrite fails for any reason. If all the operations referred to by the <code>target</code> are rewritten, the transform succeeds. Return handles to the newly produced empty, transpose, collapse_shape and extract_slice ops.</p><p>Traits: <code>FunctionalStyleTransformOpTrait</code>, <code>ReportTrackingListenerFailuresOpTrait</code>, <code>TransformEachOpTrait</code></p><p>Interfaces: <code>MemoryEffectsOpInterface</code>, <code>TransformOpInterface</code></p><h4 id=operands-93>Operands: <a class=headline-hash href=#operands-93>¶</a></h4><table><thead><tr><th style=text-align:center>Operand</th><th>Description</th></tr></thead><tbody><tr><td style=text-align:center><code>target</code></td><td>Transform IR handle to tensor.unpack operations</td></tr></tbody></table><h4 id=results-67>Results: <a class=headline-hash href=#results-67>¶</a></h4><table><thead><tr><th style=text-align:center>Result</th><th>Description</th></tr></thead><tbody><tr><td style=text-align:center><code>empty_op</code></td><td>Transform IR handle to tensor.empty operations</td></tr><tr><td style=text-align:center><code>transpose_op</code></td><td>Transform IR handle to linalg.transpose operations</td></tr><tr><td style=text-align:center><code>collapse_shape_op</code></td><td>Transform IR handle to 
tensor.collapse_shape operations</td></tr><tr><td style=text-align:center><code>extract_slice_op</code></td><td>Transform IR handle to tensor.extract_slice operations</td></tr></tbody></table><h3 id=transformstructuredgpumap_copy_to_threads-transformmapcopytothreadsop><code>transform.structured.gpu.map_copy_to_threads</code> (transform::MapCopyToThreadsOp) <a class=headline-hash href=#transformstructuredgpumap_copy_to_threads-transformmapcopytothreadsop>¶</a></h3><p>Syntax:</p><pre tabindex=0><code>operation ::= `transform.structured.gpu.map_copy_to_threads` $target `total_num_threads` `=` $total_num_threads `desired_bit_alignment` `=` $desired_bit_alignment attr-dict `:` functional-type(operands, results) </code></pre><p>Targeted mapping of a linalg.copy / tensor.pad operation on tensors to a GPU thread mapping.</p><p>This operation implements a greedy heuristic that determines a good distribution of threads to break down the copy/pad operation into. The heuristic is driven by considerations related to the underlying architecture for which good high-level decisions are needed assuming certain hardware features. Relevant features are exposed via first-class attributes to control the behavior of the transformation at a high level.</p><p>For now, a single heuristic is implemented and can be extended on a per-need basis.</p><h4 id=return-modes-51>Return modes <a class=headline-hash href=#return-modes-51>¶</a></h4><p>This operation fails definitely if there is an unsupported op (i.e., not linalg.copy / tensor.pad) among the targeted op. 
Otherwise, the operation always succeeds and returns a handle to the relevant tiled linalg.copy / tensor.pad op and the enclosing scf.forall op.</p><p>Traits: <code>FunctionalStyleTransformOpTrait</code>, <code>TransformEachOpTrait</code></p><p>Interfaces: <code>MemoryEffectsOpInterface</code>, <code>TransformOpInterface</code></p><h4 id=attributes-56>Attributes: <a class=headline-hash href=#attributes-56>¶</a></h4><table><tr><th>Attribute</th><th>MLIR Type</th><th>Description</th></tr><tr><td><code>total_num_threads</code></td><td>::mlir::IntegerAttr</td><td>64-bit signless integer attribute</td></tr><tr><td><code>desired_bit_alignment</code></td><td>::mlir::IntegerAttr</td><td>64-bit signless integer attribute</td></tr></table><h4 id=operands-94>Operands: <a class=headline-hash href=#operands-94>¶</a></h4><table><thead><tr><th style=text-align:center>Operand</th><th>Description</th></tr></thead><tbody><tr><td style=text-align:center><code>target</code></td><td>TransformHandleTypeInterface instance</td></tr></tbody></table><h4 id=results-68>Results: <a class=headline-hash href=#results-68>¶</a></h4><table><thead><tr><th style=text-align:center>Result</th><th>Description</th></tr></thead><tbody><tr><td style=text-align:center><code>forall_op</code></td><td>TransformHandleTypeInterface instance</td></tr><tr><td style=text-align:center><code>tiled_op</code></td><td>TransformHandleTypeInterface instance</td></tr></tbody></table><h3 id=transformstructuredmatch-transformmatchop><code>transform.structured.match</code> (transform::MatchOp) <a class=headline-hash href=#transformstructuredmatch-transformmatchop>¶</a></h3><p>Syntax:</p><pre tabindex=0><code>operation ::= `transform.structured.match` (`ops` `{` $ops^ `}`)? (`interface` `{` $interface^ `}`)? (`attributes` $op_attrs^)? (`filter_result_type` `=` $filter_result_type^)? (`filter_operand_types` `=` $filter_operand_types^)? 
`in` $target attr-dict `:` functional-type($target, results) </code></pre><p>Match op with the specified constraints, within the target op.</p><p>The following constraints are supported:</p><ul><li>interface: an optional MatchInterfaceEnum specifying an enum representation for an interface to target.</li><li>ops: an optional StrArrayAttr specifying the concrete name of an op. Multiple names can be specified. Matched ops must have one of the specified names.</li><li>attribute: the matched op must have all specified attributes (with their specified values).</li><li>filter_result_type: the matched op must return exactly this one type.</li><li>filter_operand_types: all the operands of the matched op must be of this type. If more than one type is specified, then the length of the list must be equal to the number of operands in the matched op, and the match will succeed only if the operand types match all the types in the list in the order in which they are specified.</li></ul><p>Note: Only ops that satisfy all specified constraints are matched.</p><p>TODO: Extend with regions to allow a limited form of constraints.</p><h4 id=return-modes-52>Return modes <a class=headline-hash href=#return-modes-52>¶</a></h4><p>This op traverses the ops nested under <code>target</code> and returns the handles to all the operations that match the requirements.</p><p>This op fails if the target is not a handle to exactly one operation.
Otherwise it succeeds.</p><p>This operation does not consume the target handle and produces new handles: it is a navigation op.</p><p>Traits: <code>NavigationTransformOpTrait</code></p><p>Interfaces: <code>MemoryEffectsOpInterface</code>, <code>TransformOpInterface</code></p><h4 id=attributes-57>Attributes: <a class=headline-hash href=#attributes-57>¶</a></h4><table><tr><th>Attribute</th><th>MLIR Type</th><th>Description</th></tr><tr><td><code>ops</code></td><td>::mlir::ArrayAttr</td><td>string array attribute</td></tr><tr><td><code>interface</code></td><td>mlir::transform::MatchInterfaceEnumAttr</td><td><details><summary>An interface to match</summary><p>Enum cases:</p><ul><li>LinalgOp (<code>LinalgOp</code>)</li><li>TilingInterface (<code>TilingInterface</code>)</li><li>LoopLikeInterface (<code>LoopLikeInterface</code>)</li></ul></details></td></tr><tr><td><code>op_attrs</code></td><td>::mlir::DictionaryAttr</td><td>dictionary of named attribute values</td></tr><tr><td><code>filter_result_type</code></td><td>::mlir::TypeAttr</td><td>any type attribute</td></tr><tr><td><code>filter_operand_types</code></td><td>::mlir::ArrayAttr</td><td>type array attribute</td></tr></table><h4 id=operands-95>Operands: <a class=headline-hash href=#operands-95>¶</a></h4><table><thead><tr><th style=text-align:center>Operand</th><th>Description</th></tr></thead><tbody><tr><td style=text-align:center><code>target</code></td><td>TransformHandleTypeInterface instance</td></tr></tbody></table><h4 id=results-69>Results: <a class=headline-hash href=#results-69>¶</a></h4><table><thead><tr><th style=text-align:center>Result</th><th>Description</th></tr></thead><tbody><tr><td style=text-align:center><code>results</code></td><td>TransformHandleTypeInterface instance</td></tr></tbody></table><h3 id=transformstructuredmultitile_sizes-transformmultitilesizesop><code>transform.structured.multitile_sizes</code> (transform::MultiTileSizesOp) <a class=headline-hash 
href=#transformstructuredmultitile_sizes-transformmultitilesizesop>¶</a></h3><p>Syntax:</p><pre tabindex=0><code>operation ::= `transform.structured.multitile_sizes` $target attr-dict `:` custom<MultitileSizesTypes>(type($target), type($low_size), type($high_size), type($split_point)) </code></pre><p>Emits the IR computing the tile sizes <code>s1</code> and <code>s2</code> such that:</p><ul><li>there exists a combination of <code>n</code> tiles of size <code>s1</code> and <code>m</code> tiles of size <code>s2</code> that covers the entirety of the iteration space <code>dimension</code> of the target structured op;</li><li>both <code>s1</code> and <code>s2</code> are less than or equal to <code>target_size</code>;</li><li><code>s1</code> and <code>s2</code> are divisible by <code>divisor</code>.</li></ul><p>For example, for a dimension of size 54 with target size 12 and divisor 2, this can emit the IR computing the tile size 10, used for 3 tiles, and 12, used for 2 tiles, for a total of 10*3 + 12*2 = 54. Note that when the divisor does not divide the original dimension size, it is impossible to compute such tile sizes. An assertion is emitted to guard against this in the dynamic case.</p><p>Expects the target size and the divisor to be strictly positive.
Folds the IR as much as possible, normally obtaining constant sizes and numbers of tiles for a statically known dimension.</p><p>This does <em>not</em> consume the target handle and produces three handles each pointing to single-result index-typed operations (which may be arithmetic constant operations) defining the two respective tile sizes and the product of the first tile size with the number of tiles of that size (useful for splitting the iteration space).</p><p>This operation composes with the regular tiling when applied per-dimension:</p><div class=highlight><pre tabindex=0 class=chroma><code class=language-mlir data-lang=mlir><span class=line><span class=cl><span class=nv>%sz1</span><span class=p>,</span> <span class=nv>%sz2</span><span class=p>,</span> <span class=nv>%split</span> <span class=p>=</span> structured<span class=p>.</span>multitile_sizes <span class=nv>%target</span> </span></span><span class=line><span class=cl> <span class=p>{</span> <span class=nl>target_size =</span> <span class=m>10</span><span class=p>,</span> <span class=nl>dimension =</span> <span class=m>1</span> <span class=p>}</span> </span></span><span class=line><span class=cl> <span class=p>:</span> <span class=p>!</span>transform<span class=p>.</span>any_op<span class=p>,</span> <span class=p>!</span>transform<span class=p>.</span>param<span class=p><</span><span class=k>i64</span><span class=p>>,</span> </span></span><span class=line><span class=cl> <span class=p>!</span>transform<span class=p>.</span>param<span class=p><</span><span class=k>i64</span><span class=p>>,</span> <span class=p>!</span>transform<span class=p>.</span>param<span class=p><</span><span class=k>i64</span><span class=p>></span> </span></span><span class=line><span class=cl><span class=nv>%handles</span> <span class=p>=</span> structured<span class=p>.</span>split <span class=nv>%target</span> after <span class=nv>%split</span> <span class=p>{</span> <span class=nl>dimension =</span> <span class=m>1</span> 
<span class=p>}</span> </span></span><span class=line><span class=cl> <span class=p>:</span> <span class=p>!</span>transform<span class=p>.</span>any_op<span class=p>,</span> <span class=p>!</span>transform<span class=p>.</span>param<span class=p><</span><span class=k>i64</span><span class=p>></span> </span></span><span class=line><span class=cl><span class=nv>%low</span><span class=p>,</span> <span class=nv>%high</span> <span class=p>=</span> transform<span class=p>.</span>split_handle <span class=nv>%handles</span> <span class=p>:</span> <span class=p>(!</span>transform<span class=p>.</span>any_op<span class=p>)</span> </span></span><span class=line><span class=cl> <span class=p>-></span> <span class=p>(!</span>transform<span class=p>.</span>any_op<span class=p>,</span> <span class=p>!</span>transform<span class=p>.</span>any_op<span class=p>)</span> </span></span><span class=line><span class=cl><span class=nv>%tiled_low</span><span class=p>,</span> <span class=nv>%loop1</span> <span class=p>=</span> structured<span class=p>.</span>tile_using_for <span class=nv>%low</span> <span class=p>[</span><span class=m>0</span><span class=p>,</span> <span class=nv>%sz1</span><span class=p>]</span> </span></span><span class=line><span class=cl> <span class=p>:</span> <span class=p>(!</span>transform<span class=p>.</span>any_op<span class=p>,</span> <span class=p>!</span>transform<span class=p>.</span>param<span class=p><</span><span class=k>i64</span><span class=p>>)</span> </span></span><span class=line><span class=cl> <span class=p>-></span> <span class=p>(!</span>transform<span class=p>.</span>any_op<span class=p>,</span> <span class=p>!</span>transform<span class=p>.</span>any_op<span class=p>)</span> </span></span><span class=line><span class=cl><span class=nv>%tiled_high</span><span class=p>,</span> <span class=nv>%loop2</span> <span class=p>=</span> structured<span class=p>.</span>tile_using_for <span class=nv>%high</span> <span class=p>[</span><span 
class=m>0</span><span class=p>,</span> <span class=nv>%sz2</span><span class=p>]</span> </span></span><span class=line><span class=cl> <span class=p>:</span> <span class=p>(!</span>transform<span class=p>.</span>any_op<span class=p>,</span> <span class=p>!</span>transform<span class=p>.</span>param<span class=p><</span><span class=k>i64</span><span class=p>>)</span> </span></span><span class=line><span class=cl> <span class=p>-></span> <span class=p>(!</span>transform<span class=p>.</span>any_op<span class=p>,</span> <span class=p>!</span>transform<span class=p>.</span>any_op<span class=p>)</span> </span></span><span class=line><span class=cl><span class=nv>%common</span> <span class=p>=</span> merge_handles <span class=nv>%tiled_low</span><span class=p>,</span> <span class=nv>%tiled_high</span> <span class=p>:</span> <span class=p>!</span>transform<span class=p>.</span>any_op </span></span><span class=line><span class=cl> </span></span><span class=line><span class=cl><span class=nv>%sz3</span><span class=p>,</span> <span class=nv>%sz4</span><span class=p>,</span> <span class=nv>%split</span> <span class=p>=</span> structured<span class=p>.</span>multitile_sizes <span class=nv>%target</span> </span></span><span class=line><span class=cl> <span class=p>{</span> <span class=nl>target_size =</span> <span class=m>42</span><span class=p>,</span> <span class=nl>dimension =</span> <span class=m>0</span> <span class=p>}</span> </span></span><span class=line><span class=cl> <span class=p>:</span> <span class=p>!</span>transform<span class=p>.</span>any_op<span class=p>,</span> <span class=p>!</span>transform<span class=p>.</span>any_op<span class=p>,</span> </span></span><span class=line><span class=cl> <span class=p>!</span>transform<span class=p>.</span>any_op<span class=p>,</span> <span class=p>!</span>transform<span class=p>.</span>any_op </span></span><span class=line><span class=cl><span class=nv>%sz3r</span><span class=p>,</span> <span class=nv>%sz4r</span><span
class=p>,</span> <span class=nv>%splitr</span> <span class=p>=</span> replicate num<span class=p>(</span><span class=nv>%common</span><span class=p>)</span> <span class=nv>%sz3</span><span class=p>,</span> <span class=nv>%sz4</span><span class=p>,</span> <span class=nv>%split</span> </span></span><span class=line><span class=cl> <span class=p>:</span> <span class=p>!</span>transform<span class=p>.</span>any_op<span class=p>,</span> <span class=p>!</span>transform<span class=p>.</span>any_op<span class=p>,</span> <span class=p>!</span>transform<span class=p>.</span>any_op </span></span><span class=line><span class=cl>structured<span class=p>.</span>split <span class=nv>%common</span> after <span class=nv>%splitr</span> <span class=p>{</span> <span class=nl>dimension =</span> <span class=m>0</span> <span class=p>}</span> </span></span><span class=line><span class=cl> <span class=p>:</span> <span class=p>!</span>transform<span class=p>.</span>any_op<span class=p>,</span> <span class=p>!</span>transform<span class=p>.</span>any_op </span></span><span class=line><span class=cl><span class=c>// ... 
</span></span></span></code></pre></div><p>Traits: <code>ReportTrackingListenerFailuresOpTrait</code>, <code>TransformEachOpTrait</code></p><p>Interfaces: <code>MemoryEffectOpInterface</code>, <code>TransformOpInterface</code></p><h4 id=attributes-58>Attributes: <a class=headline-hash href=#attributes-58>¶</a></h4><table><tr><th>Attribute</th><th>MLIR Type</th><th>Description</th></tr><tr><td><code>dimension</code></td><td>::mlir::IntegerAttr</td><td>64-bit signless integer attribute</td></tr><tr><td><code>target_size</code></td><td>::mlir::IntegerAttr</td><td>64-bit signless integer attribute</td></tr><tr><td><code>divisor</code></td><td>::mlir::IntegerAttr</td><td>64-bit signless integer attribute</td></tr></table><h4 id=operands-96>Operands: <a class=headline-hash href=#operands-96>¶</a></h4><table><thead><tr><th style=text-align:center>Operand</th><th>Description</th></tr></thead><tbody><tr><td style=text-align:center><code>target</code></td><td>TransformHandleTypeInterface instance</td></tr></tbody></table><h4 id=results-70>Results: <a class=headline-hash href=#results-70>¶</a></h4><table><thead><tr><th style=text-align:center>Result</th><th>Description</th></tr></thead><tbody><tr><td style=text-align:center><code>low_size</code></td><td>transform any param type or any handle type</td></tr><tr><td style=text-align:center><code>high_size</code></td><td>transform any param type or any handle type</td></tr><tr><td style=text-align:center><code>split_point</code></td><td>transform any param type or any handle type</td></tr></tbody></table><h3 id=transformstructuredpack_greedily-transformpackgreedilyop><code>transform.structured.pack_greedily</code> (transform::PackGreedilyOp) <a class=headline-hash href=#transformstructuredpack_greedily-transformpackgreedilyop>¶</a></h3><p>Syntax:</p><pre tabindex=0><code>operation ::= `transform.structured.pack_greedily` $target oilist( `matmul_packed_sizes` `=` custom<DynamicIndexList>($matmul_packed_sizes, 
$static_matmul_packed_sizes) (`matmul_padded_sizes_next_multiple_of` `=` $matmul_padded_sizes_next_multiple_of^)? `matmul_inner_dims_order` `=` $matmul_inner_dims_order ) attr-dict `:` functional-type(operands, results) </code></pre><p>Target a Linalg op and rewrite it into packed LinalgOp form by trying to infer whether a known suboperation is embedded.</p><p>Different packing strategies are applied in order; when one applies successfully, the transform returns:</p><ol><li><p>Matmul packing: Try to infer a matmul operation embedded in the target op. Specifically, this looks for 2 parallel dimensions that participate in an outer-product and 1 reduction dimension. These dimensions are referred to as (m, n, k) to match canonical matmul terminology.</p><p>The packed sizes for (m, n, k) are specified by <code>matmul_packed_sizes</code> and the optional <code>matmul_padded_sizes_next_multiple_of</code>. When an entry <code>matmul_packed_sizes[i]</code> is non-zero, the corresponding dimension is packed by <code>matmul_packed_sizes[i]</code>. Otherwise, the dimension is merely padded to the next multiple of <code>matmul_padded_sizes_next_multiple_of[i]</code>.</p><p><code>matmul_padded_sizes_next_multiple_of</code> is optional and is expected to either be empty or of size <code>3</code>, matching the size of <code>matmul_packed_sizes</code>. For each individual element of <code>matmul_packed_sizes</code> and <code>matmul_padded_sizes_next_multiple_of</code>, only one of them is allowed to be non-zero.</p><p>The ordering of the packed dimensions (mm, nn, kk) is specified by the <code>matmul_inner_dims_order</code> attribute.</p></li></ol><p>Packing occurs as follows:</p><ol><li>Find the dimensions to pack according to the strategy.</li><li>The target is converted to linalg.generic form.</li><li>An interchange transform is applied to isolate the dimensions to pack as the most minor indexing dimensions of the linalg.generic.
The most minor dimensions are themselves ordered according to <code>inner_dims_order</code>.</li><li>An elementwise traversal of <code>matmul_packed_sizes</code> and <code>matmul_padded_sizes_next_multiple_of</code> is performed; for each dimension <code>d</code>, either pack to <code>matmul_packed_sizes[d]</code> or pad to the next multiple of <code>matmul_padded_sizes_next_multiple_of[d]</code>.</li><li>Packing/padding is performed by the amounts determined in step 4, following <code>inner_dims_order</code>.</li></ol><p>By normalizing the most minor dimensions to <code>inner_dims_order</code>, the transform guarantees that packing immediately generates inner dimensions in a desirable layout.</p><p>Outer dimension layout permutations are not controlled by this transform op at the moment and can be obtained by composing with the <code>pack_transpose</code> transformation.</p><h4 id=return-modes-53>Return modes <a class=headline-hash href=#return-modes-53>¶</a></h4><p>This operation ignores non-Linalg ops and drops them in the return.
It returns the list of packed Linalg ops or the original op when all available packing strategies failed to apply.</p><p>Traits: <code>ReportTrackingListenerFailuresOpTrait</code></p><p>Interfaces: <code>MemoryEffectOpInterface</code>, <code>TransformOpInterface</code></p><h4 id=attributes-59>Attributes: <a class=headline-hash href=#attributes-59>¶</a></h4><table><tr><th>Attribute</th><th>MLIR Type</th><th>Description</th></tr><tr><td><code>static_matmul_packed_sizes</code></td><td>::mlir::DenseI64ArrayAttr</td><td>i64 dense array attribute with exactly 3 elements</td></tr><tr><td><code>matmul_padded_sizes_next_multiple_of</code></td><td>::mlir::DenseI64ArrayAttr</td><td>i64 dense array attribute with 0 or 3 elements</td></tr><tr><td><code>matmul_inner_dims_order</code></td><td>::mlir::DenseI64ArrayAttr</td><td>i64 dense array attribute with exactly 3 elements</td></tr></table><h4 id=operands-97>Operands: <a class=headline-hash href=#operands-97>¶</a></h4><table><thead><tr><th style=text-align:center>Operand</th><th>Description</th></tr></thead><tbody><tr><td style=text-align:center><code>target</code></td><td>TransformHandleTypeInterface instance</td></tr><tr><td style=text-align:center><code>matmul_packed_sizes</code></td><td>variadic of TransformHandleTypeInterface instance</td></tr></tbody></table><h4 id=results-71>Results: <a class=headline-hash href=#results-71>¶</a></h4><table><thead><tr><th style=text-align:center>Result</th><th>Description</th></tr></thead><tbody><tr><td style=text-align:center><code>packed_op</code></td><td>TransformHandleTypeInterface instance</td></tr></tbody></table><h3 id=transformstructuredpack-transformpackop><code>transform.structured.pack</code> (transform::PackOp) <a class=headline-hash href=#transformstructuredpack-transformpackop>¶</a></h3><p>Syntax:</p><pre tabindex=0><code>operation ::= `transform.structured.pack` $target `packed_sizes` `=` custom<DynamicIndexList>($packed_sizes, $static_packed_sizes) attr-dict `:` 
functional-type(operands, results) </code></pre><p>Pack a LinalgOp by applying a data tiling transformation on the op and packing the operands according to the <code>packed_sizes</code> specification.</p><p>Iterator dimensions are tiled in their canonical order in the op spec. Operands are packed according to the same canonical order of the op iterator dimensions.</p><p>Specifying a packed size of 0 for an iterator removes it from consideration for packing.</p><p><code>tensor.pack</code> (resp. <code>tensor.unpack</code>) operations are inserted for the operands (resp. results) that need to be packed (resp. unpacked) according to the <code>packed_sizes</code> specification.</p><h4 id=example-2>Example <a class=headline-hash href=#example-2>¶</a></h4><p>Consider a <code>linalg.matmul</code> with indexing maps:</p><pre tabindex=0><code>//                             M   N   K        M   K
// affine_map<(d0, d1, d2) -> (d0, d2)>
//                                              K   N
// affine_map<(d0, d1, d2) -> (d2, d1)>
//                                              M   N
// affine_map<(d0, d1, d2) -> (d0, d1)>
%0 = linalg.matmul  ins(%A, %B: tensor<?x?xf32>, tensor<?x?xf32>)
                   outs(    %C: tensor<?x?xf32>)
</code></pre><p>Specifying packed_sizes [2, 3, 4] results in tiling the iterator dimensions M, N and K, in this order, in both the op and its operands.</p><pre tabindex=0><code>//                                         M   N   K   m   n   k       M   K   m   k
// affine_map<(d0, d1, d2, d3, d4, d5) -> (d0, d2, d3, d5)>
//                                                                     K   N   n   k
// affine_map<(d0, d1, d2, d3, d4, d5) -> (d2, d1, d4, d5)>
//                                                                     M   N   m   n
// affine_map<(d0, d1, d2, d3, d4, d5) -> (d0, d1, d3, d4)>
%0 = linalg.generic_representing_some_higher_d_matmul
      ins(%A, %B: tensor<?x?x2x4xf32>, tensor<?x?x4x3xf32>)
     outs(    %C: tensor<?x?x2x3xf32>)
</code></pre><p>In particular, note that the second operand <code>B</code> has shape <code>KxNxnxk</code> (and not <code>KxNxkxn</code> as one could expect by looking <strong>only</strong> at the operand).</p><p>Other layouts can be obtained unsurprisingly from this canonical transformation by composing the resulting operation with a
<code>transform.structured.pack_transpose</code> op. This composition allows separating concerns and composes better compared to adding additional permutation attributes to this transform op.</p><h4 id=return-modes-54>Return modes <a class=headline-hash href=#return-modes-54>¶</a></h4><p>This operation applies to a single Linalg op, otherwise it fails. This operation may produce a definite failure if the packing fails for any reason.</p><p>The returned handle points to the packed LinalgOp.</p><p>Traits: <code>ReportTrackingListenerFailuresOpTrait</code></p><p>Interfaces: <code>MemoryEffectOpInterface</code>, <code>TransformOpInterface</code></p><h4 id=attributes-60>Attributes: <a class=headline-hash href=#attributes-60>¶</a></h4><table><tr><th>Attribute</th><th>MLIR Type</th><th>Description</th></tr><tr><td><code>static_packed_sizes</code></td><td>::mlir::DenseI64ArrayAttr</td><td>i64 dense array attribute</td></tr></table><h4 id=operands-98>Operands: <a class=headline-hash href=#operands-98>¶</a></h4><table><thead><tr><th style=text-align:center>Operand</th><th>Description</th></tr></thead><tbody><tr><td style=text-align:center><code>target</code></td><td>TransformHandleTypeInterface instance</td></tr><tr><td style=text-align:center><code>packed_sizes</code></td><td>variadic of TransformHandleTypeInterface instance</td></tr></tbody></table><h4 id=results-72>Results: <a class=headline-hash href=#results-72>¶</a></h4><table><thead><tr><th style=text-align:center>Result</th><th>Description</th></tr></thead><tbody><tr><td style=text-align:center><code>packed_op</code></td><td>TransformHandleTypeInterface instance</td></tr></tbody></table><h3 id=transformstructuredpack_transpose-transformpacktransposeop><code>transform.structured.pack_transpose</code> (transform::PackTransposeOp) <a class=headline-hash href=#transformstructuredpack_transpose-transformpacktransposeop>¶</a></h3><p>Syntax:</p><pre tabindex=0><code>operation ::= `transform.structured.pack_transpose`
$target_pack_or_un_pack_op `with_compute_op` `(` $target_linalg_op `)` (`outer_perm` `=` $outer_perm^ )? (`inner_perm` `=` $inner_perm^ )? attr-dict `:` functional-type(operands, results) </code></pre><p>Apply a transposition to a single <code>tensor.pack</code> (resp. <code>tensor.unpack</code>) and update the <code>linalg.generic</code> op that consumes (resp. produces) the operation.</p><p>This transform allows composing a simple <code>structured.pack</code> with additional transpositions to e.g. match the data format required by a specific library call or ISA instruction.</p><p>The transpose spec must specify at least one of the <code>outer_perm</code> or <code>inner_perm</code> attributes, which will act upon the <code>outer_dims_perm</code> or <code>inner_dims_pos</code> of the specified <code>tensor.pack</code> or <code>tensor.unpack</code> op.</p><p>If the <code>target</code> of this op is a <code>tensor.pack</code> then a new <code>tensor.empty</code> will be created along with transposed versions of the <code>tensor.pack</code> and the consuming <code>linalg.generic</code>, which is expected to be the sole consumer.</p><p>If the <code>target</code> of this op is a <code>tensor.unpack</code> then the whole pack / compute / unpack chain will be transposed and transposed clones of <code>tensor.pack</code>, the consuming <code>linalg.generic</code> and the tail <code>tensor.unpack</code> will be created.</p><h4 id=return-modes-55>Return modes <a class=headline-hash href=#return-modes-55>¶</a></h4><p>This operation targets a single <code>tensor.pack</code> / <code>tensor.unpack</code> op and a single matching <code>linalg.generic</code> that consumes / produces the op. Otherwise, it produces a silenceable failure.</p><p>This operation may produce a silenceable failure if the transpose spec is ill-formed (i.e.
<code>outer_perm</code> or <code>inner_perm</code> are not permutations of the proper rank) or if the transposition of all involved operations fails for any reason.</p><p>This operation returns 3 handles: one to the transformed LinalgOp, one to the transformed <code>tensor.pack</code> and one to the transformed <code>tensor.unpack</code>. The last handle for <code>tensor.unpack</code> is empty if <code>target_pack_or_un_pack_op</code> was not itself a <code>tensor.unpack</code>.</p><p>Traits: <code>FunctionalStyleTransformOpTrait</code>, <code>ReportTrackingListenerFailuresOpTrait</code></p><p>Interfaces: <code>MemoryEffectsOpInterface</code>, <code>TransformOpInterface</code></p><h4 id=attributes-61>Attributes: <a class=headline-hash href=#attributes-61>¶</a></h4><table><tr><th>Attribute</th><th>MLIR Type</th><th>Description</th></tr><tr><td><code>outer_perm</code></td><td>::mlir::DenseI64ArrayAttr</td><td>i64 dense array attribute</td></tr><tr><td><code>inner_perm</code></td><td>::mlir::DenseI64ArrayAttr</td><td>i64 dense array attribute</td></tr></table><h4 id=operands-99>Operands: <a class=headline-hash href=#operands-99>¶</a></h4><table><thead><tr><th style=text-align:center>Operand</th><th>Description</th></tr></thead><tbody><tr><td style=text-align:center><code>target_pack_or_un_pack_op</code></td><td>TransformHandleTypeInterface instance</td></tr><tr><td style=text-align:center><code>target_linalg_op</code></td><td>TransformHandleTypeInterface instance</td></tr></tbody></table><h4 id=results-73>Results: <a class=headline-hash href=#results-73>¶</a></h4><table><thead><tr><th style=text-align:center>Result</th><th>Description</th></tr></thead><tbody><tr><td style=text-align:center><code>packed_op</code></td><td>TransformHandleTypeInterface instance</td></tr><tr><td style=text-align:center><code>pack_op</code></td><td>TransformHandleTypeInterface instance</td></tr><tr><td style=text-align:center><code>un_pack_op</code></td><td>TransformHandleTypeInterface
instance</td></tr></tbody></table><h3 id=transformstructuredpad-transformpadop><code>transform.structured.pad</code> (transform::PadOp) <a class=headline-hash href=#transformstructuredpad-transformpadop>¶</a></h3><p>Syntax:</p><pre tabindex=0><code>operation ::= `transform.structured.pad` $target (`pad_to_multiple_of` custom<DynamicIndexList>($pad_to_multiple_of, $static_pad_to_multiple_of)^)? attr-dict `:` functional-type(operands, results) </code></pre><p>Pads the operations pointed to by the target handle using the options provided as operation attributes. The operation returns a handle to the padded operation and to the padding operation (“tensor.pad”).</p><p>To preserve tensor SSA use-def chains, the unpadded result is copied back to the original destination tensor of the targeted op. The op that copies back the result can be customized with <code>copy_back_op</code>:</p><ul><li>“bufferization.materialize_in_destination” (default)</li><li>“linalg.copy”</li><li>“none” (no copy back)</li></ul><h4 id=return-modes-56>Return modes <a class=headline-hash href=#return-modes-56>¶</a></h4><p>This operation ignores non-Linalg ops and drops them in the return. This operation may produce a definite failure if the padding fails for any reason.</p><p>If all the operations referred to by the <code>target</code> handle pad properly, the transform succeeds. Otherwise the transform produces a silenceable failure.
The return handle points to only the subset of successfully produced padded operations, which can be empty.</p><p>Traits: <code>FunctionalStyleTransformOpTrait</code>, <code>ReportTrackingListenerFailuresOpTrait</code></p><p>Interfaces: <code>MemoryEffectOpInterface</code>, <code>TransformOpInterface</code></p><h4 id=attributes-62>Attributes: <a class=headline-hash href=#attributes-62>¶</a></h4><table><tr><th>Attribute</th><th>MLIR Type</th><th>Description</th></tr><tr><td><code>padding_values</code></td><td>::mlir::ArrayAttr</td><td>array attribute</td></tr><tr><td><code>padding_dimensions</code></td><td>::mlir::ArrayAttr</td><td>64-bit integer array attribute</td></tr><tr><td><code>static_pad_to_multiple_of</code></td><td>::mlir::DenseI64ArrayAttr</td><td>i64 dense array attribute</td></tr><tr><td><code>nofold_flags</code></td><td>::mlir::ArrayAttr</td><td>64-bit integer array attribute</td></tr><tr><td><code>transpose_paddings</code></td><td>::mlir::ArrayAttr</td><td>array of arrays of i64</td></tr><tr><td><code>copy_back_op</code></td><td>::mlir::StringAttr</td><td>string attribute</td></tr></table><h4 id=operands-100>Operands: <a class=headline-hash href=#operands-100>¶</a></h4><table><thead><tr><th style=text-align:center>Operand</th><th>Description</th></tr></thead><tbody><tr><td style=text-align:center><code>target</code></td><td>TransformHandleTypeInterface instance</td></tr><tr><td style=text-align:center><code>pad_to_multiple_of</code></td><td>variadic of transform any param type or any handle type</td></tr></tbody></table><h4 id=results-74>Results: <a class=headline-hash href=#results-74>¶</a></h4><table><thead><tr><th style=text-align:center>Result</th><th>Description</th></tr></thead><tbody><tr><td style=text-align:center><code>padded</code></td><td>TransformHandleTypeInterface instance</td></tr><tr><td style=text-align:center><code>pad</code></td><td>TransformHandleTypeInterface instance</td></tr><tr><td 
style=text-align:center><code>copy</code></td><td>TransformHandleTypeInterface instance</td></tr></tbody></table><h3 id=transformstructuredpromote-transformpromoteop><code>transform.structured.promote</code> (transform::PromoteOp) <a class=headline-hash href=#transformstructuredpromote-transformpromoteop>¶</a></h3><p>Syntax:</p><pre tabindex=0><code>operation ::= `transform.structured.promote` $target attr-dict `:` custom<SemiFunctionType>(type($target), type($transformed), "false") </code></pre><p>Promotes the specified operands of the target into a separate memory buffer.</p><p>At this point, this transform does not allow customizing alloc/dealloc functions nor the behavior on copy in/out operations.</p><h4 id=return-modes-57>Return modes <a class=headline-hash href=#return-modes-57>¶</a></h4><p>This operation applies to a single Linalg op that satisfies the <code>promoteSubviewsPrecondition</code>, otherwise it fails.</p><p>If the operations referred to by the <code>target</code> handle promote properly, the transform succeeds.</p><p>When successful, the return handle points to the $target operation that was modified inplace.</p><p>Traits: <code>FunctionalStyleTransformOpTrait</code>, <code>ReportTrackingListenerFailuresOpTrait</code>, <code>TransformEachOpTrait</code></p><p>Interfaces: <code>MemoryEffectsOpInterface</code>, <code>TransformOpInterface</code></p><h4 id=attributes-63>Attributes: <a class=headline-hash href=#attributes-63>¶</a></h4><table><tr><th>Attribute</th><th>MLIR Type</th><th>Description</th></tr><tr><td><code>operands_to_promote</code></td><td>::mlir::ArrayAttr</td><td>64-bit integer array attribute</td></tr><tr><td><code>use_full_tile_buffers</code></td><td>::mlir::ArrayAttr</td><td>1-bit boolean array attribute</td></tr><tr><td><code>use_full_tiles_by_default</code></td><td>::mlir::UnitAttr</td><td>unit attribute</td></tr><tr><td><code>use_alloca</code></td><td>::mlir::UnitAttr</td><td>unit 
attribute</td></tr><tr><td><code>memory_space</code></td><td>::mlir::Attribute</td><td>any attribute</td></tr><tr><td><code>mapping</code></td><td>::mlir::ArrayAttr</td><td>Device Mapping array attribute</td></tr><tr><td><code>alignment</code></td><td>::mlir::IntegerAttr</td><td>64-bit signless integer attribute</td></tr></table><h4 id=operands-101>Operands: <a class=headline-hash href=#operands-101>¶</a></h4><table><thead><tr><th style=text-align:center>Operand</th><th>Description</th></tr></thead><tbody><tr><td style=text-align:center><code>target</code></td><td>TransformHandleTypeInterface instance</td></tr></tbody></table><h4 id=results-75>Results: <a class=headline-hash href=#results-75>¶</a></h4><table><thead><tr><th style=text-align:center>Result</th><th>Description</th></tr></thead><tbody><tr><td style=text-align:center><code>transformed</code></td><td>TransformHandleTypeInterface instance</td></tr></tbody></table><h3 id=transformstructuredreplace-transformreplaceop><code>transform.structured.replace</code> (transform::ReplaceOp) <a class=headline-hash href=#transformstructuredreplace-transformreplaceop>¶</a></h3><p>Syntax:</p><pre tabindex=0><code>operation ::= `transform.structured.replace` $target attr-dict-with-keyword regions `:` custom<SemiFunctionType>(type($target), type($replacement), "false") </code></pre><p>Replace all <code>target</code> payload ops with the single op that is contained in this op’s region. 
All targets must have zero arguments and must be isolated from above.</p><p>This op is for debugging/experiments only.</p><h4 id=return-modes-58>Return modes <a class=headline-hash href=#return-modes-58>¶</a></h4><p>This operation consumes the <code>target</code> handle.</p><p>Traits: <code>HasOnlyGraphRegion</code>, <code>IsolatedFromAbove</code>, <code>NoTerminator</code>, <code>ReportTrackingListenerFailuresOpTrait</code>, <code>SingleBlock</code></p><p>Interfaces: <code>MemoryEffectOpInterface</code>, <code>RegionKindInterface</code>, <code>TransformOpInterface</code></p><h4 id=operands-102>Operands: <a class=headline-hash href=#operands-102>¶</a></h4><table><thead><tr><th style=text-align:center>Operand</th><th>Description</th></tr></thead><tbody><tr><td style=text-align:center><code>target</code></td><td>TransformHandleTypeInterface instance</td></tr></tbody></table><h4 id=results-76>Results: <a class=headline-hash href=#results-76>¶</a></h4><table><thead><tr><th style=text-align:center>Result</th><th>Description</th></tr></thead><tbody><tr><td style=text-align:center><code>replacement</code></td><td>TransformHandleTypeInterface instance</td></tr></tbody></table><h3 id=transformstructuredrewrite_in_destination_passing_style-transformrewriteindestinationpassingstyleop><code>transform.structured.rewrite_in_destination_passing_style</code> (transform::RewriteInDestinationPassingStyleOp) <a class=headline-hash href=#transformstructuredrewrite_in_destination_passing_style-transformrewriteindestinationpassingstyleop>¶</a></h3><p>Syntax:</p><pre tabindex=0><code>operation ::= `transform.structured.rewrite_in_destination_passing_style` $target attr-dict `:` functional-type($target, results) </code></pre><p>Rewrite a supported tensor operation that is not in destination-passing style into a form that is in destination-passing style. 
Currently supported operations are:</p><ul><li><code>tensor.pad</code></li><li><code>tensor.generate</code></li><li><code>tensor.from_elements</code></li></ul><p>This dichotomy hints at a future interface; for now the implementation just switches between different implementations.</p><h4 id=return-modes-59>Return modes <a class=headline-hash href=#return-modes-59>¶</a></h4><p>This operation ignores unsupported ops and drops them from the return. If all the operations referred to by the <code>target</code> handle generalize properly, the transform succeeds. Otherwise the transform produces a silenceable failure. The return handle points to a subset of successfully produced operations:</p><ul><li>In the <code>tensor.pad</code> case, the returned handle points to the <code>tensor.insert_slice</code>.</li><li>In the <code>tensor.generate</code> case, the returned handle points to the <code>linalg.generic</code>.</li><li>In the <code>tensor.from_elements</code> case, the returned handle points to the last <code>tensor.insert</code>.</li></ul><p>Traits: <code>FunctionalStyleTransformOpTrait</code>, <code>ReportTrackingListenerFailuresOpTrait</code>, <code>TransformEachOpTrait</code></p><p>Interfaces: <code>MemoryEffectsOpInterface</code>, <code>TransformOpInterface</code></p><h4 id=operands-103>Operands: <a class=headline-hash href=#operands-103>¶</a></h4><table><thead><tr><th style=text-align:center>Operand</th><th>Description</th></tr></thead><tbody><tr><td style=text-align:center><code>target</code></td><td>TransformHandleTypeInterface instance</td></tr></tbody></table><h4 id=results-77>Results: <a class=headline-hash href=#results-77>¶</a></h4><table><thead><tr><th style=text-align:center>Result</th><th>Description</th></tr></thead><tbody><tr><td style=text-align:center><code>transformed</code></td><td>TransformHandleTypeInterface instance</td></tr></tbody></table><h3 id=transformstructuredscalarize-transformscalarizeop><code>transform.structured.scalarize</code> (transform::ScalarizeOp) <a class=headline-hash
href=#transformstructuredscalarize-transformscalarizeop>¶</a></h3><p>Syntax:</p><pre tabindex=0><code>operation ::= `transform.structured.scalarize` $target attr-dict `:` custom<SemiFunctionType>(type($target), type($result), "false") </code></pre><p>Indicates that ops of a specific kind in the given function should be scalarized (i.e. their dynamic dimensions tiled by 1).</p><h4 id=return-modes-60>Return modes: <a class=headline-hash href=#return-modes-60>¶</a></h4><p>This operation ignores non-Linalg ops and drops them in the return. This operation produces a definite failure if the scalarization fails for any reason. If all the operations referred to by the <code>target</code> handle scalarize properly, the transform succeeds. Otherwise the transform produces a silenceable failure.</p><p>The return handle points to only the subset of successfully produced tiled-by-1 operations, which can be empty.</p><p>This operation does not return handles to the tiled loop. We make this design choice because it is hard to know ahead of time the number of loops that will be produced (it depends on the number of dynamic dimensions after multiple transformations have been applied).
Loops can always be recovered by navigating from the tiled operations if needed.</p><p>Traits: <code>FunctionalStyleTransformOpTrait</code>, <code>ReportTrackingListenerFailuresOpTrait</code>, <code>TransformEachOpTrait</code></p><p>Interfaces: <code>MemoryEffectsOpInterface</code>, <code>TransformOpInterface</code></p><h4 id=operands-104>Operands: <a class=headline-hash href=#operands-104>¶</a></h4><table><thead><tr><th style=text-align:center>Operand</th><th>Description</th></tr></thead><tbody><tr><td style=text-align:center><code>target</code></td><td>TransformHandleTypeInterface instance</td></tr></tbody></table><h4 id=results-78>Results: <a class=headline-hash href=#results-78>¶</a></h4><table><thead><tr><th style=text-align:center>Result</th><th>Description</th></tr></thead><tbody><tr><td style=text-align:center><code>result</code></td><td>TransformHandleTypeInterface instance</td></tr></tbody></table><h3 id=transformstructuredspecialize-transformspecializeop><code>transform.structured.specialize</code> (transform::SpecializeOp) <a class=headline-hash href=#transformstructuredspecialize-transformspecializeop>¶</a></h3><p>Syntax:</p><pre tabindex=0><code>operation ::= `transform.structured.specialize` $target attr-dict `:` custom<SemiFunctionType>(type($target), type($transformed), "false") </code></pre><p>Transforms a generic operation into the equivalent named form.</p><h4 id=return-modes-61>Return modes <a class=headline-hash href=#return-modes-61>¶</a></h4><p>This operation ignores non-Linalg ops and drops them in the return. If all the operations referred to by the <code>target</code> handle specialize, the transform succeeds; otherwise, the operation produces a silenceable failure. The return handle points to only the subset of successfully produced equivalent named operations, which can be empty or contain the original ops if they were already in named form. 
The supported specializations to named Linalg operations are:</p><ul><li>linalg.copy of any rank.</li></ul><p>Traits: <code>FunctionalStyleTransformOpTrait</code>, <code>ReportTrackingListenerFailuresOpTrait</code>, <code>TransformEachOpTrait</code></p><p>Interfaces: <code>MemoryEffectsOpInterface</code>, <code>TransformOpInterface</code></p><h4 id=operands-105>Operands: <a class=headline-hash href=#operands-105>¶</a></h4><table><thead><tr><th style=text-align:center>Operand</th><th>Description</th></tr></thead><tbody><tr><td style=text-align:center><code>target</code></td><td>TransformHandleTypeInterface instance</td></tr></tbody></table><h4 id=results-79>Results: <a class=headline-hash href=#results-79>¶</a></h4><table><thead><tr><th style=text-align:center>Result</th><th>Description</th></tr></thead><tbody><tr><td style=text-align:center><code>transformed</code></td><td>TransformHandleTypeInterface instance</td></tr></tbody></table><h3 id=transformstructuredsplit-transformsplitop><code>transform.structured.split</code> (transform::SplitOp) <a class=headline-hash href=#transformstructuredsplit-transformsplitop>¶</a></h3><p>Splits the given <code>target</code> op into two or more complementary parts, which combined cover the entire iteration domain of the original op. The split is performed along the iteration space dimension provided as the chunk size attribute, which specifies the size of the lower part; the remaining range of the iteration space is assigned to the upper part. In case of dimension overflow, the transformation fails. The split is performed at the dimension iterator value specified as either the static chunk size attribute, when it is known at transform IR construction time, or as the handle to an operation producing a single index-typed value, when it is computed by the payload IR.
In the latter case, the static chunk size attribute must be set to <code>ShapedType::kDynamic</code> and the dynamic size handle must point to as many value-producing operations as there are structured operations pointed to by the target handle.</p><p>The operation consumes the target handle, but preserves the chunk size handle if provided. Without the <code>multiway</code> attribute, it produces a new handle that is a list of the two parts of the structured op after splitting, whose lower index part corresponds to the part with lower iteration space indices.</p><p>Multiway split mode is enabled by specifying the <code>multiway</code> attribute. In this mode a single <code>target</code> op is split into multiple parts covering the iteration space of the specified dimension. <code>static_chunk_sizes</code> and <code>dynamic_chunk_sizes</code> are in this case lists of chunk sizes that the given dimension should be split into. With <code>multiway</code> it also produces a handle; the result handle is a list of the multiple parts of the structured op after splitting, where the target dimension for each linalg op in the list corresponds to the chunk sizes specified in the input split list.
If the chunk sizes do not cover the entire iteration space, the leftover chunk is the last payload in the result handle.</p><p>As the result handle is most of the time a list, a <code>transform.split_handle</code> is needed to access individual handles.</p><p>Traits: <code>ReportTrackingListenerFailuresOpTrait</code></p><p>Interfaces: <code>MemoryEffectOpInterface</code>, <code>TransformOpInterface</code></p><h4 id=attributes-64>Attributes: <a class=headline-hash href=#attributes-64>¶</a></h4><table><tr><th>Attribute</th><th>MLIR Type</th><th>Description</th></tr><tr><td><code>dimension</code></td><td>::mlir::IntegerAttr</td><td>64-bit signless integer attribute</td></tr><tr><td><code>static_chunk_sizes</code></td><td>::mlir::IntegerAttr</td><td>64-bit signless integer attribute</td></tr><tr><td><code>multiway</code></td><td>::mlir::UnitAttr</td><td>unit attribute</td></tr></table><h4 id=operands-106>Operands: <a class=headline-hash href=#operands-106>¶</a></h4><table><thead><tr><th style=text-align:center>Operand</th><th>Description</th></tr></thead><tbody><tr><td style=text-align:center><code>target</code></td><td>TransformHandleTypeInterface instance</td></tr><tr><td style=text-align:center><code>dynamic_chunk_sizes</code></td><td>transform any param type or any handle type</td></tr></tbody></table><h4 id=results-80>Results: <a class=headline-hash href=#results-80>¶</a></h4><table><thead><tr><th style=text-align:center>Result</th><th>Description</th></tr></thead><tbody><tr><td style=text-align:center><code>split_list</code></td><td>TransformHandleTypeInterface instance</td></tr></tbody></table><h3 id=transformstructuredsplit_reduction-transformsplitreductionop><code>transform.structured.split_reduction</code> (transform::SplitReductionOp) <a class=headline-hash href=#transformstructuredsplit_reduction-transformsplitreductionop>¶</a></h3><p>Syntax:</p><pre tabindex=0><code>operation ::= `transform.structured.split_reduction` $target attr-dict
`:` functional-type(operands, results) </code></pre><p>Indicates that the given <code>target</code> op should be transformed with the <code>splitReduction</code> transformation and the split factor provided as an attribute.</p><p>The <code>splitReduction</code> transformation splits the first single linalg op reduction into a parallel and reduction dimension. A new <code>linalg.generic</code> op is created to perform the rest of the reduction.</p><p>The transformation supports different configuration attributes:</p><ul><li><code>split_factor</code>: the factor by which to split (i.e. the size of the remaining reduction after splitting).</li><li><code>insert_split_dimension</code>: the dimension in the temporary tensor into which the new parallel dimension is inserted.</li><li><code>inner_parallel</code>: specifies whether the parallel dimension is before or after the reduction dimension in the splitting op.</li><li><code>use_scaling_algorithm</code>: whether to use a scaling-based formulation that does not create an ExpandShapeOp (default: do not use scaling).</li><li><code>use_alloc</code>: whether to use an alloc op to allocate the temporary tensor (default: do not use an alloc op).</li></ul><h4 id=return-modes-62>Return modes <a class=headline-hash href=#return-modes-62>¶</a></h4><p>This operation ignores non-Linalg ops and drops them in the return. This operation produces a definite failure if the splitting fails for any reason.</p><p>If all the operations referred to by the <code>target</code> handle split properly, the transform succeeds. Otherwise the transform produces a silenceable failure. The 4 returned handles point to only the subset of successfully produced computational operations, which can all be empty.
These 4 returned handles point to:</p><ul><li>the init op (or tensor_alloc op if use_alloc = true),</li><li>the fill op used to initialize the neutral element,</li><li>the split op and</li><li>the result-combining op.</li></ul><h4 id=example-default-use_scaling_algorithm--false-use_alloc--false>Example (default: <code>use_scaling_algorithm = false, use_alloc = false</code>): <a class=headline-hash href=#example-default-use_scaling_algorithm--false-use_alloc--false>¶</a></h4><pre tabindex=0><code>%r = linalg.generic {indexing_maps = [affine_map<(d0) -> (d0)>,
                                      affine_map<(d0) -> ()>],
      iterator_types = ["reduction"]}
  ins(%in : tensor<32xf32>) outs(%out : tensor<f32>) {
  ^bb0(%arg1: f32, %arg2: f32):
    %y = arith.addf %arg1, %arg2 : f32
    linalg.yield %y : f32
} -> tensor<f32>
</code></pre><p>is split into:</p><pre tabindex=0><code>%cst = arith.constant 0.000000e+00 : f32
%0 = tensor.expand_shape %in [[0, 1]] : tensor<32xf32> into tensor<4x8xf32>
%1 = tensor.empty() : tensor<4xf32>
%2 = linalg.fill ins(%cst : f32) outs(%1 : tensor<4xf32>) -> tensor<4xf32>
%3 = linalg.generic {indexing_maps = [affine_map<(d0, d1) -> (d0, d1)>,
                                      affine_map<(d0, d1) -> (d0)>],
      iterator_types = ["parallel", "reduction"]}
  ins(%0 : tensor<4x8xf32>) outs(%2 : tensor<4xf32>) {
  ^bb0(%arg3: f32, %arg4: f32):
    %5 = arith.addf %arg3, %arg4 : f32
    linalg.yield %5 : f32
} -> tensor<4xf32>
%r = linalg.generic {indexing_maps = [affine_map<(d0) -> (d0)>,
                                      affine_map<(d0) -> ()>],
      iterator_types = ["reduction"]}
  ins(%3 : tensor<4xf32>) outs(%out : tensor<f32>) {
  ^bb0(%arg3: f32, %arg4: f32):
    %5 = arith.addf %arg3, %arg4 : f32
    linalg.yield %5 : f32
} -> tensor<f32>
</code></pre><h4 id=example-use_scaling_algorithm--true-use_alloc--true>Example (<code>use_scaling_algorithm = true, use_alloc = true</code>): <a class=headline-hash href=#example-use_scaling_algorithm--true-use_alloc--true>¶</a></h4><p>Instead of introducing an ExpandShapeOp, this scaling-based implementation rewrites a reduction dimension
<code>k</code> into <code>k * split_factor + kk</code>. The dimension <code>kk</code> is added as an extra parallel dimension to the intermediate output tensor at position <code>insert_split_dimension</code>.</p><p>Consider a minimal example where <code>k</code> is reduced:</p><pre tabindex=0><code>O(i, j) += I(i, j, k)
</code></pre><p>Assume i=3, j=5, k=128, split_factor=16 and insert_split_dimension=0. The compute is rewritten as:</p><pre tabindex=0><code>a. O_i(kk, i, j) += I(i, j, 16 * k + kk)
b. O(i, j)      += O_i(kk, i, j)
</code></pre><p>The intermediate tensor O_i is of shape (128/16)x3x5 == 8x3x5.</p><h4 id=example-3>Example: <a class=headline-hash href=#example-3>¶</a></h4><pre tabindex=0><code> %0 = linalg.matmul ins(%A, %B: tensor<16x256xf32>, tensor<256x32xf32>) outs(%C: tensor<16x32xf32>) -> tensor<16x32xf32> </code></pre><p>Is transformed to:</p><pre tabindex=0><code> #map0 = affine_map<(d0, d1, d2, d3) -> (d0, d2 * 4 + d3)> #map1 = affine_map<(d0, d1, d2, d3) -> (d2 * 4 + d3, d1)> #map2 = affine_map<(d0, d1, d2, d3) -> (d2, d3)> #map3 = affine_map<(d0, d1, d2, d3) -> (d0, d1, d2)> #map4 = affine_map<(d0, d1, d2) -> (d0, d1, d2)> #map5 = affine_map<(d0, d1, d2) -> (d0, d1)> %0 = tensor.empty() : tensor<16x32x64xf32> %cst = arith.constant 0.000000e+00 : f32 %1 = linalg.fill ins(%cst : f32) outs(%0 : tensor<16x32x64xf32>) -> tensor<16x32x64xf32> %2 = tensor.empty() : tensor<64x4xi1> %3 = linalg.generic {indexing_maps = [#map0, #map1, #map2, #map3], iterator_types = ["parallel", "parallel", "parallel", "reduction"]} ins(%A, %B, %2 : tensor<16x256xf32>, tensor<256x32xf32>, tensor<64x4xi1>) outs(%1 : tensor<16x32x64xf32>) { ^bb0(%arg3: f32, %arg4: f32, %arg5: i1, %arg6: f32): %5 = arith.mulf %arg3, %arg4 : f32 %6 = arith.addf %arg6, %5 : f32 linalg.yield %6 : f32 } -> tensor<16x32x64xf32> %4 = linalg.generic {indexing_maps = [#map4, #map5], iterator_types = ["parallel", "parallel", "reduction"]} ins(%3 : tensor<16x32x64xf32>) outs(%C : tensor<16x32xf32>) { ^bb0(%arg3: f32, %arg4: f32): %5 = arith.addf %arg3, %arg4 : f32 linalg.yield %5 : 
f32 } -> tensor<16x32xf32> return %4 : tensor<16x32xf32> </code></pre><p>Traits: <code>FunctionalStyleTransformOpTrait</code>, <code>ReportTrackingListenerFailuresOpTrait</code>, <code>TransformEachOpTrait</code></p><p>Interfaces: <code>MemoryEffectsOpInterface</code>, <code>TransformOpInterface</code></p><h4 id=attributes-65>Attributes: <a class=headline-hash href=#attributes-65>¶</a></h4><table><tr><th>Attribute</th><th>MLIR Type</th><th>Description</th></tr><tr><td><code>split_factor</code></td><td>::mlir::IntegerAttr</td><td>64-bit signless integer attribute</td></tr><tr><td><code>insert_split_dimension</code></td><td>::mlir::IntegerAttr</td><td>64-bit signless integer attribute</td></tr><tr><td><code>inner_parallel</code></td><td>::mlir::UnitAttr</td><td>unit attribute</td></tr><tr><td><code>use_scaling_algorithm</code></td><td>::mlir::UnitAttr</td><td>unit attribute</td></tr><tr><td><code>use_alloc</code></td><td>::mlir::UnitAttr</td><td>unit attribute</td></tr></table><h4 id=operands-107>Operands: <a class=headline-hash href=#operands-107>¶</a></h4><table><thead><tr><th style=text-align:center>Operand</th><th>Description</th></tr></thead><tbody><tr><td style=text-align:center><code>target</code></td><td>TransformHandleTypeInterface instance</td></tr></tbody></table><h4 id=results-81>Results: <a class=headline-hash href=#results-81>¶</a></h4><table><thead><tr><th style=text-align:center>Result</th><th>Description</th></tr></thead><tbody><tr><td style=text-align:center><code>init_or_alloc_op</code></td><td>TransformHandleTypeInterface instance</td></tr><tr><td style=text-align:center><code>fill_op</code></td><td>TransformHandleTypeInterface instance</td></tr><tr><td style=text-align:center><code>split_linalg_op</code></td><td>TransformHandleTypeInterface instance</td></tr><tr><td style=text-align:center><code>combining_linalg_op</code></td><td>TransformHandleTypeInterface instance</td></tr></tbody></table><h3 
id=transformstructuredtile_reduction_using_for-transformtilereductionusingforop><code>transform.structured.tile_reduction_using_for</code> (transform::TileReductionUsingForOp) <a class=headline-hash href=#transformstructuredtile_reduction_using_for-transformtilereductionusingforop>¶</a></h3><p>Syntax:</p><pre tabindex=0><code>operation ::= `transform.structured.tile_reduction_using_for` $target `by` `tile_sizes` `=` $tile_sizes attr-dict `:` functional-type(operands, results) </code></pre><p>Indicates that the given <code>target</code> op should be transformed with the <code>tileReduction</code> transformation with the tile sizes provided as an attribute.</p><p>This transformation tiles the <code>target</code> along the reduction dimensions. It creates a tensor initialized with the identity value. Then it creates nested loops with a parallel version of the <code>target</code> op inside. The parallel op dimensions are less than or equal to the tile sizes passed by the user. After the loop, a merge operation is created to do a final reduction with the partial reductions. The initial tensor always uses the tile size dimension. 
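</p><p>For illustration, an invocation of this transform producing the four result handles might look like the following sketch (handle names are illustrative):</p><pre tabindex=0><code>%fill, %split, %combine, %loop =
  transform.structured.tile_reduction_using_for %red by tile_sizes = [0, 5]
  : (!transform.any_op) -> (!transform.any_op, !transform.any_op, !transform.any_op, !transform.any_op)
</code></pre><p>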
Using the tile size for this dimension may overallocate if the tile size is greater than the size of the reduction dimension.</p><h4 id=return-modes-63>Return modes <a class=headline-hash href=#return-modes-63>¶</a></h4><p>Returns 4 handles associated with (in order):</p><ul><li>the fill op used to initialize the neutral element,</li><li>the parallel tiled op,</li><li>the result-combining op and</li><li>the parent <code>for</code> op.</li></ul><h4 id=example-4>Example: <a class=headline-hash href=#example-4>¶</a></h4><pre tabindex=0><code> %red = linalg.generic {indexing_maps = [affine_map<(d0, d1) -> (d0, d1)>, affine_map<(d0, d1) -> (d0)>], iterator_types = ["parallel", "reduction"]} ins(%arg0 : tensor<?x?xf32>) outs(%out : tensor<?xf32>) { ^bb0(%arg7: f32, %arg9: f32): %1 = arith.addf %arg7, %arg9 : f32 linalg.yield %1 : f32 } -> tensor<?xf32> return %red : tensor<?xf32> </code></pre><p>is transformed into:</p><pre tabindex=0><code> %0 = tensor.empty(%dim_1) : tensor<?x5xf32> %1 = linalg.fill ins(%cst : f32) outs(%0 : tensor<?x5xf32>) -> tensor<?x5xf32> %2 = scf.for %arg2 = %c0 to %dim_0 step %c5 iter_args(%arg3 = %1) -> (tensor<?x5xf32>) { %extracted_slice = tensor.extract_slice %1[0, 0] [%dim, 5] [1, 1] : tensor<?x5xf32> to tensor<?x5xf32> %extracted_slice_2 = tensor.extract_slice %arg0[0, %arg2] [%dim, 5] [1, 1] : tensor<?x?xf32> to tensor<?x5xf32> %4 = linalg.generic {indexing_maps = [affine_map<(d0, d1) -> (d0, d1)>, affine_map<(d0, d1) -> (d0, d1)>], iterator_types = ["parallel", "parallel"]} ins(%extracted_slice_2 : tensor<?x5xf32>) outs(%extracted_slice : tensor<?x5xf32>) { ^bb0(%in: f32, %out: f32): %5 = arith.addf %in, %out : f32 linalg.yield %5 : f32 } -> tensor<?x5xf32> %dim_3 = tensor.dim %1, %c0 : tensor<?x5xf32> %inserted_slice = tensor.insert_slice %4 into %arg3[0, 0] [%dim_3, 5] [1, 1] : tensor<?x5xf32> into tensor<?x5xf32> scf.yield %inserted_slice : tensor<?x5xf32> } %3 = linalg.generic {indexing_maps = [affine_map<(d0, d1) -> (d0, d1)>, affine_map<(d0, d1) -> (d0)>], iterator_types = 
["parallel", "reduction"]} ins(%2 : tensor<?x5xf32>) outs(%arg1 : tensor<?xf32>) { ^bb0(%in: f32, %out: f32): %4 = arith.addf %in, %out : f32 linalg.yield %4 : f32 } -> tensor<?xf32> </code></pre><p>Traits: <code>FunctionalStyleTransformOpTrait</code>, <code>ReportTrackingListenerFailuresOpTrait</code>, <code>TransformEachOpTrait</code></p><p>Interfaces: <code>MemoryEffectsOpInterface</code>, <code>TransformOpInterface</code></p><h4 id=attributes-66>Attributes: <a class=headline-hash href=#attributes-66>¶</a></h4><table><tr><th>Attribute</th><th>MLIR Type</th><th>Description</th></tr><tr><td><code>tile_sizes</code></td><td>::mlir::DenseI64ArrayAttr</td><td>i64 dense array attribute</td></tr></table><h4 id=operands-108>Operands: <a class=headline-hash href=#operands-108>¶</a></h4><table><thead><tr><th style=text-align:center>Operand</th><th>Description</th></tr></thead><tbody><tr><td style=text-align:center><code>target</code></td><td>TransformHandleTypeInterface instance</td></tr></tbody></table><h4 id=results-82>Results: <a class=headline-hash href=#results-82>¶</a></h4><table><thead><tr><th style=text-align:center>Result</th><th>Description</th></tr></thead><tbody><tr><td style=text-align:center><code>fill_op</code></td><td>variadic of TransformHandleTypeInterface instance</td></tr><tr><td style=text-align:center><code>split_linalg_op</code></td><td>TransformHandleTypeInterface instance</td></tr><tr><td style=text-align:center><code>combining_linalg_op</code></td><td>TransformHandleTypeInterface instance</td></tr><tr><td style=text-align:center><code>for_op</code></td><td>TransformHandleTypeInterface instance</td></tr></tbody></table><h3 id=transformstructuredtile_reduction_using_forall-transformtilereductionusingforallop><code>transform.structured.tile_reduction_using_forall</code> (transform::TileReductionUsingForallOp) <a class=headline-hash href=#transformstructuredtile_reduction_using_forall-transformtilereductionusingforallop>¶</a></h3><p>Syntax:</p><pre 
tabindex=0><code>operation ::= `transform.structured.tile_reduction_using_forall` $target `by` (`num_threads` `=` $num_threads^)? (`,` `tile_sizes` `=` $tile_sizes^)? (`,` `mapping` `=` $mapping^)? attr-dict `:` functional-type(operands, results) </code></pre><p>Tile a PartialReductionOpInterface op to a tiled <code>scf.forall</code> doing partial reduction.</p><p>This transformation tiles the <code>target</code> along the reduction dimensions. It creates a tensor initialized with the identity value. Then it creates an <code>scf.forall</code> loop with the number of threads given by <code>num_threads</code>. The op is tiled with a size equal to <code>floordiv(size, num_threads)</code>. All the partial reduction values are parallel-inserted to create a new tensor. After the loop, a merge operation is created to do a final reduction with the partial reductions tensor. If an extra <code>tile_sizes</code> parameter is passed, the tiles are cyclically distributed on the threads of the <code>scf.forall</code> loop.</p><h4 id=return-modes-64>Return modes <a class=headline-hash href=#return-modes-64>¶</a></h4><p>Returns 4 handles associated with (in order):</p><ul><li>the fill op used to initialize the neutral element,</li><li>the parallel tiled op,</li><li>the result-combining op and</li><li>the parent <code>forall</code> op.</li></ul><h4 id=example-5>Example: <a class=headline-hash href=#example-5>¶</a></h4><pre tabindex=0><code> %red = linalg.generic {indexing_maps = [affine_map<(d0, d1) -> (d0, d1)>, affine_map<(d0, d1) -> (d0)>], iterator_types = ["parallel", "reduction"]} ins(%arg0 : tensor<?x?xf32>) outs(%out : tensor<?xf32>) { ^bb0(%arg7: f32, %arg9: f32): %1 = arith.addf %arg7, %arg9 : f32 linalg.yield %1 : f32 } -> tensor<?xf32> return %red : tensor<?xf32> </code></pre><p>is transformed into:</p><pre tabindex=0><code> %0 = tensor.empty(%dim_1) : tensor<?x5xf32> %1 = linalg.fill ins(%cst : f32) outs(%0 : tensor<?x5xf32>) -> tensor<?x5xf32> %2 = scf.forall (%arg2) 
in (%c5) shared_outs(%arg3 = %1) -> (tensor<?x5xf32>) { %4 = affine.min #map(%arg2)[%dim_0] %5 = affine.max #map1(%4) %extracted_slice = tensor.extract_slice %arg3[0, %arg2] [%dim, 1] [1, 1] : tensor<?x5xf32> to tensor<?xf32> %6 = affine.apply #map2(%arg2)[%dim_0] %extracted_slice_2 = tensor.extract_slice %arg0[0, %6] [%dim, %5] [1, 1] : tensor<?x?xf32> to tensor<?x?xf32> %extracted_slice_3 = tensor.extract_slice %extracted_slice[0] [%dim] [1] : tensor<?xf32> to tensor<?xf32> %7 = linalg.generic {indexing_maps = [#map3, #map4], iterator_types = ["parallel", "reduction"]} ins(%extracted_slice_2 : tensor<?x?xf32>) outs(%extracted_slice_3 : tensor<?xf32>) { ^bb0(%in: f32, %out: f32): %9 = arith.addf %in, %out : f32 linalg.yield %9 : f32 } -> tensor<?xf32> scf.forall.in_parallel { tensor.parallel_insert_slice %7 into %arg3[0, %arg2] [%dim, 1] [1, 1] : tensor<?xf32> into tensor<?x5xf32> } } {mapping = []} %3 = linalg.generic {indexing_maps = [#map3, #map4], iterator_types = ["parallel", "reduction"]} ins(%2 : tensor<?x5xf32>) outs(%arg1 : tensor<?xf32>) { ^bb0(%in: f32, %out: f32): %4 = arith.addf %in, %out : f32 linalg.yield %4 : f32 } -> tensor<?xf32> </code></pre><p>Traits: <code>FunctionalStyleTransformOpTrait</code>, <code>ReportTrackingListenerFailuresOpTrait</code>, <code>TransformEachOpTrait</code></p><p>Interfaces: <code>MemoryEffectsOpInterface</code>, <code>TransformOpInterface</code></p><h4 id=attributes-67>Attributes: <a class=headline-hash href=#attributes-67>¶</a></h4><table><tr><th>Attribute</th><th>MLIR Type</th><th>Description</th></tr><tr><td><code>num_threads</code></td><td>::mlir::DenseI64ArrayAttr</td><td>i64 dense array attribute</td></tr><tr><td><code>tile_sizes</code></td><td>::mlir::DenseI64ArrayAttr</td><td>i64 dense array attribute</td></tr><tr><td><code>mapping</code></td><td>::mlir::ArrayAttr</td><td>Device Mapping array attribute</td></tr></table><h4 id=operands-109>Operands: <a class=headline-hash 
href=#operands-109>¶</a></h4><table><thead><tr><th style=text-align:center>Operand</th><th>Description</th></tr></thead><tbody><tr><td style=text-align:center><code>target</code></td><td>TransformHandleTypeInterface instance</td></tr></tbody></table><h4 id=results-83>Results: <a class=headline-hash href=#results-83>¶</a></h4><table><thead><tr><th style=text-align:center>Result</th><th>Description</th></tr></thead><tbody><tr><td style=text-align:center><code>fill_op</code></td><td>variadic of TransformHandleTypeInterface instance</td></tr><tr><td style=text-align:center><code>split_linalg_op</code></td><td>TransformHandleTypeInterface instance</td></tr><tr><td style=text-align:center><code>combining_linalg_op</code></td><td>TransformHandleTypeInterface instance</td></tr><tr><td style=text-align:center><code>forall_op</code></td><td>TransformHandleTypeInterface instance</td></tr></tbody></table><h3 id=transformstructuredtile_using_for-transformtileusingforop><code>transform.structured.tile_using_for</code> (transform::TileUsingForOp) <a class=headline-hash href=#transformstructuredtile_using_for-transformtileusingforop>¶</a></h3><p>Syntax:</p><pre tabindex=0><code>operation ::= `transform.structured.tile_using_for` $target `tile_sizes` custom<DynamicIndexList>( $dynamic_sizes, $static_sizes, $scalable_sizes) (`interchange` `=` $interchange^)? attr-dict `:` functional-type(operands, results) </code></pre><p>Indicates that the given <code>target</code> op should be tiled with the given sizes. This transform generates a loop nest with a smaller (“tiled”) target operation in its body. Currently limited to LinalgOps.</p><p>Tile sizes may be known at transformation time, in which case they are expected to be provided in the <code>static_sizes</code> attribute, or not, in which case the tile value must be computed by the payload IR and the handle to the operation computing it must be provided through <code>dynamic_sizes</code>. 
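</p><p>For example, a purely static tiling of a matmul might be written as the following sketch (handle names are illustrative):</p><pre tabindex=0><code>%tiled, %loop1, %loop2 =
  transform.structured.tile_using_for %matmul tile_sizes [8, 16]
  : (!transform.any_op) -> (!transform.any_op, !transform.any_op, !transform.any_op)
</code></pre><p>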
When the sizes are not known statically, the corresponding entry in the <code>static_sizes</code> attribute must be set to <code>ShapedType::kDynamic</code>. Only the dynamic sizes must be provided in <code>dynamic_sizes</code>, i.e., there should be as many handles as <code>ShapedType::kDynamic</code> values in the <code>static_sizes</code> attribute. A static size of <code>0</code> indicates that the dimension should not be tiled. No loop will be generated for such dimensions. If all tile sizes are <code>0</code>, this transform is effectively a no-op.</p><p>This op returns handles to the tiled op (in the generated loop nest) and the generated loops. The number of loops is the number of tile sizes that are statically known to be non-zero.</p><h4 id=return-modes-65>Return modes <a class=headline-hash href=#return-modes-65>¶</a></h4><p>On success, the resulting handles are associated with co-indexed lists of tiled operations and loops around them.</p><p>This operation only supports Linalg ops and produces a silenceable failure if the input contains any non-Linalg ops. 
The ops preceding it in the list associated with the <code>target</code> handle will have been tiled.</p><p>This operation produces a silenceable failure if the <code>dynamic_sizes</code> handles are associated with lists of payload operations of a size different than that of the list associated with the <code>target</code> handle.</p><p>If the internal implementation of tiling for any of the operations fails, this transform produces a definite failure.</p><p>Traits: <code>ReportTrackingListenerFailuresOpTrait</code></p><p>Interfaces: <code>MemoryEffectOpInterface</code>, <code>TransformOpInterface</code></p><h4 id=attributes-68>Attributes: <a class=headline-hash href=#attributes-68>¶</a></h4><table><tr><th>Attribute</th><th>MLIR Type</th><th>Description</th></tr><tr><td><code>static_sizes</code></td><td>::mlir::DenseI64ArrayAttr</td><td>i64 dense array attribute</td></tr><tr><td><code>interchange</code></td><td>::mlir::DenseI64ArrayAttr</td><td>i64 dense array attribute</td></tr><tr><td><code>scalable_sizes</code></td><td>::mlir::DenseBoolArrayAttr</td><td>i1 dense array attribute</td></tr></table><h4 id=operands-110>Operands: <a class=headline-hash href=#operands-110>¶</a></h4><table><thead><tr><th style=text-align:center>Operand</th><th>Description</th></tr></thead><tbody><tr><td style=text-align:center><code>target</code></td><td>TransformHandleTypeInterface instance</td></tr><tr><td style=text-align:center><code>dynamic_sizes</code></td><td>variadic of transform any param type or any handle type</td></tr></tbody></table><h4 id=results-84>Results: <a class=headline-hash href=#results-84>¶</a></h4><table><thead><tr><th style=text-align:center>Result</th><th>Description</th></tr></thead><tbody><tr><td style=text-align:center><code>tiled_linalg_op</code></td><td>TransformHandleTypeInterface instance</td></tr><tr><td style=text-align:center><code>loops</code></td><td>variadic of TransformHandleTypeInterface instance</td></tr></tbody></table><h3 
id=transformstructuredtile_using_forall-transformtileusingforallop><code>transform.structured.tile_using_forall</code> (transform::TileUsingForallOp) <a class=headline-hash href=#transformstructuredtile_using_forall-transformtileusingforallop>¶</a></h3><p>Syntax:</p><pre tabindex=0><code>operation ::= `transform.structured.tile_using_forall` $target oilist( `num_threads` custom<PackedOrDynamicIndexList>($packed_num_threads, $num_threads, $static_num_threads) | `tile_sizes` custom<PackedOrDynamicIndexList>($packed_tile_sizes, $tile_sizes, $static_tile_sizes)) (`(` `mapping` `=` $mapping^ `)`)? attr-dict `:` functional-type(operands, results) </code></pre><p>Tile a TilingInterface op to a tiled <code>scf.forall</code>.</p><p>Tiling is applied by specifying either <code>num_threads</code> or <code>tile_sizes</code>. If <code>num_threads</code> is specified, then the tile size for each dimension <code>i</code> is calculated dynamically via <code>ceilDiv(dimSize[i], num_threads[i])</code>. <code>num_threads</code> and <code>tile_sizes</code> can be either static index attributes or operation handles (or a mix thereof). Operation handles must be mapped to exactly one op that has exactly one result of index type.</p><p>Static zero tile sizes indicate that the dimension is not tiled and can be thought of as tiling by the full size of data.</p><p>It is the user’s responsibility to ensure that <code>num_threads/tile_sizes</code> is a valid tiling specification (i.e. that it only tiles parallel dimensions, e.g. in the Linalg case). If the dimension is not parallelizable, a warning is issued to notify the user that the generated code is not safe to parallelize.</p><p>If non-empty, the <code>mapping</code> is added as an attribute to the resulting <code>scf.forall</code>.</p><p>Note: <code>tile_sizes</code> and <code>num_threads</code> are variadic. 
Each tile size/number of threads can be an index attribute or a transform handle that is mapped to exactly one payload op with exactly one index result.</p><h4 id=return-modes-66>Return modes <a class=headline-hash href=#return-modes-66>¶</a></h4><p>This operation ignores ops that do not implement the TilingInterface and drops them in the return.</p><p>If all the operations referred to by the <code>target</code> handle tile successfully, the transform succeeds. Otherwise the transform produces a silenceable failure.</p><p>The two returned handles point to only the subset of successfully produced tiled operations, which can all be empty.</p><p>These two returned handles point to:</p><ul><li>the tiled op that implements TilingInterface,</li><li>the new scf.forall op.</li></ul><h4 id=example-using-num_threads>Example using <code>num_threads</code> <a class=headline-hash href=#example-using-num_threads>¶</a></h4><pre tabindex=0><code>%0 = transform.structured.match ops{["linalg.matmul"]} in %arg1 : (!transform.any_op) -> !transform.any_op %3:2 = transform.structured.tile_using_forall %0 num_threads [10, 20] : (!transform.any_op) -> (!transform.any_op, !transform.any_op) </code></pre><h4 id=example-using-tile_sizes>Example using <code>tile_sizes</code> <a class=headline-hash href=#example-using-tile_sizes>¶</a></h4><pre tabindex=0><code>%0 = transform.structured.match ops{["linalg.matmul"]} in %arg1 : (!transform.any_op) -> !transform.any_op %sz = transform.structured.match ... 
%3:2 = transform.structured.tile_using_forall %0 tile_sizes [0, %sz, 20] : (!transform.any_op, !transform.any_op) -> (!transform.any_op, !transform.any_op) </code></pre><p>Traits: <code>AttrSizedOperandSegments</code>, <code>ReportTrackingListenerFailuresOpTrait</code></p><p>Interfaces: <code>MemoryEffectOpInterface</code>, <code>TransformOpInterface</code></p><h4 id=attributes-69>Attributes: <a class=headline-hash href=#attributes-69>¶</a></h4><table><tr><th>Attribute</th><th>MLIR Type</th><th>Description</th></tr><tr><td><code>static_num_threads</code></td><td>::mlir::DenseI64ArrayAttr</td><td>i64 dense array attribute</td></tr><tr><td><code>static_tile_sizes</code></td><td>::mlir::DenseI64ArrayAttr</td><td>i64 dense array attribute</td></tr><tr><td><code>mapping</code></td><td>::mlir::ArrayAttr</td><td>Device Mapping array attribute</td></tr></table><h4 id=operands-111>Operands: <a class=headline-hash href=#operands-111>¶</a></h4><table><thead><tr><th style=text-align:center>Operand</th><th>Description</th></tr></thead><tbody><tr><td style=text-align:center><code>target</code></td><td>TransformHandleTypeInterface instance</td></tr><tr><td style=text-align:center><code>num_threads</code></td><td>variadic of transform any param type or any handle type</td></tr><tr><td style=text-align:center><code>tile_sizes</code></td><td>variadic of transform any param type or any handle type</td></tr><tr><td style=text-align:center><code>packed_num_threads</code></td><td>transform any param type or any handle type</td></tr><tr><td style=text-align:center><code>packed_tile_sizes</code></td><td>transform any param type or any handle type</td></tr></tbody></table><h4 id=results-85>Results: <a class=headline-hash href=#results-85>¶</a></h4><table><thead><tr><th style=text-align:center>Result</th><th>Description</th></tr></thead><tbody><tr><td style=text-align:center><code>tiled_op</code></td><td>TransformHandleTypeInterface instance</td></tr><tr><td 
style=text-align:center><code>forall_op</code></td><td>TransformHandleTypeInterface instance</td></tr></tbody></table><h3 id=transformstructuredtranspose_conv2d-transformtransposeconv2dop><code>transform.structured.transpose_conv2d</code> (transform::TransposeConv2DOp) <a class=headline-hash href=#transformstructuredtranspose_conv2d-transformtransposeconv2dop>¶</a></h3><p>Syntax:</p><pre tabindex=0><code>operation ::= `transform.structured.transpose_conv2d` $target attr-dict `:` functional-type($target, results) </code></pre><p>Convert linalg.conv_2d_nhwc_fhwc into linalg.conv_2d_nhwc_hwcf by introducing a linalg.transpose on the filter tensor/memref.</p><p>Whilst the fhwc filter channel ordering can be desirable for certain targets and is a more direct mapping to higher level dialects such as TOSA (which only supports this ordering), hwcf is better suited for transformations such as img2col, which can make use of optimized BLAS routines such as GEMM.</p><p>Returns one handle:</p><ul><li>The final operation of the sequence that replaces the original convolution.</li></ul><h4 id=return-modes-67>Return modes: <a class=headline-hash href=#return-modes-67>¶</a></h4><p>Returns a definite failure if target is not isolated from above. 
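</p><p>A minimal invocation sketch (the handle name is illustrative):</p><pre tabindex=0><code>%transposed = transform.structured.transpose_conv2d %conv
  : (!transform.any_op) -> !transform.any_op
</code></pre><p>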
Returns a silenceable failure if the pattern application failed.</p><p>Traits: <code>FunctionalStyleTransformOpTrait</code>, <code>ReportTrackingListenerFailuresOpTrait</code>, <code>TransformEachOpTrait</code></p><p>Interfaces: <code>MemoryEffectsOpInterface</code>, <code>TransformOpInterface</code></p><h4 id=operands-112>Operands: <a class=headline-hash href=#operands-112>¶</a></h4><table><thead><tr><th style=text-align:center>Operand</th><th>Description</th></tr></thead><tbody><tr><td style=text-align:center><code>target</code></td><td>TransformHandleTypeInterface instance</td></tr></tbody></table><h4 id=results-86>Results: <a class=headline-hash href=#results-86>¶</a></h4><table><thead><tr><th style=text-align:center>Result</th><th>Description</th></tr></thead><tbody><tr><td style=text-align:center><code>transformed</code></td><td>TransformHandleTypeInterface instance</td></tr></tbody></table><h3 id=transformstructuredtranspose_matmul-transformtransposematmulop><code>transform.structured.transpose_matmul</code> (transform::TransposeMatmulOp) <a class=headline-hash href=#transformstructuredtranspose_matmul-transformtransposematmulop>¶</a></h3><p>Syntax:</p><pre tabindex=0><code>operation ::= `transform.structured.transpose_matmul` $target (`<` $inputToTranspose^ `>`)? attr-dict `:` functional-type($target, results) </code></pre><p>Convert Linalg matmul ops to transposed variants.</p><p>By default the LHS matrix is transposed. Specify <code><rhs></code> to instead transpose the RHS matrix.</p><h4 id=return-modes-68>Return modes: <a class=headline-hash href=#return-modes-68>¶</a></h4><p>This operation fails if <code>target</code> is unsupported, i.e., not a <code>linalg.matmul</code> or <code>linalg.batch_matmul</code>. 
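</p><p>For example, transposing the RHS instead of the default LHS might be written as the following sketch (the handle name is illustrative):</p><pre tabindex=0><code>%transposed = transform.structured.transpose_matmul %matmul <rhs>
  : (!transform.any_op) -> !transform.any_op
</code></pre><p>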
Otherwise, the operation succeeds and returns a handle to the transposed matmul op.</p><p>Traits: <code>FunctionalStyleTransformOpTrait</code>, <code>ReportTrackingListenerFailuresOpTrait</code>, <code>TransformEachOpTrait</code></p><p>Interfaces: <code>MemoryEffectsOpInterface</code>, <code>TransformOpInterface</code></p><h4 id=attributes-70>Attributes: <a class=headline-hash href=#attributes-70>¶</a></h4><table><tr><th>Attribute</th><th>MLIR Type</th><th>Description</th></tr><tr><td><code>inputToTranspose</code></td><td>mlir::transform::TransposeMatmulInputAttr</td><td><details><summary>Input to transpose when converting matmul ops to transposed variants</summary><p>Enum cases:</p><ul><li>lhs (<code>lhs</code>)</li><li>rhs (<code>rhs</code>)</li></ul></details></td></tr></table><h4 id=operands-113>Operands: <a class=headline-hash href=#operands-113>¶</a></h4><table><thead><tr><th style=text-align:center>Operand</th><th>Description</th></tr></thead><tbody><tr><td style=text-align:center><code>target</code></td><td>TransformHandleTypeInterface instance</td></tr></tbody></table><h4 id=results-87>Results: <a class=headline-hash href=#results-87>¶</a></h4><table><thead><tr><th style=text-align:center>Result</th><th>Description</th></tr></thead><tbody><tr><td style=text-align:center><code>transformed</code></td><td>TransformHandleTypeInterface instance</td></tr></tbody></table><h3 id=transformstructuredvectorize_children_and_apply_patterns-transformvectorizechildrenandapplypatternsop><code>transform.structured.vectorize_children_and_apply_patterns</code> (transform::VectorizeChildrenAndApplyPatternsOp) <a class=headline-hash href=#transformstructuredvectorize_children_and_apply_patterns-transformvectorizechildrenandapplypatternsop>¶</a></h3><p>Syntax:</p><pre tabindex=0><code>operation ::= `transform.structured.vectorize_children_and_apply_patterns` $target attr-dict `:`functional-type(operands, results) </code></pre><p>Vectorizes all children contained in the given 
<code>target</code> using the configuration specified by the attributes of this op. This only vectorizes structured ops that operate on shaped types and does not vectorize loops or straight-line code. Internally, it applies a set of rewrite patterns, some of which enable vectorization and some of which clean up the results. Therefore, it can only be applied to an op with the “isolated from above” property. This transformation only fails if the entire pattern rewriting failed, i.e., it does <strong>not</strong> fail when no ops were vectorized.</p><p>Finer granularity can be achieved either with the <code>VectorizeOp</code> for individual ops or by outlining the target part of the payload IR into, e.g., a function, performing this transformation, and inlining it back.</p><p>Note that this transformation invalidates the handles to any payload IR operation that is contained inside the vectorization target.</p><p>This transformation supports the following attributes:</p><ul><li><code>vectorize_padding</code>: a <code>UnitAttr</code> to activate the vectorization of <code>tensor.pad</code> ops. Different pipelines may prefer to lower such ops to loops.</li><li><code>disable_multi_reduction_to_contract_patterns</code>: a <code>UnitAttr</code> to deactivate the rewrite of <code>vector.multi_reduction</code> to <code>vector.contract</code>. This is intended to be used in tests only.</li><li><code>disable_transfer_permutation_map_lowering_patterns</code>: a <code>UnitAttr</code> to deactivate the rewrite of <code>vector.transfer</code> with permutation maps into explicit <code>vector.transpose</code> operations. This is intended to be used in tests only but may be promoted to a first class attribute in the future.</li></ul><h4 id=return-modes-69>Return modes: <a class=headline-hash href=#return-modes-69>¶</a></h4><p>This operation produces a definite failure if vectorization fails for any reason. 
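</p><p>For example, vectorizing all structured ops inside a function, including <code>tensor.pad</code> ops, might be expressed as the following sketch (handle names are illustrative):</p><pre tabindex=0><code>%func = transform.structured.match ops{["func.func"]} in %arg1
  : (!transform.any_op) -> !transform.any_op
%vectorized = transform.structured.vectorize_children_and_apply_patterns %func {vectorize_padding}
  : (!transform.any_op) -> !transform.any_op
</code></pre><p>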
The operation always returns the handle to the target op that is expected to be isolated from above.</p><p>Traits: <code>FunctionalStyleTransformOpTrait</code>, <code>ReportTrackingListenerFailuresOpTrait</code>, <code>TransformEachOpTrait</code></p><p>Interfaces: <code>MemoryEffectsOpInterface</code>, <code>TransformOpInterface</code></p><h4 id=attributes-71>Attributes: <a class=headline-hash href=#attributes-71>¶</a></h4><table><tr><th>Attribute</th><th>MLIR Type</th><th>Description</th></tr><tr><td><code>vectorize_padding</code></td><td>::mlir::UnitAttr</td><td>unit attribute</td></tr><tr><td><code>vectorize_nd_extract</code></td><td>::mlir::UnitAttr</td><td>unit attribute</td></tr><tr><td><code>flatten_1d_depthwise_conv</code></td><td>::mlir::UnitAttr</td><td>unit attribute</td></tr><tr><td><code>disable_multi_reduction_to_contract_patterns</code></td><td>::mlir::UnitAttr</td><td>unit attribute</td></tr><tr><td><code>disable_transfer_permutation_map_lowering_patterns</code></td><td>::mlir::UnitAttr</td><td>unit attribute</td></tr></table><h4 id=operands-114>Operands: <a class=headline-hash href=#operands-114>¶</a></h4><table><thead><tr><th style=text-align:center>Operand</th><th>Description</th></tr></thead><tbody><tr><td style=text-align:center><code>target</code></td><td>TransformHandleTypeInterface instance</td></tr></tbody></table><h4 id=results-88>Results: <a class=headline-hash href=#results-88>¶</a></h4><table><thead><tr><th style=text-align:center>Result</th><th>Description</th></tr></thead><tbody><tr><td style=text-align:center><code>transformed</code></td><td>TransformHandleTypeInterface instance</td></tr></tbody></table><h3 id=transformstructuredvectorize-transformvectorizeop><code>transform.structured.vectorize</code> (transform::VectorizeOp) <a class=headline-hash href=#transformstructuredvectorize-transformvectorizeop>¶</a></h3><p>Syntax:</p><pre tabindex=0><code>operation ::= `transform.structured.vectorize` $target oilist( `vector_sizes` 
custom<DynamicIndexList>( $vector_sizes, $static_vector_sizes, $scalable_sizes)) attr-dict `:` type($target)(`,`type($vector_sizes)^)? </code></pre><p>Vectorize the target ops, which must be Linalg ops.</p><p>Use the optional vector sizes to specify exactly what configuration the vectorizer should use. It will then use masked vectors of the specified size to enforce this configuration (“masked vectorization”). If no vector sizes are specified, the vectorizer will infer the shapes to use from the target Linalg ops (“regular vectorization”). More specifically:</p><div class=highlight><pre tabindex=0 class=chroma><code class=language-mlir data-lang=mlir><span class=line><span class=cl><span class=err>#</span> Masked <span class=kt>vector</span>ization <span class=err>-</span> the specified <span class=kt>vector</span> sizes are enforced via masking </span></span><span class=line><span class=cl>transform<span class=p>.</span>structured<span class=p>.</span><span class=kt>vector</span>ize <span class=nv>%target</span> <span class=kt>vector</span>_sizes <span class=p>[</span><span class=m>1</span><span class=p>,</span> <span class=m>4</span><span class=p>]</span> <span class=p>:</span> <span class=p>!</span>transform<span class=p>.</span>any_op </span></span><span class=line><span class=cl><span class=err>#</span> Regular <span class=kt>vector</span>ization <span class=err>-</span> <span class=kt>vector</span> sizes are inferred from the target Op </span></span><span class=line><span class=cl>transform<span class=p>.</span>structured<span class=p>.</span><span class=kt>vector</span>ize <span class=nv>%target</span> <span class=p>:</span> <span class=p>!</span>transform<span class=p>.</span>any_op </span></span></code></pre></div><p>The vector sizes can be either static or dynamic (SSA values). 
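</p><p>As a sketch, a dynamic size can be supplied through an op handle; here <code>%size</code> is a placeholder for a handle to a payload op with a single index-typed result:</p><pre tabindex=0><code>transform.structured.vectorize %target vector_sizes [%size, 4] : !transform.any_op, !transform.any_op </code></pre><p>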
In case of SSA values, the handle must be mapped to exactly one payload op with exactly one index-typed result.</p><p>Note: The input vector sizes must be greater than or equal to their counterpart iteration space sizes.</p><p>Typically, this operation should be applied to linalg operations that have already been tiled to the appropriate sizes.</p><h4 id=return-modes-70>Return modes: <a class=headline-hash href=#return-modes-70>¶</a></h4><p>This operation produces a silenceable failure if at least one target op is not a Linalg op or fails to vectorize. It produces a definite failure if the dynamic vector sizes (SSA values) do not satisfy the constraints mentioned above.</p><p>Traits: <code>ReportTrackingListenerFailuresOpTrait</code></p><p>Interfaces: <code>MemoryEffectOpInterface</code>, <code>TransformOpInterface</code></p><h4 id=attributes-72>Attributes: <a class=headline-hash href=#attributes-72>¶</a></h4><table><tr><th>Attribute</th><th>MLIR Type</th><th>Description</th></tr><tr><td><code>static_vector_sizes</code></td><td>::mlir::DenseI64ArrayAttr</td><td>i64 dense array attribute</td></tr><tr><td><code>vectorize_nd_extract</code></td><td>::mlir::UnitAttr</td><td>unit attribute</td></tr><tr><td><code>scalable_sizes</code></td><td>::mlir::DenseBoolArrayAttr</td><td>i1 dense array attribute</td></tr></table><h4 id=operands-115>Operands: <a class=headline-hash href=#operands-115>¶</a></h4><table><thead><tr><th style=text-align:center>Operand</th><th>Description</th></tr></thead><tbody><tr><td style=text-align:center><code>target</code></td><td>TransformHandleTypeInterface instance</td></tr><tr><td style=text-align:center><code>vector_sizes</code></td><td>variadic of transform any param type or any handle type</td></tr></tbody></table><h3 id=transformstructuredwinograd_conv2d-transformwinogradconv2dop><code>transform.structured.winograd_conv2d</code> (transform::WinogradConv2DOp) <a class=headline-hash 
href=#transformstructuredwinograd_conv2d-transformwinogradconv2dop>¶</a></h3><p>Syntax:</p><pre tabindex=0><code>operation ::= `transform.structured.winograd_conv2d` $target attr-dict `:` functional-type($target, results) </code></pre><p>The Winograd Conv2D algorithm converts a linalg Conv2D operation into a batched matrix multiply. Before the matrix multiply, it converts the filter and the input into a format suitable for batched matrix multiplication. After the matrix multiply, it converts the output back to the final result tensor.</p><p>The algorithm F(m x m, r x r) is</p><p>Y = A^T x [(G x g x G^T) @ (B^T x d x B)] x A</p><p>The size of output Y is m x m. The size of filter g is r x r. The size of input d is (m + r - 1) x (m + r - 1). A^T, A, G^T, G, B^T, and B are transformation matrices.</p><h4 id=return-modes-71>Return modes: <a class=headline-hash href=#return-modes-71>¶</a></h4><p>This operation produces a silenceable failure if <code>target</code> is unsupported. Otherwise, the operation succeeds and returns a handle to the sequence of operations that replaces the original convolution.</p><p>Traits: <code>FunctionalStyleTransformOpTrait</code>, <code>ReportTrackingListenerFailuresOpTrait</code>, <code>TransformEachOpTrait</code></p><p>Interfaces: <code>MemoryEffectsOpInterface</code>, <code>TransformOpInterface</code></p><h4 id=attributes-73>Attributes: <a class=headline-hash href=#attributes-73>¶</a></h4><table><tr><th>Attribute</th><th>MLIR Type</th><th>Description</th></tr><tr><td><code>m</code></td><td>::mlir::IntegerAttr</td><td>64-bit signless integer attribute</td></tr><tr><td><code>r</code></td><td>::mlir::IntegerAttr</td><td>64-bit signless integer attribute</td></tr></table><h4 id=operands-116>Operands: <a class=headline-hash href=#operands-116>¶</a></h4><table><thead><tr><th style=text-align:center>Operand</th><th>Description</th></tr></thead><tbody><tr><td style=text-align:center><code>target</code></td><td>TransformHandleTypeInterface instance</td></tr></tbody></table><h4 
id=results-89>Results: <a class=headline-hash href=#results-89>¶</a></h4><table><thead><tr><th style=text-align:center>Result</th><th>Description</th></tr></thead><tbody><tr><td style=text-align:center><code>transformed</code></td><td>TransformHandleTypeInterface instance</td></tr></tbody></table><h2 id=tensor-transform-operations>Tensor Transform Operations <a class=headline-hash href=#tensor-transform-operations>¶</a></h2><p><a href=https://github.com/llvm/llvm-project/blob/main/mlir/include/mlir/Dialect/Tensor/TransformOps/TensorTransformOps.td>source</a></p><h3 id=transformapply_patternstensordecompose_concat-transformapplydecomposetensorconcatpatternsop><code>transform.apply_patterns.tensor.decompose_concat</code> (transform::ApplyDecomposeTensorConcatPatternsOp) <a class=headline-hash href=#transformapply_patternstensordecompose_concat-transformapplydecomposetensorconcatpatternsop>¶</a></h3><p>Syntax:</p><pre tabindex=0><code>operation ::= `transform.apply_patterns.tensor.decompose_concat` attr-dict </code></pre><p>Indicates that tensor.concat ops should be decomposed into a chain of tensor.insert_slice operations inserting into a materialized destination.</p><p>Interfaces: <code>PatternDescriptorOpInterface</code></p><h3 id=transformapply_patternstensordrop_redundant_insert_slice_rank_expansion-transformapplydropredundantinsertslicerankexpansionpatternsop><code>transform.apply_patterns.tensor.drop_redundant_insert_slice_rank_expansion</code> (transform::ApplyDropRedundantInsertSliceRankExpansionPatternsOp) <a class=headline-hash href=#transformapply_patternstensordrop_redundant_insert_slice_rank_expansion-transformapplydropredundantinsertslicerankexpansionpatternsop>¶</a></h3><p>Syntax:</p><pre tabindex=0><code>operation ::= `transform.apply_patterns.tensor.drop_redundant_insert_slice_rank_expansion` attr-dict </code></pre><p>Indicates that redundant tensor.insert_slice rank reductions should be dropped. 
E.g., cases where a tensor.extract_slice rank reduction immediately follows an inverse tensor.insert_slice rank expansion.</p><p>Interfaces: <code>PatternDescriptorOpInterface</code></p><h3 id=transformapply_patternstensorfold_into_pack_and_unpack-transformapplyfoldintopackandunpackpatternsop><code>transform.apply_patterns.tensor.fold_into_pack_and_unpack</code> (transform::ApplyFoldIntoPackAndUnpackPatternsOp) <a class=headline-hash href=#transformapply_patternstensorfold_into_pack_and_unpack-transformapplyfoldintopackandunpackpatternsop>¶</a></h3><p>Syntax:</p><pre tabindex=0><code>operation ::= `transform.apply_patterns.tensor.fold_into_pack_and_unpack` attr-dict </code></pre><p>Indicates that operations like tensor.pad and tensor.extract_slice should be folded into tensor.pack and tensor.unpack operations, respectively.</p><p>Interfaces: <code>PatternDescriptorOpInterface</code></p><h3 id=transformapply_patternstensorfold_tensor_empty-transformapplyfoldtensoremptypatternsop><code>transform.apply_patterns.tensor.fold_tensor_empty</code> (transform::ApplyFoldTensorEmptyPatternsOp) <a class=headline-hash href=#transformapply_patternstensorfold_tensor_empty-transformapplyfoldtensoremptypatternsop>¶</a></h3><p>Syntax:</p><pre tabindex=0><code>operation ::= `transform.apply_patterns.tensor.fold_tensor_empty` attr-dict </code></pre><p>Indicates that tensor.extract_slice and reassociative reshapes should be folded into tensor.empty.</p><p>If <code>fold_single_use_only</code> is set to “true”, only tensor.empty ops that have a single use are folded.</p><p>Interfaces: <code>PatternDescriptorOpInterface</code></p><h4 id=attributes-74>Attributes: <a class=headline-hash href=#attributes-74>¶</a></h4><table><tr><th>Attribute</th><th>MLIR Type</th><th>Description</th></tr><tr><td><code>fold_single_use_only</code></td><td>::mlir::BoolAttr</td><td>bool attribute</td></tr></table><h3 
id=transformapply_patternstensorfold_tensor_subset_ops_into_vector_transfers-transformapplyfoldtensorsubsetopsintovectortransferspatternsop><code>transform.apply_patterns.tensor.fold_tensor_subset_ops_into_vector_transfers</code> (transform::ApplyFoldTensorSubsetOpsIntoVectorTransfersPatternsOp) <a class=headline-hash href=#transformapply_patternstensorfold_tensor_subset_ops_into_vector_transfers-transformapplyfoldtensorsubsetopsintovectortransferspatternsop>¶</a></h3><p>Syntax:</p><pre tabindex=0><code>operation ::= `transform.apply_patterns.tensor.fold_tensor_subset_ops_into_vector_transfers` attr-dict </code></pre><p>Indicates that tensor.extract_slice -> vector.transfer_read and vector.transfer_write -> tensor.insert_slice op chains should be folded into vector transfer read and write ops.</p><p>Interfaces: <code>PatternDescriptorOpInterface</code></p><h3 id=transformapply_patternstensorfold_tensor_subset_ops-transformapplyfoldtensorsubsetopspatternsop><code>transform.apply_patterns.tensor.fold_tensor_subset_ops</code> (transform::ApplyFoldTensorSubsetOpsPatternsOp) <a class=headline-hash href=#transformapply_patternstensorfold_tensor_subset_ops-transformapplyfoldtensorsubsetopspatternsop>¶</a></h3><p>Syntax:</p><pre tabindex=0><code>operation ::= `transform.apply_patterns.tensor.fold_tensor_subset_ops` attr-dict </code></pre><p>Indicates that tensor.empty should be folded with tensor.extract_slice, tensor.expand_shape and tensor.collapse_shape.</p><p>Interfaces: <code>PatternDescriptorOpInterface</code></p><h3 id=transformapply_patternstensormerge_consecutive_insert_extract_slice-transformapplymergeconsecutiveinsertextractslicepatternsop><code>transform.apply_patterns.tensor.merge_consecutive_insert_extract_slice</code> (transform::ApplyMergeConsecutiveInsertExtractSlicePatternsOp) <a class=headline-hash 
href=#transformapply_patternstensormerge_consecutive_insert_extract_slice-transformapplymergeconsecutiveinsertextractslicepatternsop>¶</a></h3><p>Syntax:</p><pre tabindex=0><code>operation ::= `transform.apply_patterns.tensor.merge_consecutive_insert_extract_slice` attr-dict </code></pre><p>Indicates that consecutive tensor.extract_slice/tensor.insert_slice ops should be merged into a single op. These patterns are not canonicalizations because the bufferization is sensitive to IR structure.</p><p>Interfaces: <code>PatternDescriptorOpInterface</code></p><h3 id=transformapply_patternstensorreassociative_reshape_folding-transformapplyreassociativereshapefoldingpatternsop><code>transform.apply_patterns.tensor.reassociative_reshape_folding</code> (transform::ApplyReassociativeReshapeFoldingPatternsOp) <a class=headline-hash href=#transformapply_patternstensorreassociative_reshape_folding-transformapplyreassociativereshapefoldingpatternsop>¶</a></h3><p>Syntax:</p><pre tabindex=0><code>operation ::= `transform.apply_patterns.tensor.reassociative_reshape_folding` attr-dict </code></pre><p>Indicates that reassociative reshapes (tensor.collapse_shape / tensor.expand_shape) should be folded with inverse rank expansions / rank reductions (via tensor.insert_slice / tensor.extract_slice).</p><p>Interfaces: <code>PatternDescriptorOpInterface</code></p><h3 id=transformapply_patternstensorrewrite_as_constant-transformapplyrewritetensoropsasconstantpatternsop><code>transform.apply_patterns.tensor.rewrite_as_constant</code> (transform::ApplyRewriteTensorOpsAsConstantPatternsOp) <a class=headline-hash href=#transformapply_patternstensorrewrite_as_constant-transformapplyrewritetensoropsasconstantpatternsop>¶</a></h3><p>Syntax:</p><pre tabindex=0><code>operation ::= `transform.apply_patterns.tensor.rewrite_as_constant` (`aggressive` $aggressive^)? 
attr-dict </code></pre><p>Indicates that tensor ops (such as tensor.generate) should be replaced with constants (arith.constant) when possible.</p><p>Interfaces: <code>PatternDescriptorOpInterface</code></p><h4 id=attributes-75>Attributes: <a class=headline-hash href=#attributes-75>¶</a></h4><table><tr><th>Attribute</th><th>MLIR Type</th><th>Description</th></tr><tr><td><code>aggressive</code></td><td>::mlir::UnitAttr</td><td>unit attribute</td></tr></table><h3 id=transformtensormake_loop_independent-transformmakeloopindependentop><code>transform.tensor.make_loop_independent</code> (transform::MakeLoopIndependentOp) <a class=headline-hash href=#transformtensormake_loop_independent-transformmakeloopindependentop>¶</a></h3><p>Syntax:</p><pre tabindex=0><code>operation ::= `transform.tensor.make_loop_independent` $target attr-dict `:` functional-type($target, $transformed) </code></pre><p>Rewrite the targeted ops such that their index-typed operands no longer depend on any loop induction variable of the <code>num_loops</code> enclosing <code>scf.for</code> loops. I.e., compute an upper bound that is independent of any such loop IV for every tensor dimension. The transformed op could then be hoisted from the <code>num_loops</code> enclosing loops. To preserve the original semantics, place a <code>tensor.extract_slice</code> inside the loop.</p><p>Currently supported operations are:</p><ul><li>tensor.empty: Replaced with a new tensor.empty with upper bound sizes, followed by a tensor.extract_slice.</li><li>tensor.pad: Replaced by an upper bound padding, followed by a tensor.extract_slice.</li></ul><h4 id=return-modes-72>Return modes <a class=headline-hash href=#return-modes-72>¶</a></h4><p>This operation fails if at least one induction variable could not be eliminated. 
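</p><p>As a sketch, making the targeted ops independent of the innermost enclosing loop might look as follows, where the handle names are placeholders:</p><pre tabindex=0><code>%slices = transform.tensor.make_loop_independent %target {num_loops = 1 : i64} : (!transform.any_op) -> !transform.any_op </code></pre><p>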
In case the targeted op is already independent of induction variables, this transform succeeds and returns the unmodified target op.</p><p>Otherwise, the returned handle points to a subset of the produced ops:</p><ul><li>tensor.empty: The returned handle points to the tensor.extract_slice op.</li><li>tensor.pad: The returned handle points to the tensor.extract_slice op.</li></ul><p>This transform op consumes the target handle and produces a result handle.</p><p>Traits: <code>FunctionalStyleTransformOpTrait</code>, <code>TransformEachOpTrait</code></p><p>Interfaces: <code>MemoryEffectsOpInterface</code>, <code>TransformOpInterface</code></p><h4 id=attributes-76>Attributes: <a class=headline-hash href=#attributes-76>¶</a></h4><table><tr><th>Attribute</th><th>MLIR Type</th><th>Description</th></tr><tr><td><code>num_loops</code></td><td>::mlir::IntegerAttr</td><td>64-bit signless integer attribute</td></tr></table><h4 id=operands-117>Operands: <a class=headline-hash href=#operands-117>¶</a></h4><table><thead><tr><th style=text-align:center>Operand</th><th>Description</th></tr></thead><tbody><tr><td style=text-align:center><code>target</code></td><td>TransformHandleTypeInterface instance</td></tr></tbody></table><h4 id=results-90>Results: <a class=headline-hash href=#results-90>¶</a></h4><table><thead><tr><th style=text-align:center>Result</th><th>Description</th></tr></thead><tbody><tr><td style=text-align:center><code>transformed</code></td><td>TransformHandleTypeInterface instance</td></tr></tbody></table><h3 id=transformtype_conversiontensorcast_shape_dynamic_dims-transformtypeconversioncastshapedynamicdimsop><code>transform.type_conversion.tensor.cast_shape_dynamic_dims</code> (transform::TypeConversionCastShapeDynamicDimsOp) <a class=headline-hash href=#transformtype_conversiontensorcast_shape_dynamic_dims-transformtypeconversioncastshapedynamicdimsop>¶</a></h3><p>Syntax:</p><pre tabindex=0><code>operation ::= 
`transform.type_conversion.tensor.cast_shape_dynamic_dims` (`ignore_dynamic_info` $ignore_dynamic_info^)? attr-dict </code></pre><p>Populates a type converter with conversion materialization functions that cast a tensor value between two cast-compatible tensors. See <code>tensor.cast</code> for more information on cast compatibility between tensors.</p><p>If <code>ignore_dynamic_info</code> is not set, this will set an additional constraint that source materializations do not cast dynamic dimensions to static ones.</p><p>Interfaces: <code>TypeConverterBuilderOpInterface</code></p><h4 id=attributes-77>Attributes: <a class=headline-hash href=#attributes-77>¶</a></h4><table><tr><th>Attribute</th><th>MLIR Type</th><th>Description</th></tr><tr><td><code>ignore_dynamic_info</code></td><td>::mlir::UnitAttr</td><td>unit attribute</td></tr></table><h2 id=vector-transform-operations>Vector Transform Operations <a class=headline-hash href=#vector-transform-operations>¶</a></h2><p><a href=https://github.com/llvm/llvm-project/blob/main/mlir/include/mlir/Dialect/Vector/TransformOps/VectorTransformOps.td>source</a></p><h3 id=transformapply_patternsvectorcast_away_vector_leading_one_dim-transformapplycastawayvectorleadingonedimpatternsop><code>transform.apply_patterns.vector.cast_away_vector_leading_one_dim</code> (transform::ApplyCastAwayVectorLeadingOneDimPatternsOp) <a class=headline-hash href=#transformapply_patternsvectorcast_away_vector_leading_one_dim-transformapplycastawayvectorleadingonedimpatternsop>¶</a></h3><p>Syntax:</p><pre tabindex=0><code>operation ::= `transform.apply_patterns.vector.cast_away_vector_leading_one_dim` attr-dict </code></pre><p>Collect a set of leading one dimension removal patterns.</p><p>These patterns insert vector.shape_cast to remove leading one dimensions to expose more canonical forms of read/write/insert/extract operations. 
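</p><p>Like the other pattern descriptor ops in this section, this op takes no operands and is meant to be nested in a <code>transform.apply_patterns</code> region, e.g. (the <code>%func</code> handle is a placeholder):</p><pre tabindex=0><code>transform.apply_patterns to %func { transform.apply_patterns.vector.cast_away_vector_leading_one_dim } : !transform.any_op </code></pre><p>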
With these casts in place, it becomes more likely that extract-insert pairs cancel out or that write-read pairs are forwarded.</p><p>Interfaces: <code>PatternDescriptorOpInterface</code></p><h3 id=transformapply_patternsvectordrop_unit_dims_with_shape_cast-transformapplydropunitdimwithshapecastpatternsop><code>transform.apply_patterns.vector.drop_unit_dims_with_shape_cast</code> (transform::ApplyDropUnitDimWithShapeCastPatternsOp) <a class=headline-hash href=#transformapply_patternsvectordrop_unit_dims_with_shape_cast-transformapplydropunitdimwithshapecastpatternsop>¶</a></h3><p>Syntax:</p><pre tabindex=0><code>operation ::= `transform.apply_patterns.vector.drop_unit_dims_with_shape_cast` attr-dict </code></pre><p>Apply vector patterns to fold unit dims with vector.shape_cast Ops:</p><ul><li>DropUnitDimFromElementwiseOps</li><li>DropUnitDimsFromScfForOp</li><li>DropUnitDimsFromTransposeOp</li></ul><p>Excludes patterns for vector.transfer Ops. This is complemented by shape_cast folding patterns.</p><p>Interfaces: <code>PatternDescriptorOpInterface</code></p><h3 id=transformapply_patternsvectorfold_arith_extension-transformapplyfoldarithextensionpatternsop><code>transform.apply_patterns.vector.fold_arith_extension</code> (transform::ApplyFoldArithExtensionPatternsOp) <a class=headline-hash href=#transformapply_patternsvectorfold_arith_extension-transformapplyfoldarithextensionpatternsop>¶</a></h3><p>Syntax:</p><pre tabindex=0><code>operation ::= `transform.apply_patterns.vector.fold_arith_extension` attr-dict </code></pre><p>Collect a set of patterns that fold arithmetic extension on floating point into vector contract for the backends with native support.</p><p>Interfaces: <code>PatternDescriptorOpInterface</code></p><h3 id=transformapply_patternsvectorelementwise_to_vector-transformapplyfoldelementwisetovectorpatternsop><code>transform.apply_patterns.vector.elementwise_to_vector</code> (transform::ApplyFoldElementwiseToVectorPatternsOp) <a class=headline-hash 
href=#transformapply_patternsvectorelementwise_to_vector-transformapplyfoldelementwisetovectorpatternsop>¶</a></h3><p>Syntax:</p><pre tabindex=0><code>operation ::= `transform.apply_patterns.vector.elementwise_to_vector` attr-dict </code></pre><p>Collect a set of patterns that fold elementwise op on vectors to the vector dialect.</p><p>Interfaces: <code>PatternDescriptorOpInterface</code></p><h3 id=transformapply_patternsvectorinterleave_to_shuffle-transformapplyinterleavetoshufflepatternsop><code>transform.apply_patterns.vector.interleave_to_shuffle</code> (transform::ApplyInterleaveToShufflePatternsOp) <a class=headline-hash href=#transformapply_patternsvectorinterleave_to_shuffle-transformapplyinterleavetoshufflepatternsop>¶</a></h3><p>Syntax:</p><pre tabindex=0><code>operation ::= `transform.apply_patterns.vector.interleave_to_shuffle` attr-dict </code></pre><p>Indicates that 1D vector interleave operations should be rewritten as vector shuffle operations.</p><p>This is motivated by some current codegen backends not handling vector interleave operations.</p><p>Interfaces: <code>PatternDescriptorOpInterface</code></p><h3 id=transformapply_patternsvectorlower_bitcast-transformapplylowerbitcastpatternsop><code>transform.apply_patterns.vector.lower_bitcast</code> (transform::ApplyLowerBitCastPatternsOp) <a class=headline-hash href=#transformapply_patternsvectorlower_bitcast-transformapplylowerbitcastpatternsop>¶</a></h3><p>Syntax:</p><pre tabindex=0><code>operation ::= `transform.apply_patterns.vector.lower_bitcast` attr-dict </code></pre><p>Indicates that vector bitcast operations should be lowered to finer-grained vector primitives.</p><p>This is usually a late step that is run after bufferization as part of the process of lowering to e.g. 
LLVM or NVVM.</p><p>Interfaces: <code>PatternDescriptorOpInterface</code></p><h3 id=transformapply_patternsvectorlower_broadcast-transformapplylowerbroadcastpatternsop><code>transform.apply_patterns.vector.lower_broadcast</code> (transform::ApplyLowerBroadcastPatternsOp) <a class=headline-hash href=#transformapply_patternsvectorlower_broadcast-transformapplylowerbroadcastpatternsop>¶</a></h3><p>Syntax:</p><pre tabindex=0><code>operation ::= `transform.apply_patterns.vector.lower_broadcast` attr-dict </code></pre><p>Indicates that vector broadcast operations should be lowered to finer-grained vector primitives.</p><p>This is usually a late step that is run after bufferization as part of the process of lowering to e.g. LLVM or NVVM.</p><p>Interfaces: <code>PatternDescriptorOpInterface</code></p><h3 id=transformapply_patternsvectorlower_contraction-transformapplylowercontractionpatternsop><code>transform.apply_patterns.vector.lower_contraction</code> (transform::ApplyLowerContractionPatternsOp) <a class=headline-hash href=#transformapply_patternsvectorlower_contraction-transformapplylowercontractionpatternsop>¶</a></h3><p>Syntax:</p><pre tabindex=0><code>operation ::= `transform.apply_patterns.vector.lower_contraction` (`lowering_strategy` `=` $lowering_strategy^)? attr-dict </code></pre><p>Indicates that vector contraction-like operations should be lowered to finer-grained vector primitives.</p><p>This is usually a late step that is run after bufferization as part of the process of lowering to e.g. 
LLVM or NVVM.</p><p>Interfaces: <code>PatternDescriptorOpInterface</code></p><h4 id=attributes-78>Attributes: <a class=headline-hash href=#attributes-78>¶</a></h4><table><tr><th>Attribute</th><th>MLIR Type</th><th>Description</th></tr><tr><td><code>lowering_strategy</code></td><td>::mlir::vector::VectorContractLoweringAttr</td><td><details><summary>control the lowering of `vector.contract` operations.</summary><p>Enum cases:</p><ul><li>dot (<code>Dot</code>)</li><li>matmulintrinsics (<code>Matmul</code>)</li><li>outerproduct (<code>OuterProduct</code>)</li><li>parallelarith (<code>ParallelArith</code>)</li></ul></details></td></tr></table><h3 id=transformapply_patternsvectorlower_create_mask-transformapplylowercreatemaskpatternsop><code>transform.apply_patterns.vector.lower_create_mask</code> (transform::ApplyLowerCreateMaskPatternsOp) <a class=headline-hash href=#transformapply_patternsvectorlower_create_mask-transformapplylowercreatemaskpatternsop>¶</a></h3><p>Syntax:</p><pre tabindex=0><code>operation ::= `transform.apply_patterns.vector.lower_create_mask` attr-dict </code></pre><p>Indicates that vector create_mask-like operations should be lowered to finer-grained vector primitives.</p><p>Interfaces: <code>PatternDescriptorOpInterface</code></p><h3 id=transformapply_patternsvectorlower_gather-transformapplylowergatherpatternsop><code>transform.apply_patterns.vector.lower_gather</code> (transform::ApplyLowerGatherPatternsOp) <a class=headline-hash href=#transformapply_patternsvectorlower_gather-transformapplylowergatherpatternsop>¶</a></h3><p>Syntax:</p><pre tabindex=0><code>operation ::= `transform.apply_patterns.vector.lower_gather` attr-dict </code></pre><p>Indicates that vector.gather operations should be lowered to finer-grained vector primitives.</p><p>Interfaces: <code>PatternDescriptorOpInterface</code></p><h3 
id=transformapply_patternsvectorlower_interleave-transformapplylowerinterleavepatternsop><code>transform.apply_patterns.vector.lower_interleave</code> (transform::ApplyLowerInterleavePatternsOp) <a class=headline-hash href=#transformapply_patternsvectorlower_interleave-transformapplylowerinterleavepatternsop>¶</a></h3><p>Syntax:</p><pre tabindex=0><code>operation ::= `transform.apply_patterns.vector.lower_interleave` attr-dict </code></pre><p>Indicates that vector interleave operations should be lowered to finer-grained vector primitives.</p><p>This is usually a late step that is run after bufferization as part of the process of lowering to e.g. LLVM or NVVM.</p><p>Interfaces: <code>PatternDescriptorOpInterface</code></p><h3 id=transformapply_patternsvectorlower_masked_transfers-transformapplylowermaskedtransferspatternsop><code>transform.apply_patterns.vector.lower_masked_transfers</code> (transform::ApplyLowerMaskedTransfersPatternsOp) <a class=headline-hash href=#transformapply_patternsvectorlower_masked_transfers-transformapplylowermaskedtransferspatternsop>¶</a></h3><p>Syntax:</p><pre tabindex=0><code>operation ::= `transform.apply_patterns.vector.lower_masked_transfers` attr-dict </code></pre><p>Apply opt-in patterns that lower vector.mask operations surrounding side-effecting ops:</p><ul><li>MaskedTransferReadOpPattern</li><li>MaskedTransferWriteOpPattern</li><li>MaskedGatherOpPattern</li></ul><p>This is usually a late step that is run after bufferization as part of the process of lowering to e.g. 
LLVM or NVVM.</p><p>Interfaces: <code>PatternDescriptorOpInterface</code></p><h3 id=transformapply_patternsvectorlower_masks-transformapplylowermaskspatternsop><code>transform.apply_patterns.vector.lower_masks</code> (transform::ApplyLowerMasksPatternsOp) <a class=headline-hash href=#transformapply_patternsvectorlower_masks-transformapplylowermaskspatternsop>¶</a></h3><p>Syntax:</p><pre tabindex=0><code>operation ::= `transform.apply_patterns.vector.lower_masks` attr-dict </code></pre><p>Indicates that vector.create_mask and vector.constant_mask operations should be lowered to finer-grained vector primitives.</p><p>This is usually a late step that is run after bufferization as part of the process of lowering to e.g. LLVM or NVVM.</p><p>Interfaces: <code>PatternDescriptorOpInterface</code></p><h3 id=transformapply_patternsvectorlower_multi_reduction-transformapplylowermultireductionpatternsop><code>transform.apply_patterns.vector.lower_multi_reduction</code> (transform::ApplyLowerMultiReductionPatternsOp) <a class=headline-hash href=#transformapply_patternsvectorlower_multi_reduction-transformapplylowermultireductionpatternsop>¶</a></h3><p>Syntax:</p><pre tabindex=0><code>operation ::= `transform.apply_patterns.vector.lower_multi_reduction` (`lowering_strategy` `=` $lowering_strategy^)? attr-dict </code></pre><p>Indicates that vector multi_reduction-like operations should be lowered to finer-grained vector primitives.</p><p>This is usually a late step that is run after bufferization as part of the process of lowering to e.g. 
LLVM or NVVM.</p><p>Interfaces: <code>PatternDescriptorOpInterface</code></p><h4 id=attributes-79>Attributes: <a class=headline-hash href=#attributes-79>¶</a></h4><table><tr><th>Attribute</th><th>MLIR Type</th><th>Description</th></tr><tr><td><code>lowering_strategy</code></td><td>::mlir::vector::VectorMultiReductionLoweringAttr</td><td><details><summary>control the lowering of `vector.multi_reduction`.</summary><p>Enum cases:</p><ul><li>innerparallel (<code>InnerParallel</code>)</li><li>innerreduction (<code>InnerReduction</code>)</li></ul></details></td></tr></table><h3 id=transformapply_patternsvectorlower_outerproduct-transformapplylowerouterproductpatternsop><code>transform.apply_patterns.vector.lower_outerproduct</code> (transform::ApplyLowerOuterProductPatternsOp) <a class=headline-hash href=#transformapply_patternsvectorlower_outerproduct-transformapplylowerouterproductpatternsop>¶</a></h3><p>Syntax:</p><pre tabindex=0><code>operation ::= `transform.apply_patterns.vector.lower_outerproduct` attr-dict </code></pre><p>Indicates that the vector outerproduct operations should be lowered to finer-grained vector primitives.</p><p>This is usually a late step that is run after bufferization as part of the process of lowering to e.g. 
LLVM or NVVM.</p><p>Interfaces: <code>PatternDescriptorOpInterface</code></p><h3 id=transformapply_patternsvectorlower_scan-transformapplylowerscanpatternsop><code>transform.apply_patterns.vector.lower_scan</code> (transform::ApplyLowerScanPatternsOp) <a class=headline-hash href=#transformapply_patternsvectorlower_scan-transformapplylowerscanpatternsop>¶</a></h3><p>Syntax:</p><pre tabindex=0><code>operation ::= `transform.apply_patterns.vector.lower_scan` attr-dict </code></pre><p>Indicates that vector.scan operations should be lowered to finer-grained vector primitives.</p><p>Interfaces: <code>PatternDescriptorOpInterface</code></p><h3 id=transformapply_patternsvectorlower_shape_cast-transformapplylowershapecastpatternsop><code>transform.apply_patterns.vector.lower_shape_cast</code> (transform::ApplyLowerShapeCastPatternsOp) <a class=headline-hash href=#transformapply_patternsvectorlower_shape_cast-transformapplylowershapecastpatternsop>¶</a></h3><p>Syntax:</p><pre tabindex=0><code>operation ::= `transform.apply_patterns.vector.lower_shape_cast` attr-dict </code></pre><p>Indicates that vector shape_cast operations should be lowered to finer-grained vector primitives.</p><p>This is usually a late step that is run after bufferization as part of the process of lowering to e.g. LLVM or NVVM.</p><p>Interfaces: <code>PatternDescriptorOpInterface</code></p><h3 id=transformapply_patternsvectorlower_transfer-transformapplylowertransferpatternsop><code>transform.apply_patterns.vector.lower_transfer</code> (transform::ApplyLowerTransferPatternsOp) <a class=headline-hash href=#transformapply_patternsvectorlower_transfer-transformapplylowertransferpatternsop>¶</a></h3><p>Syntax:</p><pre tabindex=0><code>operation ::= `transform.apply_patterns.vector.lower_transfer` (`max_transfer_rank` `=` $max_transfer_rank^)? 
attr-dict </code></pre><p>Indicates that vector transfer operations should be lowered to finer-grained vector primitives.</p><p>This is usually a late step that is run after bufferization as part of the process of lowering to e.g. LLVM or NVVM.</p><p>Interfaces: <code>PatternDescriptorOpInterface</code></p><h4 id=attributes-80>Attributes: <a class=headline-hash href=#attributes-80>¶</a></h4><table><tr><th>Attribute</th><th>MLIR Type</th><th>Description</th></tr><tr><td><code>max_transfer_rank</code></td><td>::mlir::IntegerAttr</td><td>64-bit signless integer attribute</td></tr></table><h3 id=transformapply_patternsvectorlower_transpose-transformapplylowertransposepatternsop><code>transform.apply_patterns.vector.lower_transpose</code> (transform::ApplyLowerTransposePatternsOp) <a class=headline-hash href=#transformapply_patternsvectorlower_transpose-transformapplylowertransposepatternsop>¶</a></h3><p>Syntax:</p><pre tabindex=0><code>operation ::= `transform.apply_patterns.vector.lower_transpose` oilist ( `lowering_strategy` `=` $lowering_strategy | `avx2_lowering_strategy` `=` $avx2_lowering_strategy ) attr-dict </code></pre><p>Indicates that vector transpose-like operations should be lowered to finer-grained vector primitives.</p><p>This is usually a late step that is run after bufferization as part of the process of lowering to e.g. 
LLVM or NVVM.</p><p>Interfaces: <code>PatternDescriptorOpInterface</code></p><h4 id=attributes-81>Attributes: <a class=headline-hash href=#attributes-81>¶</a></h4><table><tr><th>Attribute</th><th>MLIR Type</th><th>Description</th></tr><tr><td><code>lowering_strategy</code></td><td>::mlir::vector::VectorTransposeLoweringAttr</td><td><details><summary>control the lowering of `vector.transpose` operations.</summary><p>Enum cases:</p><ul><li>eltwise (<code>EltWise</code>)</li><li>flat_transpose (<code>Flat</code>)</li><li>shuffle_1d (<code>Shuffle1D</code>)</li><li>shuffle_16x16 (<code>Shuffle16x16</code>)</li></ul></details></td></tr><tr><td><code>avx2_lowering_strategy</code></td><td>::mlir::BoolAttr</td><td>bool attribute</td></tr></table><h3 id=transformapply_patternsvectormaterialize_masks-transformapplymaterializemaskspatternsop><code>transform.apply_patterns.vector.materialize_masks</code> (transform::ApplyMaterializeMasksPatternsOp) <a class=headline-hash href=#transformapply_patternsvectormaterialize_masks-transformapplymaterializemaskspatternsop>¶</a></h3><p>Syntax:</p><pre tabindex=0><code>operation ::= `transform.apply_patterns.vector.materialize_masks` attr-dict </code></pre><p>Indicates that mask operations should be lowered to fine-grained arithmetic operations.</p><p>This is usually the last step that is run after bufferization as part of the process of lowering to e.g. 
LLVM or NVVM.</p><p>Interfaces: <code>PatternDescriptorOpInterface</code></p><h3 id=transformapply_patternsvectorrank_reducing_subview_patterns-transformapplyrankreducingsubviewpatternsop><code>transform.apply_patterns.vector.rank_reducing_subview_patterns</code> (transform::ApplyRankReducingSubviewPatternsOp) <a class=headline-hash href=#transformapply_patternsvectorrank_reducing_subview_patterns-transformapplyrankreducingsubviewpatternsop>¶</a></h3><p>Syntax:</p><pre tabindex=0><code>operation ::= `transform.apply_patterns.vector.rank_reducing_subview_patterns` attr-dict </code></pre><p>Apply opt-in vector transfer permutation patterns that include:</p><ul><li>TransferReadDropUnitDimsPattern</li><li>TransferWriteDropUnitDimsPattern</li></ul><p>These patterns have the effect of rewriting a vector.transfer with unit dimensions into a rank-reduced version thanks to subview operations. This is complemented by shape_cast folding patterns.</p><p>Interfaces: <code>PatternDescriptorOpInterface</code></p><h3 id=transformapply_patternsvectorrewrite_narrow_types-transformapplyrewritenarrowtypepatternsop><code>transform.apply_patterns.vector.rewrite_narrow_types</code> (transform::ApplyRewriteNarrowTypePatternsOp) <a class=headline-hash href=#transformapply_patternsvectorrewrite_narrow_types-transformapplyrewritenarrowtypepatternsop>¶</a></h3><p>Syntax:</p><pre tabindex=0><code>operation ::= `transform.apply_patterns.vector.rewrite_narrow_types` attr-dict </code></pre><p>Indicates that vector narrow rewrite operations should be applied.</p><p>This is usually a late step that is run after bufferization as part of the process of lowering to e.g. 
LLVM or NVVM.</p><p>Warning: these patterns currently only work for little endian targets.</p><p>Interfaces: <code>PatternDescriptorOpInterface</code></p><h3 id=transformapply_patternsvectorsplit_transfer_full_partial-transformapplysplittransferfullpartialpatternsop><code>transform.apply_patterns.vector.split_transfer_full_partial</code> (transform::ApplySplitTransferFullPartialPatternsOp) <a class=headline-hash href=#transformapply_patternsvectorsplit_transfer_full_partial-transformapplysplittransferfullpartialpatternsop>¶</a></h3><p>Syntax:</p><pre tabindex=0><code>operation ::= `transform.apply_patterns.vector.split_transfer_full_partial` (`split_transfer_strategy` `=` $split_transfer_strategy^)? attr-dict </code></pre><p>Indicates that vector transfer operations should be split to full and partial parts.</p><p>This is usually a late step that is run after bufferization as part of the process of lowering to e.g. LLVM or NVVM.</p><p>Interfaces: <code>PatternDescriptorOpInterface</code></p><h4 id=attributes-82>Attributes: <a class=headline-hash href=#attributes-82>¶</a></h4><table><tr><th>Attribute</th><th>MLIR Type</th><th>Description</th></tr><tr><td><code>split_transfer_strategy</code></td><td>::mlir::vector::VectorTransferSplitAttr</td><td><details><summary>control the splitting of `vector.transfer` operations into in-bounds and out-of-bounds variants.</summary><p>Enum cases:</p><ul><li>none (<code>None</code>)</li><li>vector-transfer (<code>VectorTransfer</code>)</li><li>linalg-copy (<code>LinalgCopy</code>)</li><li>force-in-bounds (<code>ForceInBounds</code>)</li></ul></details></td></tr></table><h3 id=transformapply_patternsvectortransfer_permutation_patterns-transformapplytransferpermutationpatternsop><code>transform.apply_patterns.vector.transfer_permutation_patterns</code> (transform::ApplyTransferPermutationPatternsOp) <a class=headline-hash 
href=#transformapply_patternsvectortransfer_permutation_patterns-transformapplytransferpermutationpatternsop>¶</a></h3><p>Syntax:</p><pre tabindex=0><code>operation ::= `transform.apply_patterns.vector.transfer_permutation_patterns` attr-dict </code></pre><p>Apply opt-in vector transfer permutation patterns that include:</p><ul><li>TransferReadPermutationLowering</li><li>TransferWritePermutationLowering</li><li>TransferOpReduceRank</li><li>TransferWriteNonPermutationLowering</li></ul><p>These patterns have the effect of rewriting a vector.transfer with an arbitrary permutation_map to a vector.transfer with a permutation_map that is a minor identity followed by a vector.transpose.</p><p>In other words, this makes the vector.transfer contiguous on the most minor dimensions and materializes the permutation_map as a vector.transpose.</p><p>Interfaces: <code>PatternDescriptorOpInterface</code></p><h3 id=transformapply_patternsvectortransfer_to_scf-transformapplytransfertoscfpatternsop><code>transform.apply_patterns.vector.transfer_to_scf</code> (transform::ApplyTransferToScfPatternsOp) <a class=headline-hash href=#transformapply_patternsvectortransfer_to_scf-transformapplytransfertoscfpatternsop>¶</a></h3><p>Syntax:</p><pre tabindex=0><code>operation ::= `transform.apply_patterns.vector.transfer_to_scf` oilist ( `max_transfer_rank` `=` $max_transfer_rank | `full_unroll` `=` $full_unroll ) attr-dict </code></pre><p>Indicates that vector transfer operations should be rewritten with scf.for loops over finer-grained vector primitives.</p><p>This is usually a late step that is run after bufferization as part of the process of lowering to e.g. 
LLVM or NVVM.</p><p>Interfaces: <code>PatternDescriptorOpInterface</code></p><h4 id=attributes-83>Attributes: <a class=headline-hash href=#attributes-83>¶</a></h4><table><tr><th>Attribute</th><th>MLIR Type</th><th>Description</th></tr><tr><td><code>max_transfer_rank</code></td><td>::mlir::IntegerAttr</td><td>64-bit signless integer attribute</td></tr><tr><td><code>full_unroll</code></td><td>::mlir::BoolAttr</td><td>bool attribute</td></tr></table><h3 id=transformapply_patternsvectorreduction_to_contract-transformapplyvectorreductiontocontractpatternsop><code>transform.apply_patterns.vector.reduction_to_contract</code> (transform::ApplyVectorReductionToContractPatternsOp) <a class=headline-hash href=#transformapply_patternsvectorreduction_to_contract-transformapplyvectorreductiontocontractpatternsop>¶</a></h3><p>Syntax:</p><pre tabindex=0><code>operation ::= `transform.apply_patterns.vector.reduction_to_contract` attr-dict </code></pre><p>Apply opt-in patterns that convert reductions to contract:</p><ul><li>MultiReduceToContract</li><li>CombineContractBroadcast</li><li>CombineContractABTranspose</li><li>CombineContractResultTranspose</li><li>ReorderElementwiseOpsOnTranspose</li><li>ReorderElementwiseOpsOnBroadcast</li><li>ReorderCastOpsOnBroadcast</li></ul><p>These patterns have the effect of rewriting a vector.multi_reduce into a vector.contract.</p><p>Interfaces: <code>PatternDescriptorOpInterface</code></p><h3 id=transformapply_conversion_patternsvectorvector_to_llvm-transformapplyvectortollvmconversionpatternsop><code>transform.apply_conversion_patterns.vector.vector_to_llvm</code> (transform::ApplyVectorToLLVMConversionPatternsOp) <a class=headline-hash href=#transformapply_conversion_patternsvectorvector_to_llvm-transformapplyvectortollvmconversionpatternsop>¶</a></h3><p>Syntax:</p><pre tabindex=0><code>operation ::= `transform.apply_conversion_patterns.vector.vector_to_llvm` attr-dict </code></pre><p>Collects patterns that convert vector dialect ops to LLVM 
dialect ops. These patterns require an “LLVMTypeConverter”.</p><p>The patterns can be customized as follows:</p><ul><li><code>reassociate_fp_reductions</code>: Allows LLVM to reassociate floating-point reductions for speed.</li><li><code>force_32bit_vector_indices</code>: Allows the compiler to assume that vector indices fit in 32-bit if that yields faster code.</li></ul><p>Interfaces: <code>ConversionPatternDescriptorOpInterface</code></p><h4 id=attributes-84>Attributes: <a class=headline-hash href=#attributes-84>¶</a></h4><table><tr><th>Attribute</th><th>MLIR Type</th><th>Description</th></tr><tr><td><code>reassociate_fp_reductions</code></td><td>::mlir::BoolAttr</td><td>bool attribute</td></tr><tr><td><code>force_32bit_vector_indices</code></td><td>::mlir::BoolAttr</td><td>bool attribute</td></tr></table><h2 id=transformhandletypeinterface-transformhandletypeinterface>TransformHandleTypeInterface (<code>TransformHandleTypeInterface</code>) <a class=headline-hash href=#transformhandletypeinterface-transformhandletypeinterface>¶</a></h2><p>Types that can be used for the Transform dialect operation handle values. Such types define the properties of Payload IR operations associated with the handle. 
A user of such a handle can assume that these properties have been verified for any Payload IR operation associated with it.</p><h3 id=methods>Methods: <a class=headline-hash href=#methods>¶</a></h3><h4 id=checkpayload><code>checkPayload</code> <a class=headline-hash href=#checkpayload>¶</a></h4><div class=highlight><pre tabindex=0 class=chroma><code class=language-c++ data-lang=c++><span class=line><span class=cl><span class=o>::</span><span class=n>mlir</span><span class=o>::</span><span class=n>DiagnosedSilenceableFailure</span> <span class=n>checkPayload</span><span class=p>(</span><span class=o>::</span><span class=n>mlir</span><span class=o>::</span><span class=n>Location</span> <span class=n>loc</span><span class=p>,</span> <span class=o>::</span><span class=n>mlir</span><span class=o>::</span><span class=n>ArrayRef</span><span class=o><::</span><span class=n>mlir</span><span class=o>::</span><span class=n>Operation</span> <span class=o>*></span> <span class=n>payload</span><span class=p>);</span> </span></span></code></pre></div><p>Checks if the given associated objects (Payload IR operations or attributes) satisfy the conditions defined by this type. If not, produces a silenceable error at the specified location.</p><p>NOTE: This method <em>must</em> be implemented by the user.</p><h2 id=transformparamtypeinterface-transformparamtypeinterface>TransformParamTypeInterface (<code>TransformParamTypeInterface</code>) <a class=headline-hash href=#transformparamtypeinterface-transformparamtypeinterface>¶</a></h2><p>Types that can be used for the Transform dialect parameter values. Such types define the structure of the parameters associated with the value, e.g., their underlying type. 
A user of the value can assume that the parameter has been verified.</p><h3 id=methods-1>Methods: <a class=headline-hash href=#methods-1>¶</a></h3><h4 id=checkpayload-1><code>checkPayload</code> <a class=headline-hash href=#checkpayload-1>¶</a></h4><div class=highlight><pre tabindex=0 class=chroma><code class=language-c++ data-lang=c++><span class=line><span class=cl><span class=o>::</span><span class=n>mlir</span><span class=o>::</span><span class=n>DiagnosedSilenceableFailure</span> <span class=n>checkPayload</span><span class=p>(</span><span class=o>::</span><span class=n>mlir</span><span class=o>::</span><span class=n>Location</span> <span class=n>loc</span><span class=p>,</span> <span class=o>::</span><span class=n>mlir</span><span class=o>::</span><span class=n>ArrayRef</span><span class=o><::</span><span class=n>mlir</span><span class=o>::</span><span class=n>Attribute</span><span class=o>></span> <span class=n>payload</span><span class=p>);</span> </span></span></code></pre></div><p>Checks if the given associated objects (Payload IR operations or attributes) satisfy the conditions defined by this type. If not, produces a silenceable error at the specified location.</p><p>NOTE: This method <em>must</em> be implemented by the user.</p><h2 id=transformvaluehandletypeinterface-transformvaluehandletypeinterface>TransformValueHandleTypeInterface (<code>TransformValueHandleTypeInterface</code>) <a class=headline-hash href=#transformvaluehandletypeinterface-transformvaluehandletypeinterface>¶</a></h2><p>Types that can be used for the Transform dialect handle values pointing to Payload IR values. Such types define the properties of Payload IR values associated with the handle. 
Users of such a handle can assume that these properties have been verified for any Payload IR value associated with it.</p><h3 id=methods-2>Methods: <a class=headline-hash href=#methods-2>¶</a></h3><h4 id=checkpayload-2><code>checkPayload</code> <a class=headline-hash href=#checkpayload-2>¶</a></h4><div class=highlight><pre tabindex=0 class=chroma><code class=language-c++ data-lang=c++><span class=line><span class=cl><span class=o>::</span><span class=n>mlir</span><span class=o>::</span><span class=n>DiagnosedSilenceableFailure</span> <span class=n>checkPayload</span><span class=p>(</span><span class=o>::</span><span class=n>mlir</span><span class=o>::</span><span class=n>Location</span> <span class=n>loc</span><span class=p>,</span> <span class=o>::</span><span class=n>mlir</span><span class=o>::</span><span class=n>ArrayRef</span><span class=o><::</span><span class=n>mlir</span><span class=o>::</span><span class=n>Value</span><span class=o>></span> <span class=n>payload</span><span class=p>);</span> </span></span></code></pre></div><p>Checks if the given associated objects (Payload IR operations or attributes) satisfy the conditions defined by this type. If not, produces a silenceable error at the specified location.</p><p>NOTE: This method <em>must</em> be implemented by the user.</p><h2 id=conversionpatterndescriptoropinterface-conversionpatterndescriptoropinterface>ConversionPatternDescriptorOpInterface (<code>ConversionPatternDescriptorOpInterface</code>) <a class=headline-hash href=#conversionpatterndescriptoropinterface-conversionpatterndescriptoropinterface>¶</a></h2><p>This interface should be implemented by ops that select conversion patterns of a <code>transform.apply_patterns</code> op. 
It provides a method to populate a rewrite pattern set with conversion patterns.</p><p>Note: Non-conversion rewrite patterns should not be populated with <code>ConversionPatternDescriptorOpInterface</code> because it is not generally safe to use non-conversion rewrite patterns as part of a dialect conversion.</p><h3 id=methods-3>Methods: <a class=headline-hash href=#methods-3>¶</a></h3><h4 id=populatepatterns><code>populatePatterns</code> <a class=headline-hash href=#populatepatterns>¶</a></h4><div class=highlight><pre tabindex=0 class=chroma><code class=language-c++ data-lang=c++><span class=line><span class=cl><span class=kt>void</span> <span class=nf>populatePatterns</span><span class=p>(</span><span class=o>::</span><span class=n>mlir</span><span class=o>::</span><span class=n>TypeConverter</span> <span class=o>&</span><span class=n>typeConverter</span><span class=p>,</span> <span class=o>::</span><span class=n>mlir</span><span class=o>::</span><span class=n>RewritePatternSet</span> <span class=o>&</span><span class=n>patterns</span><span class=p>);</span> </span></span></code></pre></div><p>Populate conversion patterns into the given pattern set with the given type converter.</p><p>NOTE: This method <em>must</em> be implemented by the user.</p><h4 id=populateconversiontargetrules><code>populateConversionTargetRules</code> <a class=headline-hash href=#populateconversiontargetrules>¶</a></h4><div class=highlight><pre tabindex=0 class=chroma><code class=language-c++ data-lang=c++><span class=line><span class=cl><span class=kt>void</span> <span class=nf>populateConversionTargetRules</span><span class=p>(</span><span class=k>const</span> <span class=o>::</span><span class=n>mlir</span><span class=o>::</span><span class=n>TypeConverter</span> <span class=o>&</span><span class=n>typeConverter</span><span class=p>,</span> <span class=o>::</span><span class=n>mlir</span><span class=o>::</span><span class=n>ConversionTarget</span> <span class=o>&</span><span 
class=n>conversionTarget</span><span class=p>);</span> </span></span></code></pre></div><p>Populate the ConversionTarget using the final TypeConverter. The default implementation is to do nothing. Overriding this method can be useful in order to setup the ConversionTarget for structural type conversions. In such a situation, an op’s legality depends on using the TypeConverter to determine whether the op’s operand and result types are legal (defined as converting to themselves).</p><p>NOTE: This method <em>must</em> be implemented by the user.</p><h4 id=gettypeconverter><code>getTypeConverter</code> <a class=headline-hash href=#gettypeconverter>¶</a></h4><div class=highlight><pre tabindex=0 class=chroma><code class=language-c++ data-lang=c++><span class=line><span class=cl><span class=n>std</span><span class=o>::</span><span class=n>unique_ptr</span><span class=o><::</span><span class=n>mlir</span><span class=o>::</span><span class=n>TypeConverter</span><span class=o>></span> <span class=n>getTypeConverter</span><span class=p>();</span> </span></span></code></pre></div><p>Return the type converter to be used with this pattern set. 
If no type converter is specified, the default type converter of the enclosing “apply_conversion_patterns” op is used.</p><p>NOTE: This method <em>must</em> be implemented by the user.</p><h4 id=verifytypeconverter><code>verifyTypeConverter</code> <a class=headline-hash href=#verifytypeconverter>¶</a></h4><div class=highlight><pre tabindex=0 class=chroma><code class=language-c++ data-lang=c++><span class=line><span class=cl><span class=o>::</span><span class=n>llvm</span><span class=o>::</span><span class=n>LogicalResult</span> <span class=n>verifyTypeConverter</span><span class=p>(</span><span class=n>TypeConverterBuilderOpInterface</span> <span class=n>builder</span><span class=p>);</span> </span></span></code></pre></div><p>Verify the default type converter that is provided by the enclosing “apply_conversion_patterns” op.</p><p>NOTE: This method <em>must</em> be implemented by the user.</p><h2 id=findpayloadreplacementopinterface-findpayloadreplacementopinterface>FindPayloadReplacementOpInterface (<code>FindPayloadReplacementOpInterface</code>) <a class=headline-hash href=#findpayloadreplacementopinterface-findpayloadreplacementopinterface>¶</a></h2><p>This interface is queried by the <code>TrackingListener</code> and can be implemented by payload ops to indicate that the lookup should continue with its operands when looking for payload op replacements.</p><p>Example: Consider the case where a tracked “test.foo” payload op is replaced with a new “test.foo” op, but wrapped in a “tensor.reshape” op. In that case, the mapping of the original “test.foo” op should be updated with the new “test.foo” op. A “tensor.reshape” is a metadata-only op that should be skipped when inspecting the replacement values of the original “test.foo” op. More details can be found in the <code>TrackingListener</code> documentation.</p><p>Note: Ops that implement <code>CastOpInterface</code> do not need to implement this interface. Such ops are skipped by default. 
This interface should be implemented by cast-like/metadata-only ops that cannot implement <code>CastOpInterface</code>.</p><h3 id=methods-4>Methods: <a class=headline-hash href=#methods-4>¶</a></h3><h4 id=getnextoperands><code>getNextOperands</code> <a class=headline-hash href=#getnextoperands>¶</a></h4><div class=highlight><pre tabindex=0 class=chroma><code class=language-c++ data-lang=c++><span class=line><span class=cl><span class=o>::</span><span class=n>llvm</span><span class=o>::</span><span class=n>SmallVector</span><span class=o><::</span><span class=n>mlir</span><span class=o>::</span><span class=n>Value</span><span class=o>></span> <span class=n>getNextOperands</span><span class=p>();</span> </span></span></code></pre></div><p>Return the operands at which the lookup for replacement payload ops should continue.</p><p>NOTE: This method <em>must</em> be implemented by the user.</p><h2 id=patterndescriptoropinterface-patterndescriptoropinterface>PatternDescriptorOpInterface (<code>PatternDescriptorOpInterface</code>) <a class=headline-hash href=#patterndescriptoropinterface-patterndescriptoropinterface>¶</a></h2><p>This interface should be implemented by ops that select rewrite patterns of a <code>transform.apply_patterns</code> op. 
It provides a method to populate a rewrite pattern set with patterns.</p><p>Note: Conversion patterns are rewrite patterns in MLIR, but they should not be populated with <code>PatternDescriptorOpInterface</code> because they cannot be used in a greedy pattern rewrite.</p><h3 id=methods-5>Methods: <a class=headline-hash href=#methods-5>¶</a></h3><h4 id=populatepatterns-1><code>populatePatterns</code> <a class=headline-hash href=#populatepatterns-1>¶</a></h4><div class=highlight><pre tabindex=0 class=chroma><code class=language-c++ data-lang=c++><span class=line><span class=cl><span class=kt>void</span> <span class=nf>populatePatterns</span><span class=p>(</span><span class=o>::</span><span class=n>mlir</span><span class=o>::</span><span class=n>RewritePatternSet</span> <span class=o>&</span><span class=n>patterns</span><span class=p>);</span> </span></span></code></pre></div><p>Populate rewrite patterns into the given pattern set.</p><p>NOTE: This method <em>must</em> be implemented by the user.</p><h4 id=populatepatternswithstate><code>populatePatternsWithState</code> <a class=headline-hash href=#populatepatternswithstate>¶</a></h4><div class=highlight><pre tabindex=0 class=chroma><code class=language-c++ data-lang=c++><span class=line><span class=cl><span class=kt>void</span> <span class=nf>populatePatternsWithState</span><span class=p>(</span><span class=o>::</span><span class=n>mlir</span><span class=o>::</span><span class=n>RewritePatternSet</span> <span class=o>&</span><span class=n>patterns</span><span class=p>,</span> <span class=o>::</span><span class=n>mlir</span><span class=o>::</span><span class=n>transform</span><span class=o>::</span><span class=n>TransformState</span> <span class=o>&</span><span class=n>state</span><span class=p>);</span> </span></span></code></pre></div><p>Populate rewrite patterns into the given pattern set taking into account the transform state.</p><p>NOTE: This method <em>must</em> be implemented by the user.</p><h2 
id=transformopinterface-transformopinterface>TransformOpInterface (<code>TransformOpInterface</code>) <a class=headline-hash href=#transformopinterface-transformopinterface>¶</a></h2><p>This interface is to be implemented by operations that identify transformations to be performed on other operations. The former are referred to as transform IR operations. The latter are referred to as payload IR operations. Such transform IR operations provide a fine-grain control mechanism over how transformations are applied by using and defining transform IR values, referred to as handles, that correspond to sets of operations in the payload IR. Transformations are applied starting from the operations identified by handles, but may affect other operations as well. Further restrictions may be imposed by flows that rely on transform IR operations to control transformations.</p><h3 id=methods-6>Methods: <a class=headline-hash href=#methods-6>¶</a></h3><h4 id=apply><code>apply</code> <a class=headline-hash href=#apply>¶</a></h4><div class=highlight><pre tabindex=0 class=chroma><code class=language-c++ data-lang=c++><span class=line><span class=cl><span class=o>::</span><span class=n>mlir</span><span class=o>::</span><span class=n>DiagnosedSilenceableFailure</span> <span class=n>apply</span><span class=p>(</span><span class=o>::</span><span class=n>mlir</span><span class=o>::</span><span class=n>transform</span><span class=o>::</span><span class=n>TransformRewriter</span> <span class=o>&</span><span class=n>rewriter</span><span class=p>,</span> <span class=o>::</span><span class=n>mlir</span><span class=o>::</span><span class=n>transform</span><span class=o>::</span><span class=n>TransformResults</span> <span class=o>&</span><span class=n>transformResults</span><span class=p>,</span> <span class=o>::</span><span class=n>mlir</span><span class=o>::</span><span class=n>transform</span><span class=o>::</span><span class=n>TransformState</span> <span class=o>&</span><span 
class=n>state</span><span class=p>);</span> </span></span></code></pre></div><p>Applies the transformation represented by the current operation. This accepts as arguments the object that must be populated with results of the current transformation and a transformation state object that can be used for queries, e.g., to obtain the list of operations on which the transformation represented by the current op is targeted. Returns a special status object indicating whether the transformation succeeded or failed, and, if it failed, whether the failure is recoverable or not.</p><p>IR must be created, modified and deleted with the provided rewriter. Implementations are responsible for setting the insertion point of the rewriter to the desired location.</p><p>NOTE: This method <em>must</em> be implemented by the user.</p><h4 id=allowsrepeatedhandleoperands><code>allowsRepeatedHandleOperands</code> <a class=headline-hash href=#allowsrepeatedhandleoperands>¶</a></h4><div class=highlight><pre tabindex=0 class=chroma><code class=language-c++ data-lang=c++><span class=line><span class=cl><span class=kt>bool</span> <span class=nf>allowsRepeatedHandleOperands</span><span class=p>();</span> </span></span></code></pre></div><p>Indicates whether the op instance allows its handle operands to be associated with the same payload operations.</p><p>NOTE: This method <em>must</em> be implemented by the user.</p><h2 id=typeconverterbuilderopinterface-typeconverterbuilderopinterface>TypeConverterBuilderOpInterface (<code>TypeConverterBuilderOpInterface</code>) <a class=headline-hash href=#typeconverterbuilderopinterface-typeconverterbuilderopinterface>¶</a></h2><p>This interface should be implemented by ops that specify a type converter for a dialect conversion, or that populate a type converter with conversions.</p><p>When such ops are intended to be used with “apply_conversion_patterns” or other operations that expect a type converter, a non-default implementation of 
<code>getTypeConverter</code> should be implemented. For use with “cast_and_call” like ops that construct a type converter iteratively, non-default <code>populateTypeMaterializations</code> should be implemented.</p><h3 id=methods-7>Methods: <a class=headline-hash href=#methods-7>¶</a></h3><h4 id=gettypeconverter-1><code>getTypeConverter</code> <a class=headline-hash href=#gettypeconverter-1>¶</a></h4><div class=highlight><pre tabindex=0 class=chroma><code class=language-c++ data-lang=c++><span class=line><span class=cl><span class=n>std</span><span class=o>::</span><span class=n>unique_ptr</span><span class=o><::</span><span class=n>mlir</span><span class=o>::</span><span class=n>TypeConverter</span><span class=o>></span> <span class=n>getTypeConverter</span><span class=p>();</span> </span></span></code></pre></div><p>Return the type converter to be used with a dialect conversion.</p><p>NOTE: This method <em>must</em> be implemented by the user.</p><h4 id=gettypeconvertertype><code>getTypeConverterType</code> <a class=headline-hash href=#gettypeconvertertype>¶</a></h4><div class=highlight><pre tabindex=0 class=chroma><code class=language-c++ data-lang=c++><span class=line><span class=cl><span class=k>static</span> <span class=n>StringRef</span> <span class=nf>getTypeConverterType</span><span class=p>();</span> </span></span></code></pre></div><p>Return the type of type converter that this <code>getTypeConverter</code> returns. 
This function is used for op verification.</p><p>NOTE: This method <em>must</em> be implemented by the user.</p><h4 id=populatetypematerializations><code>populateTypeMaterializations</code> <a class=headline-hash href=#populatetypematerializations>¶</a></h4><div class=highlight><pre tabindex=0 class=chroma><code class=language-c++ data-lang=c++><span class=line><span class=cl><span class=kt>void</span> <span class=nf>populateTypeMaterializations</span><span class=p>(</span><span class=o>::</span><span class=n>mlir</span><span class=o>::</span><span class=n>TypeConverter</span> <span class=o>&</span><span class=n>converter</span><span class=p>);</span> </span></span></code></pre></div><p>Populate the given type converter with source/target materialization functions.</p><p>NOTE: This method <em>must</em> be implemented by the user.</p><div class=edit-meta><br></div><nav class=pagination><a class="nav nav-prev" href=https://mlir.llvm.org/docs/Dialects/TOSA/ title="Tensor Operator Set Architecture (TOSA) Dialect"><i class="fas fa-arrow-left" aria-hidden=true></i> Prev - Tensor Operator Set Architecture (TOSA) Dialect</a> <a class="nav nav-next" href=https://mlir.llvm.org/docs/Interfaces/ title=Interfaces>Next - Interfaces <i class="fas fa-arrow-right" aria-hidden=true></i></a></nav><footer><p class=powered>Powered by <a href=https://gohugo.io>Hugo</a>. Theme by <a href=https://themes.gohugo.io/hugo-theme-techdoc/>TechDoc</a>. 
Designed by <a href=https://github.com/thingsym/hugo-theme-techdoc>Thingsym</a>.</p></footer></main></div></div></body></html>