Sure. What we are currently using is TableGen source and target DAGs:
def : Pat<(X_Op Tensor:$x, $y), (Y_Op $y, (Y_Const 20), $x)>;
(just written, not tested, possibly doesn’t parse)
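To make the shape of such a pattern concrete, here is a small self-contained Python toy (not MLIR code) modeling what the `Pat` above expresses: patterns are a (source DAG, target DAG) pair, a matcher binds the `$`-variables, and a substitution instantiates the target. The op names (`X_Op`, `Y_Op`, `Y_Const`) are the hypothetical ones from the example, and this sketch ignores types and constraints entirely.

```python
# Toy model of the TableGen Pat above. DAGs are nested tuples whose first
# element is the op name; strings starting with "$" are pattern variables.

def match(pattern, expr, bindings):
    """Match a pattern DAG against an expression DAG, binding $-variables."""
    if isinstance(pattern, str) and pattern.startswith("$"):
        bindings[pattern] = expr
        return True
    if isinstance(pattern, tuple) and isinstance(expr, tuple):
        if len(pattern) != len(expr) or pattern[0] != expr[0]:
            return False
        return all(match(p, e, bindings)
                   for p, e in zip(pattern[1:], expr[1:]))
    return pattern == expr

def substitute(template, bindings):
    """Instantiate the target DAG by substituting bound variables."""
    if isinstance(template, str) and template.startswith("$"):
        return bindings[template]
    if isinstance(template, tuple):
        return tuple(substitute(t, bindings) for t in template)
    return template

def rewrite(source, target, expr):
    """Apply the pattern once at the root, or return expr unchanged."""
    bindings = {}
    if match(source, expr, bindings):
        return substitute(target, bindings)
    return expr

# def : Pat<(X_Op $x, $y), (Y_Op $y, (Y_Const 20), $x)>;
source = ("X_Op", "$x", "$y")
target = ("Y_Op", "$y", ("Y_Const", 20), "$x")

print(rewrite(source, target, ("X_Op", "a", "b")))
# ('Y_Op', 'b', ('Y_Const', 20), 'a')
```

The point is only that the pattern is *data* (two DAGs plus bindings), which is what makes both the TableGen encoding and the MLIR-snippet encoding below plausible frontends for the same machinery.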
But we’ve always wanted to go beyond that. Especially with the work done for dynamic patterns last year, it seems like expressing these rewrites in MLIR itself could be more concise, easier to read (folks may disagree ;-)), dynamically added, etc. For example, we could have something like (no attempt to make this pretty or consider caveats):
^bb0(%za : tensor<?xi32>, %zb : tensor<?xi32>):
%x = x.add %za, %zb : tensor<?xi32>
return %x : tensor<?xi32>

%a = y.blah %za : tensor<?xi32>
%b = y.foo %zb, %a : tensor<?xi32>
return %b : tensor<?xi32>
instead (where we have equality between the source and target pattern under constraints). The rewrite would then be expressed as source/target MLIR “snippets”. There is a lot of room in this design space, though. Think of DRR as a frontend: today it generates C++, and in the future it could perhaps generate dynamic rewrite patterns. The rewrite dialect itself is too low-level for humans to write in general, but we do want rewrites specified at a higher level, and one option is to express them directly in MLIR.
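A minimal sketch of how such source/target snippets could drive a rewrite, again as a self-contained Python toy rather than real MLIR: a function is a list of ops `(result, op_name, operands)`, the source pattern is a single op, and the target is a list of ops whose last result replaces the matched op's result. The op names (`x.add`, `y.blah`, `y.foo`) mirror the hypothetical snippets above; constraints, multi-op source matching, types, and benefit are all ignored.

```python
# Toy "snippet rewrite" driver. Ops are (result, op_name, operands) triples;
# the source pattern names its operands symbolically (%za, %zb), and the
# target snippet may introduce fresh intermediate values.

def apply_rewrite(func, src_op, src_args, tgt):
    """Rewrite every op matching (src_op, src_args) into the tgt op list."""
    out, fresh = [], 0
    for res, op, args in func:
        if op != src_op or len(args) != len(src_args):
            out.append((res, op, args))
            continue
        env = dict(zip(src_args, args))  # bind %za, %zb to actual operands
        env[tgt[-1][0]] = res            # target's final value replaces the result
        for t_res, t_op, t_args in tgt:
            if t_res not in env:         # fresh intermediate (e.g. %a)
                fresh += 1
                env[t_res] = f"%new{fresh}"
            out.append((env[t_res], t_op, [env[a] for a in t_args]))
    return out

# source: %x = x.add %za, %zb   target: %a = y.blah %za ; %b = y.foo %zb, %a
print(apply_rewrite(
    [("%x", "x.add", ["%0", "%1"])],
    "x.add", ["%za", "%zb"],
    [("%a", "y.blah", ["%za"]), ("%b", "y.foo", ["%zb", "%a"])]))
# [('%new1', 'y.blah', ['%0']), ('%x', 'y.foo', ['%1', '%new1'])]
```

Because the patterns here are plain data structures rather than generated C++, they could in principle be loaded or constructed at runtime, which is the "dynamically added" appeal mentioned above.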
There are many open questions (e.g., syntax obviously, type/shape polymorphism, constraints, benefit computation, etc.). I think the current DRR, LLVM’s GlobalISel, and other pattern-description languages would be useful to look at.