What is the strategy for tensor->memref conversion? (bufferization)

I’d like to move to a model where each dialect is “self-sufficient” w.r.t. bufferization. I was referring to e.g. D88083 (“[mlir] Add file to implement bufferization for shape ops”), which will be moved to Dialect/Shape/Transforms/Bufferize.cpp.
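For context, bufferization rewrites tensor-typed IR (value semantics) into equivalent memref-typed IR (buffer semantics); each dialect's Bufferize.cpp supplies the conversion patterns for its own ops. A minimal before/after sketch, using generic upstream op names rather than anything specific to the patch above:

```mlir
// Before bufferization: a function operating on tensor values.
func.func @read(%t: tensor<4xf32>, %i: index) -> f32 {
  %v = tensor.extract %t[%i] : tensor<4xf32>
  return %v : f32
}

// After bufferization: the same computation on memrefs (buffers).
func.func @read(%m: memref<4xf32>, %i: index) -> f32 {
  %v = memref.load %m[%i] : memref<4xf32>
  return %v : f32
}
```

Under the per-dialect model, the patterns that rewrite `tensor.extract` would live with the tensor dialect, shape-op patterns with the shape dialect, and so on, rather than in one monolithic conversion pass.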

Interesting idea. I’ll look into it.