[RFC] Split the `memref` dialect from `std`

This is part of splitting the std dialect.

Dialect Scope and Goal

The memref dialect is intended to hold bufferization/memref-specific operations.
The following ops will be moved to MemRefOps.td:

std.alloc → memref.alloc
std.alloca → memref.alloca
std.dealloc → memref.dealloc
std.get_global_memref → memref.get_global
std.global_memref → memref.init_global
std.load → memref.load
std.memref_cast → memref.cast
std.memref_reinterpret_cast → memref.reinterpret_cast
std.memref_reshape → memref.reshape
std.prefetch → memref.prefetch
std.store → memref.store
std.subview → memref.subview
std.transpose → memref.transpose
std.view → memref.view

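For illustration, here is a minimal before/after sketch of what the rename means at the IR level (hypothetical function @example; the exact custom assembly for the old std spellings may differ slightly):

```mlir
// Before the split: the ops live in std and print without a dialect prefix.
func @example(%size: index) {
  %buf = alloc(%size) : memref<?xf32>
  %c0 = constant 0 : index
  %v = load %buf[%c0] : memref<?xf32>
  store %v, %buf[%c0] : memref<?xf32>
  dealloc %buf : memref<?xf32>
  return
}

// After the split: the same ops, spelled in the memref dialect.
func @example(%size: index) {
  %buf = memref.alloc(%size) : memref<?xf32>
  %c0 = constant 0 : index
  %v = memref.load %buf[%c0] : memref<?xf32>
  memref.store %v, %buf[%c0] : memref<?xf32>
  memref.dealloc %buf : memref<?xf32>
  return
}
```
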
Further additions to the memref dialect could be:
memref.clone
std.tensor_to_memref → memref.bufferize_cast

Notable non-inclusions (at least initially)
Already mentioned in [RFC] split the `tensor` dialect from `std`
std.dim → split to memref/tensor.dim
std.rank → split to memref/tensor.rank
std.tensor_load and std.tensor_store need a bridging dialect

Development process
The split will be done in two steps:
During the initial split we create a memref dialect and rename/migrate the corresponding ops from std.
After the initial split, we expect to revisit the ops listed in “Notable non-inclusions”.


LGTM! Let’s do this!

Would suggest just memref.global for std.global_memref (rather than memref.init_global).
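As a sketch of what that suggested spelling might look like, paired with the already-proposed memref.get_global (the symbol @const_vec is made up for illustration):

```mlir
// Hypothetical global buffer declaration under the suggested name.
memref.global "private" constant @const_vec : memref<3xf32> = dense<[1.0, 2.0, 3.0]>

func @use_global() -> memref<3xf32> {
  // Fetch a memref referring to the global.
  %g = memref.get_global @const_vec : memref<3xf32>
  return %g : memref<3xf32>
}
```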

You will probably run into the issue I mentioned here: [RFC] split the `tensor` dialect from `std` - #16 by _sean_silva

Nice, thanks for tackling this!

I would suggest formulating the scope of the dialect without mentioning bufferization; there are flows that start at the memref level, and it doesn’t make sense to require them to reason about bufferization if they need new ops in this dialect.

Thanks for doing this! LGTM as well.
(and +1 for Alex and Sean’s suggestions in general)

memref.bufferize_cast is a weird name. Other possibilities:
memref.cast, memref.buffer_cast, …

I don’t see a tensor.dim for now. It seems to me that there are three alternative solutions:
1. Add a tensor.dim; is that on the roadmap?
2. Use shape.GetShape + shape.GetExtent in place of tensor.dim.
3. Just use memref.dim (it’s counterintuitive, but this op can still accept tensors for now…).

Which of these is the expected approach in the longer term?

I think there is a desire for #1, but it is a moderately invasive change, which is why we persist with #3. I’d be -1 on #2: it should be possible to express these forms without involving the shape dialect (and shape will often lower to a dim).

Quick update: https://reviews.llvm.org/D105165 splits memref.dim into two ops (#1) and is out for review.
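Sketched in IR (hypothetical values %m and %t standing in for a memref and a tensor), the change amounts to:

```mlir
// Before D105165: a single dim op (memref.dim) accepts both memrefs and tensors.
%c0 = constant 0 : index
%d0 = memref.dim %m, %c0 : memref<?xf32>
%d1 = memref.dim %t, %c0 : tensor<?xf32>

// After the split (option #1): each dialect owns its own dim op.
%d2 = memref.dim %m, %c0 : memref<?xf32>
%d3 = tensor.dim %t, %c0 : tensor<?xf32>
```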


Thank you @matthias-springer for doing the split!

Thank you!!!