I recently hit an issue with `tensor_to_memref` that triggered a lot of discussion internally. It might be worth recording the discussion here for everyone's visibility.

The canonicalization of `LoadOp` here folds `(LoadOp (TensorToMemrefOp $value), $indices)` away to `(TensorExtractOp $value, $indices)`. Specifically, the following sequence of instructions

```
%1 = tensor_to_memref %0 : memref<f32>
store %val, %1[] : memref<f32>
%get_val = load %1[] : memref<f32>
```

will get canonicalized away to

```
%1 = tensor_to_memref %0 : memref<f32>
store %val, %1[] : memref<f32>
%get_val = tensor.extract %0[] : tensor<f32>
```

`%get_val` gets the wrong value, i.e. you would expect it to get `%val` and it doesn't.
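The divergence can be sketched with a toy model in Python (illustrative only; `tensor_to_memref` here is a hand-written stand-in, not the MLIR op):

```python
# Toy model of why the fold is unsound once a write to the buffer intervenes.

def tensor_to_memref(tensor):
    # Materialize a mutable buffer initialized from the (immutable) tensor.
    return list(tensor)

tensor = [0.0]
buf = tensor_to_memref(tensor)   # %1 = tensor_to_memref %0
buf[0] = 42.0                    # store %val, %1[]
loaded = buf[0]                  # load %1[] -> observes the store

# The canonicalization rewrites the load into an extract on the
# *original* tensor, which never saw the store:
folded = tensor[0]               # tensor.extract %0[]

print(loaded, folded)            # the two disagree: 42.0 vs 0.0
```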

One way to interpret this is that the canonicalization is wrong: it undoes bufferization, and the pattern does not consider intermediate operations that might update the buffer, leading to incorrect behavior. Therefore it needs to be removed (patch).

An alternative way to reason about it is that the input IR exhibits undefined behavior. The semantics of `tensor_to_memref` are that it is illegal to write to the `memref` that is the result of a `tensor_to_memref`. The instruction is meant only for the type legalizations necessary when going from tensors to buffers, and any use of it outside of that is inherently undefined behavior.

- The caveat here, though, is that there is no way today to enforce that a `memref` is read-only. So for a given `memref` it is impossible to reason locally about whether it came from a `tensor_to_memref` and therefore cannot be written to.
- There is not much difference between a read-only `memref` and a `tensor`, so there should not be a `tensor_to_memref` instruction at all in the first place. This would also imply that any bufferization pass (or sequence of passes) is not expected to convert all tensor operations to operations on `memref`s, and that all backends (LLVM and SPIR-V) must be able to handle tensors that are not converted during bufferization. This typically means that backends need to handle `std.constant` natively.
  - The LLVM backend does not handle any tensor operations. Instead there is a pass that converts `std.constant`s to global `memref` objects, which can be used to lower `std.constant`s to LLVM.
  - The SPIR-V backend natively handles `std.constant`s.
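For contrast, the missing read-only enforcement from the first bullet can be sketched as a toy wrapper (pure Python, illustrative only; no such mechanism exists for `memref` today):

```python
# Toy sketch of a buffer view that permits reads but rejects writes --
# the behavior a memref produced by tensor_to_memref would need to have.

class ReadOnlyBuffer:
    def __init__(self, data):
        self._data = list(data)

    def __getitem__(self, index):
        return self._data[index]

    def __setitem__(self, index, value):
        raise TypeError("cannot write to a buffer produced by tensor_to_memref")

buf = ReadOnlyBuffer([1.0, 2.0])
assert buf[0] == 1.0             # reads are fine
try:
    buf[1] = 3.0                 # writes are rejected
except TypeError as e:
    print(e)
```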
