[RFC] Extend tensor::FromElementsOp to N-d


Background

The FromElementsOp operation currently allows creating only 1-D tensors from a range of same-type arguments.

tensor.from_elements i_1, ..., i_N : tensor<Nxindex>

Unfortunately, this leads to a fairly common pattern where a newly created tensor is immediately reshaped into the required type.

%tensor = tensor.from_elements %c100500 : tensor<1xi32>
%reshaped = linalg.tensor_collapse_shape %tensor []
    : tensor<1xi32> into tensor<i32>

The reshaping operation here belongs to the linalg dialect, but it can also be done with tensor.reshape and, probably, some other operations.

When lowered to LLVM, the reshape operation unnecessarily constructs an additional memref descriptor. This can be avoided if tensor.from_elements is extended to support any statically-shaped output type.

Proposal

Update FromElementsOp to allow the result tensor type to have any rank. The verifier will check that the number of provided elements equals the element count of the result type.

tensor.from_elements i_1, ..., i_N
  : tensor<d_1 x ... x d_M x index>

so that N == d_1 * ... * d_M.
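For example (a hedged sketch of the proposed syntax; %a through %f are hypothetical index values, assumed to be listed in row-major order):

// Hypothetical values %a ... %f fill a statically-shaped 2x3 result directly.
%t = tensor.from_elements %a, %b, %c, %d, %e, %f
    : tensor<2x3xindex>

Here N = 6 and d_1 * d_2 = 2 * 3 = 6, so the verifier accepts it, and no separate reshape is needed.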

This seems like a limitation in the lowering or the optimization pipeline to me: it isn’t clear to me in what way what you’re describing is fundamental rather than a pipeline issue.
I’m wary that we would need to create a “fused” version of every possible code pattern to work around limitations of the optimizers; such a strategy wouldn’t be scalable.

Other than that, the extension you’re proposing seems reasonable to me!


I’m guessing there is a fixed layout implicit here too?

And then you’d want to be able to write

%tensor = tensor.from_elements %c100500 : tensor<i32>

?

Came here to say basically the same thing as Mehdi. Extending tensor.from_elements seems reasonable, but the fact that the backends can’t handle from_elements + reshape suggests a problem at that level to me 🙂


There is no real reason why tensor.from_elements can only do 1-D. It is mostly an artifact of what was needed at the time, with nothing more implemented. If there are uses where small n-D tensors are needed, creating the raveled (flattened) tensor and then reshaping it seems unnecessarily clunky. So I am for extending it.

Also, it would align it better with tensor.generate, which is n-dimensional.
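For reference, a rough sketch of tensor.generate producing an n-dimensional result (the shape and body here are illustrative, not taken from the thread):

// One index operand per dynamic extent; the body computes each element.
%rows = arith.constant 4 : index
%g = tensor.generate %rows {
^bb0(%i: index, %j: index):
  %v = arith.addi %i, %j : index
  tensor.yield %v : index
} : tensor<?x5xindex>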

I agree with @herhut. I don’t see a fundamental reason why the op is representationally limited in this way.

Mehdi’s point does resonate with me, so I’d be a bit wary of keying off of this to do special optimization patterns: if it falls out of the natural lowering then great (I think it will), but I don’t think we should go out of our way to use this as a crutch for inefficient handling of reshaping.

I suspect that there are larger design issues in code that relies heavily on saving a memref descriptor for a tensor<i32> generated this way. It might benefit from a “detensorization” type of transformation.


Yes! And not only that.

I thought that tensors didn’t really have layouts. But yes, after bufferization it will become a memref with an identity layout.
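A minimal sketch of what I mean for the 0-D case (hedged; this is roughly the shape of the bufferized code, not verbatim output):

// 0-D memref with the default identity layout; the single element is stored directly.
%m = memref.alloc() : memref<i32>
memref.store %c100500, %m[] : memref<i32>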

Actually, the pipelines can deal with this pattern easily. One of the examples is the ExtractFromReshapeFromElements pattern in Linalg/Transforms/Detensorize.cpp. I just think that it does not make a lot of sense to restrict FromElementsOp to 1D and, as @_sean_silva wrote, the optimization “falls out of the natural lowering”.

Implemented in ⚙ D115821 [mlir] Extend `tensor.from_elements` to support N-D case.