The 0-D vector vs scalar case has been showing up in a bunch of corner cases recently.
The latest is that `linalg.dot(tensor<?xf32>, tensor<?xf32>) -> tensor<f32>` cannot vectorize on tensors due to mismatches between the scalar and vector forms of the computation.
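For concreteness, a rough sketch of the kind of op that hits this (the exact op spelling and syntax may differ across MLIR versions):

```mlir
// A dot product over dynamically-sized 1-D tensors, accumulating into a
// rank-0 tensor. Vectorizing the body wants a vector-typed accumulator,
// but the rank-0 result only offers a scalar element type.
func.func @dot(%a: tensor<?xf32>, %b: tensor<?xf32>,
               %acc: tensor<f32>) -> tensor<f32> {
  %0 = linalg.dot ins(%a, %b : tensor<?xf32>, tensor<?xf32>)
                  outs(%acc : tensor<f32>) -> tensor<f32>
  return %0 : tensor<f32>
}
```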
There are a bunch of tradeoffs involved that I don’t think are worth unpacking just yet if there is a simple consensus that can emerge.
TL;DR, I am contemplating allowing `vector<T>` for T of type float or integer (and maybe `index` once the DataLayout representation for `index` is a little more fleshed out, but that's beside the point).
`vector<T>` would canonicalize to `T` where it makes sense.
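One folding this could enable, assuming 0-D vectors were legal (the ops and syntax here are illustrative, not an existing canonicalization):

```mlir
// A scalar broadcast into a 0-D vector followed by an extract is a
// no-op round trip; canonicalization could rewrite uses of %e to %s.
%v = vector.broadcast %s : f32 to vector<f32>
%e = vector.extractelement %v[] : vector<f32>
```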
Before/at the point of lowering to the LLVM dialect, all `vector<T>` would have become `T`.
Have people thought about this corner case and formed opinions on the topic?