LLVM Discussion Forums

Functions that can read vectors of variable size?

Hello, I’d like to have a single external function that reads all of my vectors, even though inside my code all memref sizes are fixed. The code I’d like to write is:

func @read_vec(i64,i64) -> (memref<?x?xf64>)

func @main() {
%d0 = constant 10 : i64
%d1 = constant 20 : i64
%A = call @read_vec (%d0,%d1) : (i64,i64) -> (memref<10x20xf64>)

I thought this was covered by shape inference, but the fragment is rejected by mlir-opt. The error happens at load time, so it’s not a missing pass.

How can I do what I want?

You can’t. Types are expected to match exactly; it’s an IR, not a programming language, after all. You can, however, cast a dynamically-sized memref to a statically-sized one:

%0 = call @read_vec(...) : (i64, i64) -> memref<?x?xf64>
%1 = memref_cast %0 : memref<?x?xf64> to memref<10x20xf64>
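Putting this together with the fragment from the question, the whole example could look like the sketch below (using the std-dialect syntax from this thread; the `@read_vec` name and the 10x20 shape come from the original post):

```mlir
// External function declared with a dynamically-shaped result.
func @read_vec(i64, i64) -> memref<?x?xf64>

func @main() {
  %d0 = constant 10 : i64
  %d1 = constant 20 : i64
  // Call with the exact declared type, then cast to the static shape.
  %A = call @read_vec(%d0, %d1) : (i64, i64) -> memref<?x?xf64>
  %B = memref_cast %A : memref<?x?xf64> to memref<10x20xf64>
  return
}
```

The key point is that the std.call result type matches the callee signature exactly, and the static shape is only introduced afterwards by the explicit cast.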

Shape inference is a work in progress for tensor operations in the ML-graph world; it does not relax the core IR typing rules.


You could do this with your own dialect call operation (TensorFlow supports this, for example); you can’t use the std.call operation because it enforces strict conformance between the function signature and the std.call argument/return types.
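For illustration only, such a custom call op might look like the sketch below. The `mydialect.call` name is made up, not a real op; the point is that a dialect-defined op can have its own verifier that permits refining a dynamic result shape to a static one, which std.call’s verifier forbids:

```mlir
// Hypothetical dialect call: the op's own verifier decides that
// memref<10x20xf64> is a legal refinement of the callee's
// declared memref<?x?xf64> result.
%A = "mydialect.call"(%d0, %d1) {callee = @read_vec}
     : (i64, i64) -> memref<10x20xf64>
```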
