Following the discussions of the past few days, and after wrestling with various bufferization options for about a week, I was wondering whether you have defined, and possibly also written down, the calling conventions governing bufferization.
As I see it now, two conventions are already used in practice by the various bufferization options when a function outputs a tensor:
- The output tensor becomes an output memref. This seems to be the default option. The implicit assumption seems to be that the callee allocates it.
- The output tensor becomes an input memref. This effect is attained with the --buffer-results-to-out-params option. The implicit assumption is that the caller allocates the buffer.
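To make the two conventions concrete, here is a sketch of how the same tensor-producing function might look after bufferization under each one. This is purely illustrative (the function name and types are hypothetical, and I am eliding the actual computation):

```mlir
// Convention 1 (default): the output tensor becomes a returned memref;
// the implicit assumption is that the callee allocates it.
func @producer() -> memref<4xf32> {
  %0 = memref.alloc() : memref<4xf32>
  // ... fill %0 ...
  return %0 : memref<4xf32>
}

// Convention 2 (--buffer-results-to-out-params): the output tensor
// becomes an input memref; the caller is assumed to allocate it.
func @producer_outparam(%out: memref<4xf32>) {
  // ... fill %out ...
  return
}
```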
However, what I just wrote is not enough, by itself, to allow compiling tensor-based code in a way that excludes memory leaks and memory errors. For convention 1, a minimal set of extra rules seems to be:
- the callee must itself allocate the output memref, rather than simply returning a pointer into an input memref (e.g. through a view).
- the caller must either deallocate the memref itself or pass it on to its own caller.
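A sketch of what the first extra rule forbids, again with hypothetical names: a function that nominally "produces" a buffer but actually returns an alias of one of its inputs, so the caller cannot safely deallocate the result without corrupting the input:

```mlir
// Violates the extra rule for convention 1: the result aliases the
// input buffer instead of being a fresh allocation, so a caller that
// deallocates the result would (incorrectly) free the input.
func @bad_producer(%in: memref<4xf32>) -> memref<4xf32> {
  // ... possibly update %in in place ...
  return %in : memref<4xf32>
}
```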
For convention 2, things seem a bit simpler: the callee must never deallocate memrefs it receives as input.
These extra assumptions allow, for instance, the automatic synthesis of
dealloc operations for memrefs allocated during a function call.
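For instance, under convention 1 plus the extra rules above, a deallocation pass knows that every memref returned by a call is a fresh allocation owned by the caller, so it can place the matching dealloc after the last use (a sketch, reusing the hypothetical @producer from above):

```mlir
func @caller() {
  // The callee is guaranteed to return a fresh allocation ...
  %0 = call @producer() : () -> memref<4xf32>
  // ... use %0 ...
  // ... so this dealloc can be synthesized automatically.
  memref.dealloc %0 : memref<4xf32>
  return
}
```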
With respect to these considerations, my main question is: do you have a written document covering the current state of bufferization development? I could help review it, and possibly add a few words on how synchronous languages handle this (in a completely different way).