Sorry for the delay, I had not seen this when originally posted.
TL;DR is that there is no progress in core and no progress planned in the short-term that I know of.
As far as traditional ML operations are concerned, the topic has also been discussed here: Lowering optional attributes in Linalg StructuredOps to Standard dialect.
The most important issue is the ABI: the C++ and MLIR representations of the data type have to agree consistently in all cases. This is one of the main reasons the MemRefDescriptor exists.
The second big topic is the MLIR attribute <-> MLIR struct <-> C++ struct mapping.
MLIR core does not yet provide any support for this; we are starting to touch IDL territory. The one project I am aware of that does this consistently is this one. Maybe at some point, @whchung and colleagues should present their work at an ODM and consider upstreaming some of their project.
Linalg has a simple rewrite pattern to lower an op to a library call, but it does not support attributes at the moment. For the specific question of the function name, there is generally a name-mangling procedure; for Linalg it is here.
Regarding examples, here is one non-trivial interop example that prints a memref of 2-D vectors using print_memref_vector_4x4xf32, whose definition lives here. You can see how, at runtime, this uses the option -shared-libs=%linalg_test_lib_dir/libmlir_runner_utils%shlibext and does some variation of dlopen: you can just reuse one of the mlir-xxx-runner binaries or build your own.
This is just a simple example to test that the ABI and “C++ calling MLIR calling C++” work.
You will likely want to spell out the name mangling, the C++ shim that connects to your library implementation, and the ABI (e.g. unranked memref): this all depends on where you want to put the switches that inject static information (e.g. fixed size along some dimension, data type, rank, etc.).
There are many ways to evolve all of this and spell it out properly in an extensible fashion that is also more mindful of future data types, etc.
However, this is quite low priority for us in the grander codegen vision: for the few things we will really need, it will be easy to build one-off solutions as small variations on top of the existing mechanisms.