
Work in progress: next MLIR News, 16th edition (9/18/2020)

Work in progress: this is a wiki post; everyone is welcome to modify it directly

Please update with work done between 9/7 and 9/21. You can update it along the way (don’t wait for the end date to add entries here: add them as the work lands).

See the previous published edition.

Welcome to the sixteenth issue of the MLIR (bi)Weekly, a newsletter (published on Friday) covering developments in MLIR and related projects in the ecosystem. MLIR (bi)Weekly is brought to you by a collective effort of contributors; we welcome your contributions!

Highlights

MLIR Core

  • A small tutorial page was written to help better understand the C++ classes associated with the IR structure, and how to traverse it (see the example after this list).
  • The global dialect registry will be removed in the next two weeks; please update your code if you haven’t already!
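
For readers new to the structure the tutorial describes: an operation owns a list of regions, a region holds a list of blocks, and a block holds a list of operations. A minimal illustration of that nesting (standard-dialect ops, for illustration only):

```mlir
module {                            // an operation holding a single region
  func @double(%arg0: i32) -> i32 { // also an operation with one region
    // The function region holds one block, which holds operations.
    %0 = addi %arg0, %arg0 : i32
    return %0 : i32
  }
}
```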

Infrastructure

Table-driven Infrastructure

  • TableGen now emits the namespace nesting directly in the generated files, and fully qualifies references to symbols with the entire namespace path. This makes the generated code more robust against ambiguous name resolution.

Shape Dialect

  • We have added buffer allocation support for shape.assuming, which was required for the kernel generator project (TensorFlow).
  • shape.shape_of now lowers via the newly added dynamic_tensor_from_elements, avoiding stack-allocated memrefs while the IR is still at the tensor level (see the first sketch after this list).
  • We added shape.cstr_require to model arbitrary constraints that are expressed outside of the shape dialect.
  • A first pass to lower shape.cstr_* operations to side-effecting assert operations is underway. This will allow us to actually check shape-related constraints in generated code (see the second sketch after this list).
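
To illustrate the shape.shape_of change above, here is a rough before/after sketch; the op spellings follow the shape and standard dialect docs from memory, and details may differ from the actual lowering patterns:

```mlir
// Before: the shape is a first-class value at the tensor level.
%shape = shape.shape_of %arg : tensor<?x?xf32> -> tensor<?xindex>

// After (sketch): each extent is produced by the region of
// dynamic_tensor_from_elements, so no memref is stack-allocated yet.
%rank = rank %arg : tensor<?x?xf32>
%shape = dynamic_tensor_from_elements %rank {
^bb0(%i : index):
  %extent = dim %arg, %i : tensor<?x?xf32>
  yield %extent : index
} : tensor<?xindex>
```

And a sketch of how a shape.cstr_require witness can guard code and later be lowered to a side-effecting assert; the error-message operand and the exact lowering shown here are assumptions for illustration:

```mlir
// A witness modeling a constraint computed outside the shape dialect
// (%pred is an i1 produced elsewhere).
%w = shape.cstr_require %pred, "operands must have compatible shapes"
%res = shape.assuming %w -> (tensor<?xf32>) {
  %t = "some.computation"(%arg0) : (tensor<?xf32>) -> tensor<?xf32>
  shape.assuming_yield %t : tensor<?xf32>
}

// After the work-in-progress lowering, the constraint is actually
// checked at runtime (sketch):
assert %pred, "operands must have compatible shapes"
```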

Optimizations and Code Generation

CPU codegen

  • Now that matvec runs correctly in XLA:CPU, focus has shifted to performance. It is currently better than the hand-written emitter on AVX2; we are still investigating regressions for customers that cannot use AVX.
  • After some discussion, we decided to enable optimizations that assume 32-bit indices by default for the Vector dialect, since this yields the best performance. It is unclear whether vectors really need the full 64-bit index space, and clients can still opt out.
  • Brainstorming and prototyping on sparse tensor lowering are ongoing.

SPIR-V

  • SPIR-V target environment resource limits were enhanced to include more fields, such as subgroup size, max shared memory size, and vendor/product id (see the sketch after this list).
  • Recursive struct support is coming to SPIR-V.
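
For context, the resource limits live in the target environment attribute attached to a module. A sketch of what the enhanced attribute could look like; the new field names here are assumptions based on the SPIR-V dialect docs, not the exact spelling:

```mlir
module attributes {
  spv.target_env = #spv.target_env<
    #spv.vce<v1.5, [Shader], []>,
    {
      max_compute_workgroup_invocations = 128 : i32,
      max_compute_workgroup_size = dense<[128, 128, 64]> : vector<3xi32>,
      // Newly added fields (names illustrative):
      subgroup_size = 32 : i32,
      max_compute_shared_memory_size = 16384 : i32
    }>
} {
}
```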

Other

  • Unification work on Linalg tensors / buffers has started.

Recent Publications

The Hardware Lottery

I think the mlir-npcomp link/URL is now https://github.com/llvm/mlir-npcomp/

Thanks, I updated it!

(it is a wiki, feel free to edit whenever you see something)