
MLIR News, 1st edition (2/21/2020)

Welcome to the first issue of MLIR (bi)Weekly, a newsletter (published on Fridays) covering developments in MLIR and related projects in the ecosystem. MLIR (bi)Weekly is brought to you by a collective effort of contributors; we welcome your contributions!

Highlights

  • The mlir-vulkan-runner has landed! This makes it possible to execute MLIR snippets on actual Vulkan devices!
    It also gives us a way to perform integration tests for the SPIR-V/Vulkan CodeGen path from higher-level abstractions, just like the other runners.

  • We have a new way of customizing the syntax of operations directly in ODS; the Toy tutorial was updated to discuss this new feature (a short sketch follows).
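
To give a flavor of what this enables, here is a rough before/after for an op from the Toy tutorial. The custom spelling below is only an approximation; the authoritative syntax is whatever the tutorial's ODS declarations define.

```mlir
// Generic form, available for any registered operation:
%t = "toy.transpose"(%arg0) : (tensor<2x3xf64>) -> tensor<3x2xf64>

// Custom form, declared directly in the op's ODS definition
// (spelling approximate; see the updated Toy tutorial):
%t = toy.transpose(%arg0 : tensor<2x3xf64>) to tensor<3x2xf64>
```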

MLIR Core

Infrastructure

Table-driven Infrastructure

Code Generation

  • (Standard) Implemented basic optimizations for indexCast.
  • (LLVM/GPU) Modified the default calling convention for MemRefs to avoid stack exhaustion and performance issues on GPUs.
  • (GPU) Landed the initial attribute-based mapping from parallel loops to GPU kernels.
  • (Loops) Implemented simple fusion of parallel loops (see the sketch after this list).
  • (LLVM) Cleanups in LLVM IR dialect and target, intrinsic generator simplification pending.
  • (Vector) Implemented vector reduction operations; work is in progress on progressively lowering vector contractions through them (illustrated below).
  • (Vector) Added support for progressive lowering of fused multiply-adds on vectors down to LLVM intrinsics.
  • (Linalg) Implemented fusion of generic Linalg operations on tensors.
  • (Linalg) Added support for fusing three or more Linalg ops.
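
To make the parallel-loop fusion item above concrete, here is a minimal sketch in the loop dialect of that time. The function name, value names, and memref shapes are made-up assumptions for illustration; they are not taken from the pass or its tests.

```mlir
// Two loop.parallel ops over the same iteration space, where the second
// only reads what the first wrote at the same index -- the fusable case.
func @parallel_fusion_sketch(%A: memref<?xf32>, %B: memref<?xf32>,
                             %C: memref<?xf32>, %n: index) {
  %c0 = constant 0 : index
  %c1 = constant 1 : index
  loop.parallel (%i) = (%c0) to (%n) step (%c1) {
    %a = load %A[%i] : memref<?xf32>
    %s = addf %a, %a : f32
    store %s, %B[%i] : memref<?xf32>
  }
  loop.parallel (%j) = (%c0) to (%n) step (%c1) {
    %b = load %B[%j] : memref<?xf32>
    %p = mulf %b, %b : f32
    store %p, %C[%j] : memref<?xf32>
  }
  // After fusion, both bodies execute in a single loop.parallel over the same range.
  return
}
```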
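
And a small sketch of the two vector items above, using the vector dialect syntax of that period (the reduction syntax has since evolved); the function and value names are illustrative assumptions.

```mlir
func @vector_sketch(%v: vector<16xf32>,
                    %a: vector<8xf32>, %b: vector<8xf32>, %c: vector<8xf32>)
    -> (f32, vector<8xf32>) {
  // Horizontal reduction of a 1-D vector into a scalar.
  %sum = vector.reduction "add", %v : vector<16xf32> into f32
  // Fused multiply-add on vectors; progressively lowered to LLVM's fmuladd intrinsic.
  %r = vector.fma %a, %b, %c : vector<8xf32>
  return %sum, %r : f32, vector<8xf32>
}
```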

SPIR-V

  • The mlir-vulkan-runner has landed! Now we have a way to perform integration tests from higher-level abstractions, just like the other runners.
  • Added resource limits to SPIR-V target environment. This will be used in the future for guiding CodeGen.
  • Introduced spv.func for better modelling of functions (see the sketch after this list).
  • Fleshed out lots of spv.GroupNonUniform* ops in the SPIR-V dialect.
  • Introduced a pattern to convert Linalg reductions to spv.GroupNonUniform* ops, which require special capabilities to be available.
  • Introduced a dialect-specific attribute, #spv.target_env, for expressing the target environment.
  • Progress on improving the composability and reusability of SPIR-V lowering patterns/passes; a few patches have landed, with more work still to do.
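
As a small illustration of spv.func, here is a sketch of a SPIR-V module; the syntax is approximate for the dialect as it stood at the time, and the #spv.target_env attribute mentioned above (version, capabilities, extensions, and now resource limits) would normally be attached as an attribute on a surrounding op rather than spelled out here.

```mlir
spv.module "Logical" "GLSL450" {
  // spv.func carries a SPIR-V function-control attribute ("None" below),
  // which the generic func op could not model.
  spv.func @double(%arg0 : i32) -> i32 "None" {
    %0 = spv.IAdd %arg0, %arg0 : i32
    spv.ReturnValue %0 : i32
  }
}
```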

Other

In the Ecosystem

PlaidML was added to the list of users of MLIR; feel free to send a pull request to add your project as well!

Flang

IREE

  • [TF Support] Prototyping of TensorList support is in progress: added a tf_tensorlist dialect and its companion IREE tensorlist dialect, and introduced a VM custom module for it.
  • [TF Support] Prototyping of string support is in progress: added a dialect for TF strings.
  • [GPU CodeGen] Landed the pipeline that goes from HLO to Linalg to Loops to GPU to SPIR-V, with correctness tests for pointwise ops; working on expanding op coverage at the different lowering steps.
  • [HAL CPU Backend] Working on bringing up an LLVM JIT as an IREE HAL backend.
  • [HAL Interpreter] Working on bringing up VMLA as the new HLO-level interpreter.

TensorFlow

The work is divided across three areas:

  1. TensorFlow to TensorFlow Lite converter: there has been significant progress recently, and the tool is getting close to release. Interesting work on quantization is happening, and more complete documentation is coming.
  2. TF/XLA bridge: with TPU as the first target, the sequence of passes is almost complete. The tests are a good way to exercise and play with individual passes.
  3. General infrastructure development to prepare what comes “after” GraphDef as an optimization and runtime format. The recent layout optimization pass is an example of the direction: using MLIR for the core of TensorFlow rewrites.

Recent publications

A functional pattern-based language in MLIR

AccML 2020: Accelerated Machine Learning 2020
Martin Lücke, Michel Steuwer, and Aaron Smith
https://michel.steuwer.info/publications/2020/AccML/


Thanks for sharing this. :+1:

Work in progress; we plan to publish on 2/21. Community contributions are welcome :slight_smile:


Hi!

Not sure whether this is applicable here (since it’s from last month), but just in case related publications are on-topic, here’s a recent one:

A functional pattern-based language in MLIR
AccML 2020: Accelerated Machine Learning 2020
Martin Lücke, Michel Steuwer, and Aaron Smith
https://michel.steuwer.info/publications/2020/AccML/

Thanks for putting this together!!!