Connecting op verifiers to analyses

I have a project where the following situation happens repeatedly. There is a rather expensive (e.g., polyhedral model) yet frequently used analysis that expects the ops it processes to have passed the verifier. At the same time, the verification logic itself largely overlaps with the analysis and is similarly expensive. I’d like to reuse the results of the expensive computation between the analysis and the verifier.

It looks like most of what I need can be achieved by giving the op verification hook access to the analysis manager. The verifier would then get the analysis from the manager, and subsequent passes would be able to reuse it. If passes preserve the analysis, the verifier can in turn reuse it again. This looks like mostly plumbing work, and is in theory sufficient for my use case.

A potentially more interesting scenario is a failable analysis, i.e., a situation where an analysis cannot be performed, which indicates a problem with the IR and should be reported by the verifier. It’s unclear to me whether we would like to support such things natively in the infrastructure or not.

Thoughts? Ideas?


I think this is an abuse of the verifier. The individual verify() hooks should only check simple (ideally local) properties that must always hold for operations. More complicated analyses (e.g., I was just fighting with IsolatedFromAbove checking) turn into N^2 compile-time messes; recall that the verifier is invoked frequently.

Also, recall that the verifier is not a pass. It is used outside the pass manager in many places, e.g. by the .mlir parser.

I don’t know exactly what you’re looking to achieve with this analysis, but I’d encourage you to find a way to encode the information you need directly in the IR itself somehow. This allows you to persist it, makes it easy to test the updating logic, etc.


You are right that I am checking non-local properties, although “local” is a flexible concept given MLIR’s nested nature. One could claim that checking relations between operations in the verifier of their common parent, e.g., a function, is “local” with respect to that function.

If this doesn’t belong to the verifier, where does it belong?

This is exactly why I want to cache the results rather than recompute them every time the verifier is invoked. Only a small subset of passes will affect them in my case.

There are actually very few places where the (operation) verifier is called: the pass manager and IR construction (in-tree we have two cases, the parser and translation from LLVM IR, plus one per example in Toy and a couple of tests that I don’t count). Any project constructing MLIR presumably calls the verifier once it has finished construction, but I don’t think there are other use cases. If it runs passes after constructing the IR, which it most likely does, it can just as well create the pass manager before running the verifier. This wouldn’t be mandatory: verifiers that choose to support it would also have to handle the case of a missing analysis manager and create the analyses themselves.

As for the use in the pass manager, the verifier is indeed not a pass, but the call to mlir::verify is just next to the use of the analysis manager here (llvm-project/Pass.cpp at 896f9bc350eba0baf17f1ceae7383d88f0ce2a85 · llvm/llvm-project · GitHub); it’s really just a matter of passing it through.

You can define your own pass. This is not atypical for dialects that have multiple phases anyway. For example, the FIRRTL dialect in CIRCT has several different internal phases “High, Mid, Low” which impose more invariants the further you lower. This means that the invariants the Low phase imposes (e.g. that all widths are inferred) are not checked by the verifier. We have a specific pass to do that.

SIL in Swift has the same thing going on, because some invariants are non-local (e.g. that all values are defined before use). Its “definitive initialization” pass does the dataflow analysis to enforce this and rejects malformed IR with errors. Downstream passes assume this has been done. There are several other examples of this.

Right, I’m suggesting you cache them in the IR. You can do that in many ways, including adding a new node with a big attribute dictionary on it.

 alex.metadataholder {attributes that encode expensive and interesting things} {
   affine.for %arg0 ... {
     affine.for %arg1 ... {
       ...
     }
   }
 }
or whatever.

I feel like you’re looking at this in an empirical way (it is true that the verifier is called in few places), which ignores the design intent that went into this. The verify() hook is a hook like fold(): it is intended to be usable by a number of different clients; it isn’t an implementation detail of a particular pass, just like fold() isn’t an implementation detail of canonicalize.


I have half a dozen analyses/verifications that depend on each other, some of which are invalidated by some, but not all, of the two dozen passes in the pipeline. Unlike FIRRTL and SIL, all verifications must succeed every time; there are no requirements that get progressively imposed. I suppose I could put a “verification pass” before and after every “real” pass, but that sounds like I’d be replacing all of the verification with it, because having two different mechanisms for verification increases test complexity and maintenance load. I would have hoped for better support from the infrastructure…

Are you suggesting the traditional polyhedral form? :wink:

Without going into the discussion on whether practical uses should override design intent, what do you see as use scenarios for this hook other than constructing IR and checking pass sanity?

A key example that is important to me is “graph building APIs” (both for hardware and ML).

Consider something like a PyTorch/NumPy tracing implementation that builds a graph as user-defined code (e.g., in Python) is executed. Each operation you stage creates an op. Being able to call the verifier on it as you go is incredibly valuable, because simple shape errors can be immediately reported to the user.

This isn’t a theoretical example, we have this exact use case on the hardware side, but instead of shape errors they are more complicated type constraints.


I wonder if there is a middle-of-the-road path we can take here, analogous to the verifySymbolUses method on SymbolUserOpInterface, which needs some expensive-to-compute state (a collection of nested symbol tables) in order to run its verification hook.

I don’t see anything in what I’m proposing that would prevent this use case. You can still run the verifier manually wherever you like. If it is given access to a (non-null) analysis manager, it can use it; otherwise it should compute whatever it needs.

The bug (50701 – MLIR PassManager verifies too much) you have just filed curiously aligns with my line of thought, especially this part:

Passes are allowed to indicate that they don’t change the IR by signaling that they “preserve all analyses”. When this happens, we shouldn’t run the verifier.

This sounds like the verifier should just be an analysis.