The thread "Is the TCP 'matmul' op marked NoSideEffect?" raised some big questions about the meaning of NoSideEffect. I want to give a very concrete example of where we miscompile today due to this lack of precision about what NoSideEffect means.

If --loop-invariant-code-motion is run on the following MLIR, both the `divi_unsigned` and the `cmpi` get hoisted out of their loops, and both hoists are miscompiles. When %trip_count is zero, the transformed program executes undefined behavior / an error, whereas the original program does not.

```
func @maybe_divide_by_zero(%lhs: i32, %rhs: i32, %trip_count: index) {
  %ci0 = constant 0 : index
  %ci1 = constant 1 : index
  loop.for %_ = %ci0 to %trip_count step %ci1 {
    // Could divide by zero!
    %div = divi_unsigned %lhs, %rhs : i32
  }
  return
}
func @maybe_tensor_shape_mismatch(%lhs: tensor<?xi32>, %rhs: tensor<?xi32>, %trip_count: index) {
  %ci0 = constant 0 : index
  %ci1 = constant 1 : index
  loop.for %_ = %ci0 to %trip_count step %ci1 {
    // What if tensor sizes mismatch? Error or UB.
    %cmp = cmpi "eq", %lhs, %rhs : tensor<?xi32>
  }
  return
}
```
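For concreteness, after the pass the first function looks roughly like the following (hand-written to illustrate the problem; the pass's exact output may differ cosmetically):

```
func @maybe_divide_by_zero(%lhs: i32, %rhs: i32, %trip_count: index) {
  %ci0 = constant 0 : index
  %ci1 = constant 1 : index
  // Hoisted: this now executes unconditionally, even when %trip_count is 0
  // and the original loop body would never have run.
  %div = divi_unsigned %lhs, %rhs : i32
  loop.for %_ = %ci0 to %trip_count step %ci1 {
  }
  return
}
```

The hoist is only sound if executing `divi_unsigned` on a path where it previously did not execute is harmless, which is exactly the speculatability question NoSideEffect does not answer.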

How should we model this to allow doing LICM? Do we need a "speculatable" trait? Do we need to remove NoSideEffect from `divi_unsigned` and `cmpi`? Can we model this with the effects system?
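One possible middle ground, sketched here purely hypothetically: a pass that knows an op is *not* speculatable could still hoist it under a guard proving the loop body executes at least once, so the op never runs on a path where it previously did not:

```
// Hypothetical guarded hoist (not what LICM does today): the division is
// moved out of the loop but placed under a trip-count check, preserving
// the original program's behavior when %trip_count is 0.
%nonempty = cmpi "slt", %ci0, %trip_count : index
loop.if %nonempty {
  %div = divi_unsigned %lhs, %rhs : i32
  loop.for %_ = %ci0 to %trip_count step %ci1 {
  }
}
```

Whether that belongs in LICM or a separate transformation, and whether the "may trap / may be speculated" distinction should be a trait or part of the effects system, is exactly what I'd like to discuss.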

Also, the basic arithmetic std ops (addi, etc.) allow tensors as operands, but we don't define what happens if the shapes mismatch dynamically (UB or an error?). That's probably a discussion for another day and ties into the TCP discussion of modeling errors in the presence of dynamically shaped tensors.
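To make that concrete, this verifies today, yet nothing in the op's documented semantics says what happens if the two runtime sizes differ:

```
// Both operands have type tensor<?xi32>, so the verifier is satisfied,
// but the dynamic sizes of %a and %b may disagree at runtime.
func @dynamic_addi(%a: tensor<?xi32>, %b: tensor<?xi32>) -> tensor<?xi32> {
  %sum = addi %a, %b : tensor<?xi32>
  return %sum : tensor<?xi32>
}
```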