Consider an op:

```mlir
%1 = "foo.transpose"(%0) {permutation = [0,2,1]} : (tensor<3x4x5xf32>) -> tensor<3x5x4xf32>
```

This is a common operation in most “ML dialects”. For more details, see, e.g., XLA’s description of the transpose semantics.

We currently model this in the `xla_hlo` dialect downstream as an `I64ElementsAttr`.

However, I’ve found that to be pretty clunky, and I find myself constantly writing helpers like:

```cpp
auto extract1DVector = [](DenseIntElementsAttr elements) {
  SmallVector<int64_t, 6> ret;
  for (const APInt &element : elements)
    ret.push_back(element.getLimitedValue());
  return ret;
};
auto make1DElementsAttr = [&rewriter](ArrayRef<int64_t> integers) {
  auto type = RankedTensorType::get({static_cast<int64_t>(integers.size())},
                                    rewriter.getIntegerType(64));
  return DenseIntElementsAttr::get(type, integers);
};
```

I’d like to reduce this boilerplate. I was thinking of just adding similar helpers to `DenseIntElementsAttr` proper, but wanted to ask whether folks have thoughts on a better way to model this before I add special-casey stuff to `DenseIntElementsAttr`.

Thoughts?