This is part of splitting the `std` dialect.
`tensor` Dialect Scope and Goal

The `tensor` dialect is intended to hold core tensor creation and manipulation ops that are not strongly associated with any particular other dialect or domain abstraction. The primary smoke test is whether an op makes sense for any tensor element type.
We leave it to other dialects to hold the vast swath of possible computations one might want to do on a tensor, such as:

- TOSA ops
- Linalg on tensors
- ElementwiseMappable ops like `addf` that support tensors
- TensorFlow's various dialects
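As a concrete illustration, ElementwiseMappable ops such as `addf` already accept tensor operands and apply elementwise; a minimal sketch:

```mlir
// `addf` is ElementwiseMappable: the same op works on scalars and on
// tensors, applying elementwise in the tensor case.
func @add_examples(%a: f32, %b: f32,
                   %ta: tensor<4xf32>, %tb: tensor<4xf32>) -> (f32, tensor<4xf32>) {
  %s = addf %a, %b : f32
  %t = addf %ta, %tb : tensor<4xf32>
  return %s, %t : f32, tensor<4xf32>
}
```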
Examples of potential future additions to the `tensor` dialect could be:

- `tensor.pad` to pad a tensor to a given shape
- `tensor.reshape` to reshape a tensor
- `tensor.undef` for creating a tensor with undefined contents but a given shape (with appropriate discussion of course; `undef` is tricky)
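Purely for illustration (none of these ops exist yet; the names come from the list above but the syntax here is entirely hypothetical op fragments, not a complete function):

```mlir
// Hypothetical syntax for possible future ops -- none of these are defined yet.
%p = tensor.pad %t to tensor<4x8xf32>       // pad %t out to a 4x8 shape
%r = tensor.reshape %t : tensor<2x3xf32> to tensor<6xf32>
%u = tensor.undef : tensor<4x?xf32>         // contents undefined
```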
This is intended to follow the success of the `vector` dialect in having a "home" for ops uniquely related to the `vector` type.
Deliberately constrained scope
There is one key difference between `vector` and `tensor`, though. The `vector` type has a much more constrained set of use cases due to its careful definition as only representing vectors of machine data types in registers. The `tensor` type is (for better or for worse) used to represent all kinds of things, and supports an open-ended set of element types. Examples:

- representing large, dense aggregations of primitive types, suitable for high-performance numerical computing (such as in `linalg`, `mhlo`, or TensorFlow)
- representing shapes in the `shape` dialect, which consist of small 1D tensors of `index` data type
- representing aggregations of strings or "variant" types, such as in the TensorFlow dialect in certain cases
- representing large, sparse aggregations of primitive types, suitable for high-performance numerical computing
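A sketch of how disparate these uses already look at the type level (the `!tf.string` element type is the TensorFlow dialect's string type; the specific shapes here are illustrative):

```mlir
// Sketch: the builtin tensor type spans very different element types and uses.
func @tensor_uses(
    %dense: tensor<1024x1024xf32>, // large, dense numeric aggregation
    %shape: tensor<3xindex>,       // small 1-D tensor of `index`, as in the `shape` dialect
    %strs: tensor<4x!tf.string>    // aggregation of strings (TensorFlow)
) {
  return
}
```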
Thus, for the `tensor` dialect, we prefer for now to constrain the documented scope as much as possible. The expectation is that at some point in the future, the `tensor` dialect's scope could be broadened through a careful discussion of the tradeoffs. We believe that this RFC is an incremental step towards reconciling these disparate use cases.
Ops being moved
I propose to move the following ops (with the following suggested renames) from `std` to `tensor`:

- `std.tensor_cast` -> `tensor.cast`
- `std.extract_element` -> `tensor.extract_element`
- `std.tensor_from_elements` -> `tensor.from_elements`
- `std.subtensor` -> `tensor.slice`
- `std.subtensor_insert` -> `tensor.insert_slice`
- `std.dynamic_tensor_from_elements` -> `tensor.generate`
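To make the renames concrete, here is a sketch of how a couple of the moved ops might read after the split, assuming the assembly format simply carries over from the existing `std` versions with the new prefix:

```mlir
// After the split: same semantics as the std versions, new dialect prefix.
func @example(%t: tensor<4x?xf32>, %i: index, %j: index) -> (tensor<?x?xf32>, f32) {
  // std.tensor_cast -> tensor.cast
  %c = tensor.cast %t : tensor<4x?xf32> to tensor<?x?xf32>
  // std.extract_element -> tensor.extract_element
  %e = tensor.extract_element %t[%i, %j] : tensor<4x?xf32>
  return %c, %e : tensor<?x?xf32>, f32
}
```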
Notable non-inclusions (at least initially)

For now, we are not including certain ops that are less obvious and merit their own discussion, which would add too much complexity to this RFC. Postponing decisions on these ops doesn't seem to affect the overall direction of the RFC.

- `std.constant`: In the future, it might make sense for `std.constant` to be disaggregated into `vector.constant`, `tensor.constant`, `int.constant`, etc.
- `std.splat`: will probably need to be handled similarly to `std.constant`, and also enhanced to support dynamic shapes for the tensor case. These discussions should wait until more of the `std` dialect has been split.

- `std.dim` (and `std.rank`): these ops work on both tensors and memrefs. It seems like `tensor.dim/rank` and `memref.dim/rank` might make sense as a split. This discussion should wait until we have split the `memref` dialect out of `std`.
- tensor/memref bridging ops (`std.tensor_load`, `std.tensor_store`, `std.tensor_to_memref`): given that these assume element types compatible with `memref`, they are not suitable for the `tensor` dialect as proposed in this RFC.
Development process
We expect that the split proposed in this RFC can be accomplished in one or a handful of patches, as it is just dialect boilerplate + migrating some op definitions. Some modernization patches might also be involved as part of the split.

After the initial split, we expect to circle back to some of the more ambiguous ops listed in "Notable non-inclusions", which would add a few more ops to the dialect.