Great question again, and to answer it I point to the history of TF again: it used to have stateful TensorArrays with dynamic sizes, but then moved to stateless TensorLists.
When one pushes to or pops from these (variant-based) lists, one gets back a new list with a new size. So semantically it is NOT a dynamically sized list; each list is literally fixed-size.
The TF runtime notices when the input list is never used again and then mutates the list in place; that is a lower-level dialect or runtime optimization. Thus a tensor of tensor objects can be a valid representation.
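To make the stateless semantics concrete, here is a minimal pure-Python sketch (hypothetical names, not the actual TF API): `push`/`pop` always return a *new* fixed-size list, and the `input_is_dead` flag stands in for the liveness analysis that lets the runtime reuse storage in place:

```python
class TensorList:
    """Stateless list sketch: push/pop return a new fixed-size list.

    Callers never observe mutation; in-place reuse is purely an
    optimization applied when the input handle is provably dead.
    """
    def __init__(self, elems=()):
        self._elems = list(elems)

    def push(self, t, input_is_dead=False):
        # Semantically: copy + append, yielding a new list of size n+1.
        if input_is_dead:
            # Runtime optimization: the old handle is never used again,
            # so reusing its storage is unobservable.
            self._elems.append(t)
            return self
        return TensorList(self._elems + [t])

    def pop(self):
        # Returns (new list of size n-1, popped element); the input
        # list keeps its original size.
        *rest, last = self._elems
        return TensorList(rest), last

    def __len__(self):
        return len(self._elems)


l0 = TensorList()
l1 = l0.push("a")
l2 = l1.push("b")
assert (len(l0), len(l1), len(l2)) == (0, 1, 2)  # each handle keeps its size
```

Each handle having its own immutable size is exactly what makes the list "fixed size" at the semantic level, even though the runtime may append in place underneath.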
As to the question of uniformity:
It is uniform in the fact that the tensor of tensor objects will have types;
beyond that, it can be a tensor holding 2 tensors where those 2 tensors CAN have 2 different sizes (same rank, though). This is absolutely necessary because it is an actual use case: imagine a 2-iteration while loop with a concat going on inside it; the intermediates spat out for use in the gradient pass and added to this list will have different sizes across iterations.
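The loop scenario above can be sketched in pure Python (lists-of-lists standing in for rank-2 tensors; the concat and shapes are illustrative, not taken from any specific model):

```python
def shape(t):
    # Shape of a rank-2 list-of-lists "tensor": (rows, cols).
    return (len(t), len(t[0]))

x = [[1, 2, 3]]          # starts at shape (1, 3)
saved = []               # the "tensor of tensors" kept for the gradient pass
for _ in range(2):
    saved.append(x)      # stash the intermediate BEFORE the concat
    x = x + x            # concat along axis 0: rows double each iteration

shapes = [shape(t) for t in saved]
# shapes == [(1, 3), (2, 3)]: uniform rank 2, but different leading sizes
```

The two saved intermediates share rank and element type, yet differ in their leading dimension, which is why the element type of such a list cannot require a single static size.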
This is actually something missing in the scf dialect's type validation, which we pointed out in previous posts and are trying to upstream a potential solution for.