[TensorFlow] how to set -tf-output-arrays command-line option when lowering a tf training pbtxt file

I have been trying to translate TensorFlow models (xxx.pbtxt) into MHLO files (xxx-mhlo.mlir).

For inference, the model graph is weight-frozen and contains only the forward pass, with an obvious output node such as sigmoid or softmax. Taking LeNet as an example, the command is as follows:

tf-mlir-translate -graphdef-to-mlir \
  -tf-enable-shape-inference-on-import=false lenet-infer.pbtxt \
  -tf-input-arrays=input0 -tf-input-data-types=DT_FLOAT \
  -tf-input-shapes=16,784 -tf-output-arrays=softmax0 \
  -o 2lenet.mlir
The inference model achieves the expected results.

For training, the model graph contains both the forward pass and the backward pass, and there is no single obvious output node — the candidates include the loss, the training op, the optimizer, and summary ops. Which op should I set as the output node?

tf-mlir-translate -graphdef-to-mlir \
  -tf-enable-shape-inference-on-import=false lenet-train.pbtxt \
  -tf-input-arrays=input0,label -tf-input-data-types=DT_FLOAT,DT_FLOAT \
  -tf-input-shapes=16,784:16,10 -tf-output-arrays=??? \
  -o 2lenet.mlir

Which node should I set in -tf-output-arrays?

Has anyone made a complete translation from a TF training model pbtxt to MLIR? :slightly_frowning_face:

This question may be better suited for the TensorFlow forum, but I’ll answer a bit here.

tf-mlir-translate is mostly a testing and prototyping tool. The same functions are called internally during our TF runs, but they are invoked from TF (i.e., we don’t shell out to this tool), which is more user friendly and transparent (so these are used on a lot of jobs per day). That said, it is a convenient tool and you can look at the test directory for some examples, but it will have rough edges.

The output names that need to be specified there have no general answer (SavedModel may be easier here, since it has signatures). For training (well, for any graph) the output nodes are the ones whose values you want to return: anything that doesn’t feed into an output may be pruned. There are several ways to make the nodes of interest easier to find:

  1. use name scopes when constructing your model, so the nodes have predictable names;
  2. insert an additional named identity node into your graph and fetch that (you can even modify the graph to insert a print op and then check what feeds into it);
  3. open the graph (in text form or visualized) and find the node of interest (also easier if you use tf.function);
  4. instrument different parts (from Grappler to enabling logging on the executor) and run a single training step to see what is fetched.
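One way to "open the graph in text form" is a simple regex scan over the .pbtxt. The snippet below is a minimal sketch, assuming a made-up sample graph and a hypothetical keyword list; real node names depend entirely on how the model was built, so treat this as illustration only.

```python
import re

# Made-up sample GraphDef text, for illustration only.
SAMPLE_PBTXT = '''
node { name: "input0" op: "Placeholder" }
node { name: "dense/MatMul" op: "MatMul" input: "input0" }
node { name: "loss/Mean" op: "Mean" input: "dense/MatMul" }
node { name: "Adam/update" op: "ApplyAdam" input: "loss/Mean" }
'''

# Hypothetical keywords for training-related nodes; adjust for your model.
KEYWORDS = ("loss", "train", "adam", "optimizer")

def likely_fetch_nodes(pbtxt_text):
    """Return node names whose name contains a training-related keyword."""
    names = re.findall(r'name:\s*"([^"]+)"', pbtxt_text)
    return [n for n in names if any(k in n.lower() for k in KEYWORDS)]

print(likely_fetch_nodes(SAMPLE_PBTXT))  # candidates for -tf-output-arrays
```

Anything this surfaces (here, the loss and the optimizer update) is a candidate to pass to -tf-output-arrays.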

The graphdef does not, in general, capture what will be returned and so that has to be provided.
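Since the GraphDef does not record the fetches, one heuristic is to look for terminal nodes: anything that no other node consumes is a natural -tf-output-arrays candidate, because everything not feeding into an output could be pruned. A minimal sketch, assuming a made-up flat .pbtxt (real files have nested attr blocks, which this toy regex does not handle):

```python
import re

# Made-up flat sample graph; real .pbtxt files contain nested attr { ... }
# blocks that this simplified regex would not parse correctly.
SAMPLE_PBTXT = '''
node { name: "input0" op: "Placeholder" }
node { name: "weights" op: "VariableV2" }
node { name: "matmul" op: "MatMul" input: "input0" input: "weights" }
node { name: "loss" op: "Mean" input: "matmul" }
node { name: "train_op" op: "NoOp" input: "^loss" }
'''

def terminal_nodes(pbtxt_text):
    """Return names of nodes that no other node consumes."""
    consumed, names = set(), []
    for body in re.findall(r'node\s*{([^}]*)}', pbtxt_text):
        names.append(re.search(r'name:\s*"([^"]+)"', body).group(1))
        for inp in re.findall(r'input:\s*"([^"]+)"', body):
            # Strip control-dependency marker (^) and output-port suffix (:N).
            consumed.add(inp.lstrip("^").split(":")[0])
    return [n for n in names if n not in consumed]

print(terminal_nodes(SAMPLE_PBTXT))  # → ['train_op']
```

In this toy graph the only terminal node is the training op, which matches the usual choice of fetch for a training step.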


EDIT: After my initial answer, I noticed that you are asking for the names during training.

I think the utility summarize_graph can help:

summarize_graph \
  --in_graph=${MODELDIR}/${MODELNAME}.${MODELEXT} 2>&1 | tee $FILE

as it will print all the possible input and output tensors of the protobuf file (before it is frozen).

Thanks a million. I gained a lot from your kind and warm answer. I’m going to do the following:

  1. check which nodes are of interest, make sure they feed into at least one output node, and add those output nodes to -tf-output-arrays;
  2. understand the functions used by tf-mlir-translate and try to use them directly.
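Step 1 above amounts to a reverse reachability check: walk backwards from the chosen outputs and see which nodes survive pruning. A minimal sketch over a hypothetical edge list (the node names and edges are made up):

```python
# Hypothetical node -> inputs map; in practice this would be built
# from the GraphDef of the actual model.
GRAPH = {
    "input0": [],
    "label": [],
    "matmul": ["input0"],
    "loss": ["matmul", "label"],
    "train_op": ["loss"],
    "summary": ["loss"],
}

def surviving_nodes(graph, outputs):
    """Return the set of nodes reachable (via inputs) from the outputs."""
    keep, stack = set(), list(outputs)
    while stack:
        node = stack.pop()
        if node in keep:
            continue
        keep.add(node)
        stack.extend(graph.get(node, []))
    return keep

kept = surviving_nodes(GRAPH, ["train_op"])
print(sorted(kept))               # nodes that survive
print(sorted(set(GRAPH) - kept))  # nodes that would be pruned
```

Here fetching only "train_op" keeps the whole forward/backward chain but drops "summary", so if a node matters it must be added to the fetch list (or feed into one of the fetches).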

Any ideas or suggestions are highly welcome.

Thanks a lot. I’ll try.