MHLO to LHLO conversion

Hello,

I am trying to lower a basic TensorFlow model to one of the lower-level MLIR dialects such as Linalg or Affine. I am following this pipeline:

TensorFlow model (GraphDef format) -> MLIR TF (standard CF) -> MLIR xla_hlo -> MLIR xla_lhlo -> Linalg / Affine

I assume that is the correct order of lowering, if I am not missing anything.

I used `./tf-mlir-translate --graphdef-to-mlir` to convert the GraphDef to `temp.mlir`.

I also used `./tf-opt --tf-standard-pipeline ./temp.mlir -o temp2.mlir` to bring it to standard.

Now I am trying to lower it to LHLO using
`./tf-opt --xla-hlo-to-lhlo-with-xla temp2.mlir -o lhlo.mlir`, but I am running into an issue here. I am getting the following error message:

```
%constant.10 = f32[200,10]{1,0} constant({…})
xla.mlir:3:1: error: Internal: LHLO opcode constant is not supported.
converting HLO to LHLO
module attributes {tf.versions = {bad_consumers = [], min_consumer = 0 : i32, producer = 0 : i32}} {
^
xla.mlir:3:1: note: see current operation: "module"() ( {
  "func"() ( {
  ^bb0(%arg0: memref<8000xi8>, %arg1: memref<8000xi8>):  // no predecessors
    "std.return"() : () -> ()
  }) {arg0 = {lmhlo.alloc = 0 : index, lmhlo.liveout = true}, arg1 = {lmhlo.alloc = 1 : index}, sym_name = "main.11", type = (memref<8000xi8>, memref<8000xi8>) -> ()} : () -> ()
  "module_terminator"() : () -> ()
}) {tf.versions = {bad_consumers = [], min_consumer = 0 : i32, producer = 0 : i32}} : () -> ()
```

I assume it has something to do with the `kConstant` HloOpcode not being implemented. Am I missing a pass somewhere while lowering, or does it have something to do with the model itself? I am a beginner at this and trying to learn more about MLIR. If I am missing something or doing something wrong, it would be helpful if you could point me in the right direction.

Thanks,
J