
Freeze model doesn't work #149

Open
glprophet opened this issue Aug 16, 2018 · 3 comments

Comments

@glprophet

Hi,

I'm trying to freeze a model I trained. It fails with the message: can't find a node "transformer/parallel_0_5/transformer/body/decoder/layer_0/self_attention/multihead_attention/dot_product_attention/Softmax". Your output_node_names are wrong. Could you please fix this, or at least publish a list of the correct node names I can use?

Thanks,
glprophet
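(For context: in TF 1.x, freezing converts checkpoint variables to constants for exactly the nodes listed in output_node_names, which is why a wrong name fails with "can't find node". A minimal sketch of that step, with placeholder paths and node names rather than the actual g2p-seq2seq code:)

import tensorflow as tf

# Minimal sketch of the standard TF 1.x freeze step (not the actual
# g2p-seq2seq code); model_dir and output_node_names are placeholders.
# convert_variables_to_constants fails with "can't find node ..." when a
# listed name does not exist in the graph.
model_dir = "g2p_model"                   # hypothetical model directory
output_node_names = ["some/output/node"]  # must match real graph nodes

checkpoint = tf.train.latest_checkpoint(model_dir)
with tf.Session(graph=tf.Graph()) as sess:
    saver = tf.train.import_meta_graph(checkpoint + ".meta")
    saver.restore(sess, checkpoint)
    frozen = tf.graph_util.convert_variables_to_constants(
        sess, sess.graph_def, output_node_names)
    with tf.gfile.GFile("frozen_g2p.pb", "wb") as f:
        f.write(frozen.SerializeToString())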

@eliaho

eliaho commented Jan 13, 2019

Did anyone manage to solve this issue?

@luduling

luduling commented Aug 1, 2019

Environment: TensorFlow 1.12.0, Tensor2Tensor 1.7

In g2p.py, change the line

output_node_names = ["transformer/parallel_0_5/transformer/body/decoder/layer_0/self_attention/multihead_attention/dot_product_attention/Softmax"...]

to
output_node_names = ["transformer/parallel_0_4/transformer/transformer/body/encoder/"
"layer_0/self_attention/multihead_attention/dot_product_attention/"
"attention_weights",
"transformer/parallel_0_4/transformer/transformer/body/encoder/"
"layer_1/self_attention/multihead_attention/dot_product_attention/"
"attention_weights",
"transformer/parallel_0_4/transformer/transformer/body/encoder/"
"layer_2/self_attention/multihead_attention/dot_product_attention/"
"attention_weights",
"transformer/parallel_0_4/transformer/transformer/body/decoder/"
"layer_0/self_attention/multihead_attention/dot_product_attention/"
"attention_weights",
"transformer/parallel_0_4/transformer/transformer/body/decoder/"
"layer_0/encdec_attention/multihead_attention/dot_product_attention/"
"attention_weights",
"transformer/parallel_0_4/transformer/transformer/body/decoder/"
"layer_1/self_attention/multihead_attention/dot_product_attention/"
"attention_weights",
"transformer/parallel_0_4/transformer/transformer/body/decoder/"
"layer_1/encdec_attention/multihead_attention/dot_product_attention/"
"attention_weights",
"transformer/parallel_0_4/transformer/transformer/body/decoder/"
"layer_2/self_attention/multihead_attention/dot_product_attention/"
"attention_weights",
"transformer/parallel_0_4/transformer/transformer/body/decoder/"
"layer_2/encdec_attention/multihead_attention/dot_product_attention/"
"attention_weights"]

Remember to reinstall with:
python setup.py install

Done!

@glprophet (Author)

Hi,

Thanks for fixing it. I can freeze the graph now. However, in order to use the frozen graph in production, the graph should contain placeholder(s). I guess you should add an input placeholder node to the graph before freezing it. Could you please fix it or post a code snippet of how it can be done?

Thanks,
glprophet
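(Not an official fix, but one common way to do this with a TF 1.x frozen graph is to map a new placeholder onto the graph's original input tensor via the input_map argument of tf.import_graph_def. The file name, placeholder shape, and tensor names below are assumptions; substitute the real input/output tensor names of your frozen graph, e.g. found with the node-listing snippet above.)

import tensorflow as tf

# Minimal sketch, not code from this repo. Assumes TensorFlow 1.x, a frozen
# graph file "frozen_g2p.pb", and HYPOTHETICAL tensor names ("inputs:0" and
# the attention_weights output); replace them with the real names from your
# graph.
with tf.gfile.GFile("frozen_g2p.pb", "rb") as f:
    graph_def = tf.GraphDef()
    graph_def.ParseFromString(f.read())

graph = tf.Graph()
with graph.as_default():
    # New placeholder that will feed the graph's original input tensor.
    inputs = tf.placeholder(tf.int32, shape=[None, None], name="inputs")
    output, = tf.import_graph_def(
        graph_def,
        input_map={"inputs:0": inputs},  # assumed original input tensor name
        return_elements=[
            "transformer/parallel_0_4/transformer/transformer/body/decoder/"
            "layer_2/encdec_attention/multihead_attention/"
            "dot_product_attention/attention_weights:0"],
        name="")

with tf.Session(graph=graph) as sess:
    # Feed a batch of token ids and fetch the output tensor.
    result = sess.run(output, feed_dict={inputs: [[1, 2, 3]]})
    print(result.shape)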
