
loss nan #15

Open
rangmiao opened this issue Sep 13, 2020 · 1 comment

Comments

@rangmiao

Hello, I have a problem. When I train on the FFHQ dataset at 256x256, both the G loss and D loss become NaN.

Here is the log:
D_loss: nan, D_loss_grad_norm: nan, D_lr: 0.001882
D_reg: 0.002352, D_reg_grad_norm: 0.001569, G_loss: nan
G_loss_grad_norm: nan, G_lr: 0.0016, G_reg: nan
G_reg_grad_norm: nan, pl_avg: nan, seen: 15
0%| | 14/1000000 [00:51<634:57:52, 2.29s/it]

@adriansahlman
Owner

Hey,

Is the loss NaN from the very first iteration, or does it only become NaN after a couple of iterations?

What settings do you use when you run the training?
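To answer the "which iteration" question precisely, it can help to record the first step at which the loss goes NaN rather than eyeballing the progress bar. A minimal, framework-agnostic sketch (the helper name `first_nan_step` is made up here; in a real PyTorch training loop you would check `torch.isnan(loss)` on the tensor instead):

```python
import math

def first_nan_step(loss_values):
    """Return the index of the first NaN loss, or None if all values are finite."""
    for step, loss in enumerate(loss_values):
        if math.isnan(loss):
            return step
    return None

# NaN propagates through arithmetic, so once one loss term goes NaN,
# every later value derived from it (running averages like pl_avg,
# gradient norms, etc.) will typically be NaN as well.
losses = [0.9, 0.7, float("nan"), float("nan")]
print(first_nan_step(losses))  # -> 2
```

Knowing whether that first index is 0 (bad initialization or config) or some later step (numerical blow-up during training) narrows the cause considerably.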
