Loss sometimes goes to nan even with the gradient clipping #2

Open
carpedm20 opened this issue Dec 31, 2015 · 6 comments

@carpedm20
Owner

Haven't figured out why yet, and any advice on this is welcome!
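For readers unfamiliar with the clipping the title refers to, here is a minimal sketch of global-norm gradient clipping in TensorFlow 2; the optimizer, loss, and clip_norm value are illustrative assumptions, not the repo's actual code. Note that clipping only rescales finite gradients; it cannot repair a loss that has already become nan.

```python
import tensorflow as tf

optimizer = tf.keras.optimizers.Adam(1e-3)

def train_step(model, x, y):
    with tf.GradientTape() as tape:
        loss = tf.reduce_mean(
            tf.keras.losses.sparse_categorical_crossentropy(y, model(x)))
    grads = tape.gradient(loss, model.trainable_variables)
    # Rescale all gradients together so their global norm is at most 10.
    grads, _ = tf.clip_by_global_norm(grads, clip_norm=10.0)
    optimizer.apply_gradients(zip(grads, model.trainable_variables))
    return loss
```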

@carpedm20 carpedm20 added the bug label Dec 31, 2015
@jli05

jli05 commented Dec 31, 2015

Not sure if it's related, but try softmax(xxxx + eps).

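The suggestion above is terse; one common reading (an assumption on my part) is to keep the softmax output away from exact zero before it reaches a log, which is the usual way a softmax-based loss produces nan. A minimal TensorFlow 2 sketch with placeholder tensors:

```python
import tensorflow as tf

eps = 1e-8  # small constant; the exact value is a judgement call

# Hypothetical stand-ins for the real tensors in the model.
logits = tf.random.normal([4, 10])      # e.g. addressing scores from a head
targets = tf.one_hot([1, 2, 3, 4], 10)  # e.g. one-hot targets for a cross-entropy loss

probs = tf.nn.softmax(logits)

# Keep the probabilities away from exact zero before they reach a log,
# so log() never sees 0 and the loss stays finite.
loss = -tf.reduce_sum(targets * tf.math.log(probs + eps), axis=-1)
```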

@carpedm20
Owner Author

@jli05 Thanks! I'll try it. So far I could only train the NTM without nan loss with max_length=10. If max_length goes above 10, I think we need more than 100,000 epochs, which is different from the referenced code.

@EderSantana

@carpedm20 in my NTM implementation (and in a couple of others I saw out there) nans were usually caused by one of the following:

  • Initializing the memory to zero. The memory appears in the denominator of the cosine distance, which makes the result nan. Check whether that is your case; add a small constant to the denominator and initialize the memory to a small constant instead of all zeros (see the sketch after this comment).
  • A negative sharpening value. That creates a complex number and also makes the cost function go nan.

I think there was a third case but I don't remember right now. Good luck debugging! :D
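A minimal sketch of both guards, assuming TensorFlow 2; the shapes, the eps value, and the 1e-6 memory fill are illustrative, not the repo's actual choices:

```python
import tensorflow as tf

def cosine_similarity(key, memory, eps=1e-6):
    """Content-addressing similarity between a key and every memory row.

    key:    [batch, word_size]
    memory: [batch, mem_slots, word_size]
    A small eps in the denominator keeps the division finite even when a
    memory row (or the key) is exactly zero.
    """
    dot = tf.einsum('bw,bnw->bn', key, memory)        # [batch, mem_slots]
    key_norm = tf.norm(key, axis=-1, keepdims=True)   # [batch, 1]
    mem_norm = tf.norm(memory, axis=-1)               # [batch, mem_slots]
    return dot / (key_norm * mem_norm + eps)

# Initialize the memory to a small constant rather than all zeros.
batch_size, mem_slots, word_size = 1, 128, 20         # illustrative sizes
memory = tf.fill([batch_size, mem_slots, word_size], 1e-6)
```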

@lixiangnlp

@EderSantana could you explain what a negative sharpening value means? Thanks

@EderSantana

The sharpening value is used as pow(input, sharpening), so it can't be negative. Use a nonlinearity like softplus to avoid getting negative values: sharpening = tf.nn.softplus(sharpening).

@therealjtgill

Having a negative sharpening value wouldn't make a real number become imaginary. But in the paper Graves explicitly states that the sharpening value is >= 1, so softplus(gamma) + 1 would work fine.

a^(-b) = 1/(a^b)
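Putting the thread's recommendation together, a minimal sketch assuming TensorFlow 2; the eps added to the base of pow is my own guard (pow's gradient with respect to gamma involves the log of the base, which is nan at exactly zero), not something stated above:

```python
import tensorflow as tf

def sharpen(weights, gamma_raw, eps=1e-8):
    """Sharpening step w_i^gamma / sum_j w_j^gamma with gamma >= 1.

    weights:   [batch, mem_slots] attention weights (sum to 1 on the last axis)
    gamma_raw: [batch, 1] unconstrained network output
    """
    gamma = tf.nn.softplus(gamma_raw) + 1.0        # guarantees gamma >= 1
    # eps keeps the base strictly positive so the pow gradient stays finite.
    powed = tf.pow(weights + eps, gamma)
    return powed / tf.reduce_sum(powed, axis=-1, keepdims=True)

# Illustrative usage with random inputs.
w = tf.nn.softmax(tf.random.normal([2, 8]))
g = tf.random.normal([2, 1])
sharp_w = sharpen(w, g)
```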
