First, in the example code quoted below, I think the second line is redundant and easily misleads the reader:
logits = tf.matmul(y4, w5) + b5
y = tf.nn.softmax(logits)
cross_entropy = tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=t)
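If the softmax output is wanted at all, I think it belongs only on the prediction path, not in the loss. A minimal sketch of what I mean (the import, layer sizes, and placeholder shapes below are just assumptions for illustration):

import tensorflow as tf

# hypothetical shapes, standing in for y4/w5/b5/t from the snippet above
y4 = tf.placeholder(tf.float32, [None, 200])   # activations of the previous layer
w5 = tf.Variable(tf.zeros([200, 10]))
b5 = tf.Variable(tf.zeros([10]))
t = tf.placeholder(tf.float32, [None, 10])     # one-hot labels

logits = tf.matmul(y4, w5) + b5
# the loss consumes the raw logits; no explicit softmax on this path
cross_entropy = tf.reduce_mean(
    tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=t))
# softmax only where probabilities are actually wanted, e.g. to report predictions
y = tf.nn.softmax(logits)
prediction = tf.argmax(y, 1)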
Second, judging from the comment here: https://github.com/tensorflow/tensorflow/blob/r1.0/tensorflow/examples/tutorials/mnist/mnist_softmax.py#L48
using softmax_cross_entropy_with_logits is mathematically equivalent to applying softmax first, then computing the cross-entropy by hand, then averaging the cross-entropy over the items in the batch, so it does not seem able to avoid the NaN problem caused by log(0) that the text describes.
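To make the comparison concrete, here is a minimal sketch of the two routes I am comparing (the import and placeholder shapes are assumptions; whether the fused op actually behaves better numerically is exactly my question):

import tensorflow as tf

logits = tf.placeholder(tf.float32, [None, 10])  # output of the last linear layer
t = tf.placeholder(tf.float32, [None, 10])       # one-hot labels

# manual route: softmax first, hand-written cross-entropy, mean over the batch
y = tf.nn.softmax(logits)
manual_xent = tf.reduce_mean(-tf.reduce_sum(t * tf.log(y), axis=1))  # tf.log(0) gives -inf, NaN risk

# fused route used by mnist_softmax.py
fused_xent = tf.reduce_mean(
    tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=t))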