In listing 3.5, I think the weights are not reinitialized before each run. If I add a line to print out model weights, we can see that the model weights are non-zero after the first time through the loop.
```python
# Names like w, cost, train_op, X, Y, x_train, etc. are defined earlier in Listing 3.5.
sess = tf.Session()
init = tf.global_variables_initializer()
sess.run(init)
for reg_lambda in np.linspace(0, 1, 100):
    w_val = sess.run(w); print(w_val)  # <----------- ADDED THIS -------------
    for epoch in range(training_epochs):
        sess.run(train_op, feed_dict={X: x_train, Y: y_train})
    final_cost = sess.run(cost, feed_dict={X: x_test, Y: y_test})
    print('reg lambda', reg_lambda)
    print('final cost', final_cost)
sess.close()
```
This prints the weight values; after the first pass through the outer loop they are already non-zero, so each subsequent lambda starts from the previous run's trained weights.
I think the correct behavior is to reinitialize the weights for each value of lambda. I also captured the cost for each lambda.
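To illustrate the proposed fix in a self-contained way, here is a minimal NumPy sketch (not the book's TensorFlow code) of the same experiment: a single-weight linear model trained by gradient descent, with the weight reinitialized for every value of lambda. With reinitialization, the final regularized cost actually changes as lambda changes. The data, learning rate, and epoch count are my own illustrative choices, not values from Listing 3.5.

```python
import numpy as np

np.random.seed(0)
x_train = np.linspace(-1, 1, 101)
y_train = 2 * x_train + np.random.randn(*x_train.shape) * 0.3

def train(reg_lambda, epochs=500, lr=0.1):
    w = 0.0  # reinitialize the weight for every lambda -- the fix
    for _ in range(epochs):
        y_pred = w * x_train
        # gradient of mean squared error plus L2 penalty reg_lambda * w**2
        grad = np.mean(2 * (y_pred - y_train) * x_train) + 2 * reg_lambda * w
        w -= lr * grad
    final_cost = np.mean((w * x_train - y_train) ** 2) + reg_lambda * w ** 2
    return w, final_cost

costs = [train(lam)[1] for lam in np.linspace(0, 1, 100)]
# With fresh weights per lambda, the cost curve is no longer flat:
# larger lambda shrinks w and raises the regularized cost.
```

In the TensorFlow version, the analogous fix is simply moving `sess.run(init)` inside the `for reg_lambda in ...` loop.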
Now if I plot lambda against final cost, the result is a horizontal line. It seems that changing reg_lambda has no effect at all, but that is hard to notice when the weights are reused across runs.