In Chapter 18's `train` method, the `if validation_data is not None:` clause is aligned with the outer `for` loop, which is understandable since we only need to validate the data once, after the training process.
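Schematically (a toy sketch of the control flow only, with illustrative names, not the book's actual code), that alignment means validation executes exactly once, after all epochs complete:

```python
# Minimal sketch of the Chapter 18-style control flow: the validation
# block is at the same indentation level as the epoch loop, so it runs
# once, after training finishes. Names here are illustrative.
def train(epochs, validation_data=None):
    log = []
    for epoch in range(1, epochs + 1):
        log.append(f'train epoch {epoch}')  # one training pass per epoch

    # Aligned with the outer `for`, so this executes exactly once
    if validation_data is not None:
        log.append('validate')
    return log

print(train(3, validation_data=[1, 2, 3]))
# ['train epoch 1', 'train epoch 2', 'train epoch 3', 'validate']
```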
In the Chapter 19 sample code, we have:
```python
# Main training loop
for epoch in range(1, epochs+1):

    # Print epoch number
    print(f'epoch: {epoch}')

    # Reset accumulated values in loss and accuracy objects
    self.loss.new_pass()
    self.accuracy.new_pass()

    # Iterate over steps
    for step in range(train_steps):

        # If batch size is not set -
        # train using one step and full dataset
        if batch_size is None:
            batch_X = X
            batch_y = y

        # Otherwise slice a batch
        else:
            batch_X = X[step*batch_size:(step+1)*batch_size]
            batch_y = y[step*batch_size:(step+1)*batch_size]

        # Perform the forward pass
        output = self.forward(batch_X, training=True)

        # Calculate loss
        data_loss, regularization_loss = \
            self.loss.calculate(output, batch_y,
                                include_regularization=True)
        loss = data_loss + regularization_loss

        # Get predictions and calculate an accuracy
        predictions = self.output_layer_activation.predictions(
                          output)
        accuracy = self.accuracy.calculate(predictions,
                                           batch_y)

        # Perform backward pass
        self.backward(output, batch_y)

        # Optimize (update parameters)
        self.optimizer.pre_update_params()
        for layer in self.trainable_layers:
            self.optimizer.update_params(layer)
        self.optimizer.post_update_params()

        # Print a summary
        if not step % print_every or step == train_steps - 1:
            print(f'step: {step}, ' +
                  f'acc: {accuracy:.3f}, ' +
                  f'loss: {loss:.3f} (' +
                  f'data_loss: {data_loss:.3f}, ' +
                  f'reg_loss: {regularization_loss:.3f}), ' +
                  f'lr: {self.optimizer.current_learning_rate}')

    # Get and print epoch loss and accuracy
    epoch_data_loss, epoch_regularization_loss = \
        self.loss.calculate_accumulated(
            include_regularization=True)
    epoch_loss = epoch_data_loss + epoch_regularization_loss
    epoch_accuracy = self.accuracy.calculate_accumulated()

    print(f'training, ' +
          f'acc: {epoch_accuracy:.3f}, ' +
          f'loss: {epoch_loss:.3f} (' +
          f'data_loss: {epoch_data_loss:.3f}, ' +
          f'reg_loss: {epoch_regularization_loss:.3f}), ' +
          f'lr: {self.optimizer.current_learning_rate}')

    # If there is the validation data
    if validation_data is not None:

        # Reset accumulated values in loss
        # and accuracy objects
        self.loss.new_pass()
        self.accuracy.new_pass()

        # Iterate over steps
        for step in range(validation_steps):

            # If batch size is not set -
            # train using one step and full dataset
            if batch_size is None:
                batch_X = X_val
                batch_y = y_val

            # Otherwise slice a batch
            else:
                batch_X = X_val[
                    step*batch_size:(step+1)*batch_size
                ]
                batch_y = y_val[
                    step*batch_size:(step+1)*batch_size
                ]

            # Perform the forward pass
            output = self.forward(batch_X, training=False)

            # Calculate the loss
            self.loss.calculate(output, batch_y)

            # Get predictions and calculate an accuracy
            predictions = self.output_layer_activation.predictions(
                              output)
            self.accuracy.calculate(predictions, batch_y)

        # Get and print validation loss and accuracy
        validation_loss = self.loss.calculate_accumulated()
        validation_accuracy = self.accuracy.calculate_accumulated()

        # Print a summary
        print(f'validation, ' +
              f'acc: {validation_accuracy:.3f}, ' +
              f'loss: {validation_loss:.3f}')
```
If I was correct that

> we only need to validate the data once after the training process

then why should we validate on the test data every epoch?