Commit

Enhance Documentation: Specify Model Saving Criteria Post-Training (#363)

* Enhance Documentation: Specify Model Saving Criteria Post-Training

The current documentation does not specify which model is saved after training: the last checkpoint, the best checkpoint according to a specific metric, or some other criterion. This detail is essential for users familiar with deep learning who need clarity on the model saving procedure. Adding it to the documentation lets users understand and optimize their workflow without having to dig into the codebase.

* Update README.adoc
GrunCrow authored Aug 7, 2024
1 parent 9c2f852 commit 13fa551
Showing 1 changed file with 3 additions and 0 deletions.
README.adoc
@@ -748,6 +748,9 @@ Here is a list of all command line arguments:
--autotune_trials, Number of training runs for hyperparameter tuning. Defaults to 50.
--autotune_executions_per_trial, The number of times a training run with a set of hyperparameters is repeated. Defaults to 1.
----

**The script saves the trained classifier model based on the best validation loss achieved during training. This ensures that the model saved is optimized for performance according to the chosen metric.**

+
. After training, you can use the custom trained classifier with the `--classifier` argument of the `analyze.py` script. If you want to use the custom classifier in Raven, make sure to set `--model_format raven`.
+

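The "best validation loss" saving behavior added to the README above can be sketched in plain Python. This is a minimal illustration of the checkpointing pattern, not the project's actual training code; the `train_epoch` and `evaluate` callables are hypothetical stand-ins for one training pass and one validation pass:

```python
import copy

def train_with_best_checkpoint(model, epochs, train_epoch, evaluate):
    """Train for `epochs` passes, keeping a copy of the model state
    that achieved the lowest validation loss seen so far."""
    best_loss = float("inf")
    best_state = copy.deepcopy(model)
    for _ in range(epochs):
        train_epoch(model)          # one pass over the training data
        val_loss = evaluate(model)  # compute validation loss for this epoch
        if val_loss < best_loss:    # "save" only when the metric improves
            best_loss = val_loss
            best_state = copy.deepcopy(model)
    # the returned state is the best checkpoint, not the last epoch's state
    return best_state, best_loss
```

The key design point is that the final model returned (or written to disk) is a snapshot taken at the best epoch, so a later epoch that overfits and worsens validation loss cannot overwrite it.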