Commit 42d0d26 (parent: 199d3b7)

Added condition for different schedulers (#691)

* Added condition for different schedulers
* Added test functions for the schedulers and also updated the documentation
* Modified the test function for configure_optimizers

Showing 6 changed files with 231 additions and 28 deletions.
```diff
@@ -1,35 +1,60 @@
 # Config file for DeepForest pytorch module
 
-#cpu workers for data loaders
-#Dataloaders
+# Cpu workers for data loaders
+# Dataloaders
 workers: 1
 devices: auto
 accelerator: auto
-batch_size: 9999
+batch_size: 1
 
-#Non-max supression of overlapping predictions
-nms_thresh: 0.9
-score_thresh: 0.9
+# Model Architecture
+architecture: 'retinanet'
+num_classes: 1
+nms_thresh: 0.05
 
-train:
+# Architecture specific params
+retinanet:
+    # Non-max supression of overlapping predictions
+    score_thresh: 0.1
+
+train:
     csv_file:
     root_dir:
 
-    #Optomizer initial learning rate
+    # Optimizer initial learning rate
     lr: 0.001
+    scheduler:
+        type:
+        params:
+            # Common parameters
+            T_max: 10
+            eta_min: 0.00001
+            lr_lambda: "lambda epoch: 0.95 ** epoch" # For lambdaLR and multiplicativeLR
+            step_size: 30 # For stepLR
+            gamma: 0.1 # For stepLR, multistepLR, and exponentialLR
+            milestones: [50, 100] # For multistepLR
+
+            # ReduceLROnPlateau parameters (used if type is not explicitly mentioned)
+            mode: "min"
+            factor: 0.1
+            patience: 10
+            threshold: 0.0001
+            threshold_mode: "rel"
+            cooldown: 0
+            min_lr: 0
+            eps: 1e-08
 
-    #Print loss every n epochs
+    # Print loss every n epochs
     epochs: 1
-    #Useful debugging flag in pytorch lightning, set to True to get a single batch of training to test settings.
+    # Useful debugging flag in pytorch lightning, set to True to get a single batch of training to test settings.
     fast_dev_run: False
-    #pin images to GPU memory for fast training. This depends on GPU size and number of images.
+    # pin images to GPU memory for fast training. This depends on GPU size and number of images.
     preload_images: False
 
 validation:
-    #callback args
+    # callback args
     csv_file:
     root_dir:
-    #Intersection over union evaluation
+    # Intersection over union evaluation
     iou_threshold: 0.4
-    val_accuracy_interval: 5
+    val_accuracy_interval: 20
```
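The new `scheduler` block is selected by a `type` string, with `ReduceLROnPlateau` as the fallback when no type is given. A minimal sketch of how such a config could be dispatched to `torch.optim.lr_scheduler` classes — the helper name `build_scheduler` and the exact `type` strings are assumptions for illustration, not DeepForest's actual code:

```python
import torch
from torch.optim.lr_scheduler import (
    CosineAnnealingLR, LambdaLR, MultiplicativeLR,
    StepLR, MultiStepLR, ExponentialLR, ReduceLROnPlateau,
)


def build_scheduler(optimizer, scheduler_type, params):
    """Map a config 'type' string plus its 'params' dict to a torch scheduler.

    Hypothetical helper; defaults mirror the values in the config above.
    """
    if scheduler_type == "cosine":
        return CosineAnnealingLR(optimizer, T_max=params.get("T_max", 10),
                                 eta_min=params.get("eta_min", 0.00001))
    if scheduler_type in ("lambdaLR", "multiplicativeLR"):
        # The config stores the lambda as a string, so it must be eval'd here
        fn = eval(params.get("lr_lambda", "lambda epoch: 0.95 ** epoch"))
        cls = LambdaLR if scheduler_type == "lambdaLR" else MultiplicativeLR
        return cls(optimizer, lr_lambda=fn)
    if scheduler_type == "stepLR":
        return StepLR(optimizer, step_size=params.get("step_size", 30),
                      gamma=params.get("gamma", 0.1))
    if scheduler_type == "multistepLR":
        return MultiStepLR(optimizer, milestones=params.get("milestones", [50, 100]),
                           gamma=params.get("gamma", 0.1))
    if scheduler_type == "exponentialLR":
        return ExponentialLR(optimizer, gamma=params.get("gamma", 0.1))
    # Fallback: ReduceLROnPlateau when the type is not explicitly mentioned
    return ReduceLROnPlateau(
        optimizer,
        mode=params.get("mode", "min"),
        factor=params.get("factor", 0.1),
        patience=params.get("patience", 10),
        threshold=params.get("threshold", 0.0001),
        threshold_mode=params.get("threshold_mode", "rel"),
        cooldown=params.get("cooldown", 0),
        min_lr=params.get("min_lr", 0),
        eps=params.get("eps", 1e-08),
    )
```

In a LightningModule, the returned object would typically be wrapped in the dict that `configure_optimizers` expects (with a `monitor` key when the fallback `ReduceLROnPlateau` is used).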