Warning the checkpoint 'runs/fold0/checkpoints/last-checkpoint-fold0.pth' doesn't exist! training from scratch!
logging into runs/fold0
training unet...
0%| | 0/250 [00:00<?, ?it/s]C:\Users\ChenRui.Yang\anaconda3\lib\site-packages\torch\optim\lr_scheduler.py:131: UserWarning: Detected call of lr_scheduler.step() before optimizer.step(). In PyTorch 1.1.0 and later, you should call them in the opposite order: optimizer.step() before lr_scheduler.step(). Failure to do this will result in PyTorch skipping the first value of the learning rate schedule. See more details at https://pytorch.org/docs/stable/optim.html#how-to-adjust-learning-rate
warnings.warn("Detected call of lr_scheduler.step() before optimizer.step(). "****
(base) C:\YCRS_DATA\YCR_Code\pytorch-saltnet-master>python train.py --vtf --pretrained imagenet --loss-on-center --batch-size 32 --optim adamw --learning-rate 5e-4 --lr-scheduler noam --basenet senet154 --max-epochs 250 --data-fold fold0 --log-dir runs/fold0 --resume runs/fold0/checkpoints/last-checkpoint-fold0.pth
Load dataset list_train0_3600: 100%|█████████████████████████████████████████| 3599/3599 [00:03<00:00, 1018.55images/s]
Load dataset list_valid0_400: 100%|█████████████████████████████████████████████| 399/399 [00:00<00:00, 974.61images/s]
Load dataset list_valid0_400: 100%|█████████████████████████████████████████████| 399/399 [00:00<00:00, 994.73images/s]
use cuda
N of parameters 827
resuming a checkpoint 'runs/fold0/checkpoints/last-checkpoint-fold0.pth'
logging into runs/fold0
training unet...
0%| | 0/250 [00:00<?, ?it/s]C:\Users\ChenRui.Yang\anaconda3\lib\site-packages\torch\optim\lr_scheduler.py:131: UserWarning: Detected call of lr_scheduler.step() before optimizer.step(). In PyTorch 1.1.0 and later, you should call them in the opposite order: optimizer.step() before lr_scheduler.step(). Failure to do this will result in PyTorch skipping the first value of the learning rate schedule. See more details at https://pytorch.org/docs/stable/optim.html#how-to-adjust-learning-rate
warnings.warn("Detected call of lr_scheduler.step() before optimizer.step(). "