
Thank you for your time; I get this error after running the code #8

Open

yangtutuaka opened this issue Jun 7, 2023 · 3 comments

@yangtutuaka
(base) C:\YCRS_DATA\YCR_Code\pytorch-saltnet-master>python train.py --vtf --pretrained imagenet --loss-on-center --batch-size 32 --optim adamw --learning-rate 5e-4 --lr-scheduler noam --basenet senet154 --max-epochs 250 --data-fold fold0 --log-dir runs/fold0 --resume runs/fold0/checkpoints/last-checkpoint-fold0.pth
Load dataset list_train0_3600: 100%|█████████████████████████████████████████| 3599/3599 [00:03<00:00, 1018.55images/s]
Load dataset list_valid0_400: 100%|█████████████████████████████████████████████| 399/399 [00:00<00:00, 974.61images/s]
Load dataset list_valid0_400: 100%|█████████████████████████████████████████████| 399/399 [00:00<00:00, 994.73images/s]
use cuda
N of parameters 827
resuming a checkpoint 'runs/fold0/checkpoints/last-checkpoint-fold0.pth'

Warning the checkpoint 'runs/fold0/checkpoints/last-checkpoint-fold0.pth' doesn't exist! training from scratch!

logging into runs/fold0
training unet...
0%| | 0/250 [00:00<?, ?it/s]C:\Users\ChenRui.Yang\anaconda3\lib\site-packages\torch\optim\lr_scheduler.py:131: UserWarning: Detected call of lr_scheduler.step() before optimizer.step(). In PyTorch 1.1.0 and later, you should call them in the opposite order: optimizer.step() before lr_scheduler.step(). Failure to do this will result in PyTorch skipping the first value of the learning rate schedule. See more details at https://pytorch.org/docs/stable/optim.html#how-to-adjust-learning-rate
warnings.warn("Detected call of lr_scheduler.step() before optimizer.step(). "

@yangtutuaka (Author)

I've tried many different fixes, but none of them work QAQ

@xuyuan (Collaborator)

xuyuan commented Jun 8, 2023

This code was written against PyTorch 0.4; in newer versions you have to call optimizer.step() before lr_scheduler.step().
That said, this seems to be only a warning, and the code should still run the same.
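
A minimal sketch of the ordering newer PyTorch expects; the model, optimizer, and schedule below are placeholders for illustration, not the actual objects train.py builds:

```python
import torch

model = torch.nn.Linear(10, 1)  # stand-in model, not the repo's UNet
optimizer = torch.optim.AdamW(model.parameters(), lr=5e-4)
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=30)

for epoch in range(2):
    for x, y in [(torch.randn(32, 10), torch.randn(32, 1))]:  # dummy batch
        optimizer.zero_grad()
        loss = torch.nn.functional.mse_loss(model(x), y)
        loss.backward()
        optimizer.step()   # update the weights first...
    scheduler.step()       # ...then advance the LR schedule
```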

@yangtutuaka (Author)

Thank you for your time. I had already found that this was the problem, and I tried to fix it myself (I'm not very good at modifying code): after line 357 of train.py I added

optimizer.step()  # call optimizer.step() first

but it didn't work; the run keeps getting stuck at this step and never starts computing.

logging into runs/fold0
training unet...
0%| | 0/250 [00:00<?, ?it/s]C:\Users\ChenRui.Yang\anaconda3\lib\site-packages\torch\optim\lr_scheduler.py:131: UserWarning: Detected call of lr_scheduler.step() before optimizer.step(). In PyTorch 1.1.0 and later, you should call them in the opposite order: optimizer.step() before lr_scheduler.step(). Failure to do this will result in PyTorch skipping the first value of the learning rate schedule. See more details at https://pytorch.org/docs/stable/optim.html#how-to-adjust-learning-rate
warnings.warn("Detected call of lr_scheduler.step() before optimizer.step(). "

The output is still the same, and I don't know where the problem is.
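
For what it's worth, the warning is about the ordering of the calls inside the training loop itself, so a one-off optimizer.step() inserted elsewhere may not change where the two calls actually happen relative to each other. A hedged sketch of the per-iteration ordering, using LambdaLR as a stand-in for whatever noam implementation train.py actually uses:

```python
import torch

model = torch.nn.Linear(10, 1)  # stand-in model for illustration
optimizer = torch.optim.AdamW(model.parameters(), lr=5e-4)
# Noam-style warmup/decay as a LambdaLR multiplier; purely illustrative
warmup = 4000
noam = lambda step: min((step + 1) ** -0.5, (step + 1) * warmup ** -1.5)
scheduler = torch.optim.lr_scheduler.LambdaLR(optimizer, lr_lambda=noam)

for x, y in [(torch.randn(32, 10), torch.randn(32, 1))] * 3:  # dummy batches
    optimizer.zero_grad()
    torch.nn.functional.mse_loss(model(x), y).backward()
    optimizer.step()   # weight update first
    scheduler.step()   # per-iteration LR update second
```

Also, as the warning text itself says, the only consequence of the wrong order is that the first value of the learning-rate schedule is skipped, so the warning alone should not cause training to hang; the stall at 0% likely has a separate cause.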
