loading dataset ...
loading_model ...
2020-11-02 16:06:51,083 - mmaction - INFO - These parameters in pretrained checkpoint are not loaded: {'fc.weight', 'fc.bias'}
2020-11-02 16:06:51,142 - mmaction - INFO - load checkpoint from https://openmmlab.oss-accelerate.aliyuncs.com/mmaction/recognition/tsn/tsn_r50_video_1x1x8_100e_kinetics400_rgb/tsn_r50_video_1x1x8_100e_kinetics400_rgb_20200702-568cde33.pth
creating workspace directory ...
Done
Starting to train ...
2020-11-02 16:06:53,802 - mmaction - WARNING - The model and loaded state dict do not match exactly
size mismatch for cls_head.fc_cls.weight: copying a param with shape torch.Size([400, 2048]) from checkpoint, the shape in current model is torch.Size([13, 2048]).
size mismatch for cls_head.fc_cls.bias: copying a param with shape torch.Size([400]) from checkpoint, the shape in current model is torch.Size([13]).
2020-11-02 16:06:53,805 - mmaction - INFO - Start running, host: root@5f8f50143665, work_dir: /content/work_dirs
2020-11-02 16:06:53,806 - mmaction - INFO - workflow: [('train', 1)], max: 100 epochs
2020-11-02 16:08:10,766 - mmaction - INFO - Epoch [1][10/92] lr: 1.000e-03, eta: 19:38:39, time: 7.695, data_time: 6.966, memory: 6252, top1_acc: 0.1375, top5_acc: 0.5375, loss_cls: 2.5019, loss: 2.5019, grad_norm: 9.7265
2020-11-02 16:08:51,116 - mmaction - INFO - Epoch [1][20/92] lr: 1.000e-03, eta: 14:57:22, time: 4.035, data_time: 3.349, memory: 6252, top1_acc: 0.1750, top5_acc: 0.6625, loss_cls: 2.3643, loss: 2.3643, grad_norm: 9.8010
2020-11-02 16:10:06,899 - mmaction - INFO - Epoch [1][30/92] lr: 1.000e-03, eta: 16:23:39, time: 7.578, data_time: 6.895, memory: 6252, top1_acc: 0.2125, top5_acc: 0.6625, loss_cls: 2.3160, loss: 2.3160, grad_norm: 10.1025
RuntimeError Traceback (most recent call last)
in &lt;module&gt;()
----> 1 gtf.Train();
6 frames
/usr/local/lib/python3.6/dist-packages/torch/_utils.py in reraise(self)
383 # (https://bugs.python.org/issue2651), so we work around it.
384 msg = KeyErrorMessage(msg)
--> 385 raise self.exc_type(msg)
RuntimeError: Caught RuntimeError in DataLoader worker process 2.
Original Traceback (most recent call last):
File "/usr/local/lib/python3.6/dist-packages/torch/utils/data/_utils/worker.py", line 178, in _worker_loop
data = fetcher.fetch(index)
File "/usr/local/lib/python3.6/dist-packages/torch/utils/data/_utils/fetch.py", line 44, in fetch
data = [self.dataset[idx] for idx in possibly_batched_index]
File "/usr/local/lib/python3.6/dist-packages/torch/utils/data/_utils/fetch.py", line 44, in
data = [self.dataset[idx] for idx in possibly_batched_index]
File "/content/Monk_Object_Detection/18_mmaction/lib/mmaction/datasets/base.py", line 134, in getitem
return self.prepare_train_frames(idx)
File "/content/Monk_Object_Detection/18_mmaction/lib/mmaction/datasets/base.py", line 116, in prepare_train_frames
return self.pipeline(results)
File "/content/Monk_Object_Detection/18_mmaction/lib/mmaction/datasets/pipelines/compose.py", line 41, in call
data = t(data)
File "/content/Monk_Object_Detection/18_mmaction/lib/mmaction/datasets/pipelines/loading.py", line 698, in call
container = decord.VideoReader(file_obj, num_threads=self.num_threads)
File "/usr/local/lib/python3.6/dist-packages/decord/video_reader.py", line 47, in init
raise RuntimeError("Error reading " + uri + "...")
RuntimeError: Error reading 17727488 bytes...
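The final frame shows decord failing to decode one of the training videos partway through epoch 1, so the most likely cause is a corrupt or truncated file in the dataset. Below is a minimal diagnostic sketch (not from the original thread) that tries to open and decode every video with decord directly and reports the ones that fail; the annotation-file format and directory layout are assumptions, so adjust them to match your dataset.

```python
# Hypothetical diagnostic: attempt to decode each training video with decord and
# report the ones that fail, i.e. the files that trigger "Error reading ... bytes".
import os
import decord

VIDEO_ROOT = "Dataset/Videos"          # assumed video directory
ANN_FILE = "Dataset/train_labels.txt"  # assumed format: "relative/path.mp4 label" per line

bad_files = []
with open(ANN_FILE) as f:
    for line in f:
        rel_path = line.split()[0]
        path = os.path.join(VIDEO_ROOT, rel_path)
        try:
            reader = decord.VideoReader(path, num_threads=1)
            _ = reader[0]  # force-decode the first frame
        except Exception as exc:
            bad_files.append((path, exc))

print("{} unreadable file(s)".format(len(bad_files)))
for path, exc in bad_files:
    print(path, "->", exc)
```

Re-encoding or removing the reported files should let training run past the point where it crashed. Another option is to set workers_per_gpu to 0 in the generated mmaction config so that loading happens in the main process and the failing file path appears directly in the traceback, instead of being re-raised from DataLoader worker 2.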