Hi!
Thank you for sharing your work.
When I run the following on Google Colab:
python -u script/gen.py --config_path ./config/qm9_default.yml --generator ConfGF --smiles c1ccccc1
it fails with: RuntimeError: The 'data' object was created by an older version of PyG. If this error occurred while loading an already existing dataset, remove the 'processed/' directory in the dataset's root folder and try again.
Could you please tell me how to fix this?
Full output:
Let's use 1 GPUs!
Using device cuda:0 as main device
{'train': {'batch_size': 128, 'seed': 2021, 'epochs': 300, 'shuffle': True, 'resume_train': False, 'eval': True, 'num_workers': 0, 'gpus': [0], 'anneal_power': 2.0, 'save': True, 'save_path': '/home/shichenc/scratch/confgf/train', 'resume_checkpoint': None, 'resume_epoch': None, 'log_interval': 400, 'optimizer': {'type': 'Adam', 'lr': 0.001, 'weight_decay': 0.0, 'dropout': 0.0}, 'scheduler': {'type': 'plateau', 'factor': 0.6, 'patience': 10, 'min_lr': '1e-4'}, 'device': device(type='cuda', index=0)}, 'test': {'init_checkpoint': '/home/shichenc/scratch/confgf/train/qm9_default', 'output_path': '/home/shichenc/scratch/confgf/test/qm9_default', 'epoch': 284, 'gen': {'dg_step_size': 3.0, 'dg_num_steps': 1000, 'steps_d': 100, 'step_lr_d': 2e-06, 'steps_pos': 100, 'step_lr_pos': 2.4e-06, 'clip': 1000, 'min_sigma': 0.0, 'verbose': 1}}, 'data': {'base_path': '/content/', 'dataset': 'qm9', 'train_set': 'train_data_40k.pkl', 'val_set': 'val_data_5k.pkl', 'test_set': 'test_data_200.pkl'}, 'model': {'name': 'qm9_default', 'hidden_dim': 256, 'num_convs': 4, 'sigma_begin': 10, 'sigma_end': 0.01, 'num_noise_level': 50, 'order': 3, 'mlp_act': 'relu', 'gnn_act': 'relu', 'cutoff': 10.0, 'short_cut': True, 'concat_hidden': False, 'noise_type': 'symmetry', 'edge_encoder': 'mlp'}}
set seed for random, numpy and torch
loading data from /content/qm9_processed
train size : 0 || val size: 0 || test size: 24068
loading data done!
got 200 molecules with 24068 confs
Traceback (most recent call last):
File "script/gen.py", line 92, in
test_data = dataset.GEOMDataset_PackedConf(data=test_data, transform=transform)
File "/content/ConfGF/confgf/dataset/dataset.py", line 449, in __init__
self._pack_data_by_mol()
File "/content/ConfGF/confgf/dataset/dataset.py", line 469, in _pack_data_by_mol
data = copy.deepcopy(v[0])
File "/usr/local/lib/python3.7/copy.py", line 161, in deepcopy
y = copier(memo)
File "/usr/local/lib/python3.7/site-packages/torch_geometric/data/data.py", line 392, in __deepcopy__
out._store._parent = out
File "/usr/local/lib/python3.7/site-packages/torch_geometric/data/data.py", line 358, in __getattr__
"The 'data' object was created by an older version of PyG. "
RuntimeError: The 'data' object was created by an older version of PyG. If this error occurred while loading an already existing dataset, remove the 'processed/' directory in the dataset's root folder and try again.
Thank you for sharing your solutions. ConfGF was implemented with an older version of PyG. We plan to integrate ConfGF into our TorchDrug platform in the near future.
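Since the traceback shows PyG 2.x (`data.py` raising in `__getattr__`) refusing to handle `Data` objects pickled by an older PyG, one possible workaround (not confirmed by the maintainers) is to downgrade torch-geometric to a pre-2.0 release. The version constraint below is an assumption; match the companion packages (torch-scatter, torch-sparse, etc.) to your PyTorch/CUDA build as described in the PyG installation docs.

```shell
# Hypothetical workaround: the error comes from PyG >= 2.0 rejecting Data
# objects serialized by PyG 1.x, so pin torch-geometric below 2.0.
# Exact versions are illustrative -- adjust to your PyTorch/CUDA setup.
pip uninstall -y torch-geometric
pip install "torch-geometric<2.0"
```

After reinstalling, re-run the generation script in a fresh runtime so the old module is not still imported.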