run on METR-LA and PEMS-BAY #14

Closed

nnzhan opened this issue Feb 18, 2019 · 10 comments

@nnzhan

nnzhan commented Feb 18, 2019

Hi Dr. Yin,

I am wondering if you can make the code run on the METR-LA and PEMS-BAY datasets with the standard split given by https://github.com/liyaguang/DCRNN. I personally tried to run your code on those two datasets but could not get the expected results. If so, I will include your method as a baseline in my recent work.

Thanks
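
For context, the "standard split" in the DCRNN repository is produced by slicing the raw sensor readings into sliding windows and dividing them chronologically into roughly 70% train / 10% validation / 20% test, with inputs z-scored by the training mean and standard deviation. A minimal sketch of that preprocessing follows; the file name, window lengths, and exact ratios are assumptions for illustration, not details taken from this thread:

```python
# Hypothetical sketch of the DCRNN-style "standard split": sliding-window
# samples over the sensor readings, divided chronologically into roughly
# 70% train / 10% val / 20% test. File name and window lengths are
# illustrative assumptions.
import numpy as np
import pandas as pd

df = pd.read_hdf('metr-la.h5')            # (num_timesteps, num_sensors) speed table
data = df.values[..., np.newaxis]         # add a feature dimension

seq_len, horizon = 12, 12                 # 12 x 5 min history -> 12 x 5 min forecast
xs, ys = [], []
for t in range(seq_len, len(data) - horizon + 1):
    xs.append(data[t - seq_len:t])
    ys.append(data[t:t + horizon])
x, y = np.stack(xs), np.stack(ys)

n = x.shape[0]
n_train, n_val = round(n * 0.7), round(n * 0.1)
splits = {
    'train': (x[:n_train], y[:n_train]),
    'val':   (x[n_train:n_train + n_val], y[n_train:n_train + n_val]),
    'test':  (x[n_train + n_val:], y[n_train + n_val:]),
}

# DCRNN z-scores the inputs with the *training* mean/std at load time.
mean, std = splits['train'][0].mean(), splits['train'][0].std()
for name, (xi, yi) in splits.items():
    np.savez_compressed(f'{name}.npz', x=(xi - mean) / std, y=yi)
```

The DCRNN repository ships its own data-generation script that produces equivalent train/val/test .npz files, which is what the baselines compared in this thread typically consume.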

@LongWeiZJU

Yeah, I have the same question. It is really time-consuming to handle the PeMS dataset.

@VeritasYin
Owner

Hi,
Sorry for the late reply.
As for test results on METR-LA, you may refer to our recent work at https://arxiv.org/abs/1903.05631. We also provide a comparison between DCRNN and STGCN on PeMS(M/L).
Regards, ht

VeritasYin pinned this issue Mar 14, 2019
@xumingxingsjtu

Hello, I also have some questions about the results! I have run experiments on the METR-LA, PEMS-BAY, and PeMS(M/L) datasets, and the results are not stable even under the same experimental setting. I wonder how you obtained the results presented in the paper: are they the mean value or the best run? How many times did you train/test under the same setting to obtain the mean value, or by what strategy did you obtain the best results? And how did you perform the T-test and obtain the reported significance (α = 0.01, P < 0.01)?

@VeritasYin
Owner

Hi @xumingxingsjtu

Thank you for your feedback. I'm afraid more details are needed to locate the problem in your specific case:

  1. What is your current running environment (OS, TensorFlow version, GPU, etc.)?
  2. How do you preprocess the raw data in PeMS(M/L)? In particular, which sensor stations did you select for the experiment, and how did you construct the corresponding adjacency matrix? (See the kernel-construction sketch after this list.)
  3. How do you pick hyper-parameters such as the learning rate, optimizer, batch size, and number of training epochs?
  4. How do you measure the performance of the model: which metrics do you apply for evaluation, and how do you calculate them?
  5. When you refer to 'stable', could you give more details about the model's performance?
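
For reference, both DCRNN and STGCN-style preprocessing typically derive the weighted adjacency matrix from pairwise inter-sensor distances with a thresholded Gaussian kernel, w_ij = exp(-d_ij² / σ²), zeroed when the value falls below a cutoff ε. A minimal sketch is below; the σ², ε, and random distances are placeholder choices for illustration, not the paper's exact preprocessing:

```python
# Illustrative only: thresholded Gaussian kernel adjacency.
# dist[i, j] is assumed to hold the pairwise distance between sensors i and j;
# sigma2 and epsilon are placeholder values.
import numpy as np

def weighted_adjacency(dist, sigma2=10.0, epsilon=0.5):
    w = np.exp(-np.square(dist) / sigma2)   # closer sensors -> larger weight
    w[w < epsilon] = 0.0                    # drop weak connections (sparsify)
    np.fill_diagonal(w, 0.0)                # no self-loops
    return w

# Hypothetical usage with random symmetric distances for 228 stations
# (228 is the station count of PeMSD7(M)):
rng = np.random.default_rng(0)
upper = np.triu(rng.uniform(0.0, 10.0, size=(228, 228)), 1)
dist = upper + upper.T
W = weighted_adjacency(dist)
print(W.shape, (W > 0).mean())
```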

As for your questions:

  1. The results in our paper are mean values.
  2. We repeatedly train and test 5 times to obtain the final result, and we use a grid-search strategy to find the best set of hyper-parameters.
  3. As for the T-test, we compare the targeted model GCGRU with our model STGCN under the 15, 30, and 45 min settings, respectively. For each model at each time slot, we use the outputs of the 5 train/test runs to calculate the mean, standard deviation, and standard error of the mean. Then we conduct an independent-samples test for both models in SPSS. The report shows that our model outperforms GCGRU with statistical significance (two-tailed T-test, α = 0.01, P < 0.01). (See the sketch after this comment.)

If you have further questions, please do not hesitate to contact me. You can also reach me by email if you would like to share the details of your training issues.

Thank you, ht
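
To make the testing procedure above concrete, here is a minimal SciPy sketch of a two-tailed independent-samples t-test over 5 runs per model; the MAE values are made-up placeholders, not results from the paper:

```python
# Illustration of the described significance test: 5 runs per model,
# two-tailed independent-samples t-test at alpha = 0.01.
# The MAE values below are placeholders, NOT results from the paper.
import numpy as np
from scipy import stats

stgcn_mae = np.array([2.25, 2.26, 2.24, 2.27, 2.25])   # hypothetical 5-run MAEs
gcgru_mae = np.array([2.48, 2.50, 2.47, 2.49, 2.51])   # hypothetical 5-run MAEs

for name, runs in [("STGCN", stgcn_mae), ("GCGRU", gcgru_mae)]:
    print(f"{name}: mean={runs.mean():.3f}, std={runs.std(ddof=1):.3f}, "
          f"sem={stats.sem(runs):.3f}")

# Student's t-test (equal variances assumed); pass equal_var=False for Welch's.
t, p = stats.ttest_ind(stgcn_mae, gcgru_mae)
print(f"t = {t:.3f}, p = {p:.2e}, significant at alpha=0.01: {p < 0.01}")
```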

@xumingxingsjtu

Thank you very much for your reply! It helps me a lot. If I have more questions, I will turn to you for more advice!
Best wishes!

@xumingxingsjtu

Hello, where can I get the PeMSD7(L) dataset? Only the PeMSD7(M) dataset is included in the code, and I want to test the code on a larger dataset.

@xumingxingsjtu

Furthermore, where can I get the map? I want to visualize the sensors on the map. Thank you!

@VeritasYin
Owner

@xumingxingsjtu
Hi, we will upload the full list of station IDs for PeMSD7(L) in the near future. For your reference, we have also visualized the distribution of station deployment in our paper (Fig. 3).

@xumingxingsjtu

xumingxingsjtu commented Dec 5, 2019 via email

@lixus7

lixus7 commented Jun 11, 2021

Excuse me, may I ask about the location information of PeMSD7? I have searched everywhere for it over the last half year without success. Could you please provide this data? Thank you!
