
The testing procedure related to C-LoRA. #4

Open
Programmergg opened this issue Aug 20, 2024 · 4 comments

@Programmergg

Excellent work! However, I have some concerns about the test results of the C-LoRA method. In the sample function of lora_ddim.py, you obtain the task index by executing task_id = (labels[0] // self.data_args.class_num).item() and then load the corresponding PEFT-derived parameters based on this task ID. Isn't this essentially an ensemble scenario? The only difference is that the model is fine-tuned with PEFT rather than full fine-tuning, which might explain why the C-LoRA results are so close to the ensemble's. By analogy with LoRA-based continual classification, we cannot access task identifiers directly at test time; instead, we need to infer pseudo-task identifiers by some means, or aggregate the outputs produced by all LoRA parameters.
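To make the concern concrete, here is a minimal sketch contrasting the two settings. The oracle version mirrors the task-index computation quoted above; the task-ID-free alternative (adapter scoring by a per-task loss) is an assumption on my part, not the repo's actual API:

```python
def oracle_task_id(labels, class_num):
    # Behaviour described in the issue: the true task index is derived
    # from the ground-truth label, so task identity is leaked at test time.
    return labels[0] // class_num

def pseudo_task_id(scores):
    # Hypothetical task-ID-free alternative: score each task's LoRA adapter
    # on the input (e.g. by reconstruction loss or prediction entropy) and
    # pick the best one. `scores` maps task index -> loss (lower is better).
    return min(scores, key=scores.get)

labels = [7, 7, 7]                                # class 7, class_num = 2
print(oracle_task_id(labels, 2))                  # task 3
print(pseudo_task_id({0: 1.9, 1: 0.4, 2: 2.3}))  # task 1
```

The point is that the first function requires ground-truth labels at inference, while the second only requires per-adapter scores computable from the input itself.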

@Programmergg
Author

Additionally, I have a small suggestion regarding the evaluation. Currently, you draw class labels from a normal distribution to estimate the expected FID on the current task, but this may not fully reflect the model's class-guided generative capability. Why not generate a fixed number of samples for each class directly and compute the expected FID over those?
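The suggested protocol can be sketched as follows; the helper name `balanced_labels` is mine, not from the repo:

```python
def balanced_labels(num_classes, samples_per_class):
    # Instead of drawing labels from a distribution, enumerate every class
    # an equal number of times, so each class contributes equally to FID.
    return [c for c in range(num_classes) for _ in range(samples_per_class)]

labels = balanced_labels(3, 2)
print(labels)  # [0, 0, 1, 1, 2, 2]
```

These labels would then be fed to the conditional sampler in place of the randomly drawn ones, and FID computed on the resulting balanced sample set.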

@linhaowei1
Owner

Thanks for your comment! You're right: the current C-LoRA testing code is incorrect. The suggestion for computing FID also makes sense, and we'd like to try it.
We will improve this work in September (adding more benchmarks and baselines), and we welcome PRs if you'd like to contribute.

@Programmergg
Author

Hello, I am very interested in this direction and thrilled to have encountered like-minded individuals. As such, I have been making some optimizations to your code and have started a new fork. If you are interested, feel free to take a look. I will continue making modifications to achieve a more general version. If you don't mind, I will submit a pull request (PR) once it's ready. Thank you again for your valuable contribution.

@linhaowei1
Owner

> Hello, I am very interested in this direction and thrilled to have encountered like-minded individuals. As such, I have been making some optimizations to your code and have started a new fork. If you are interested, feel free to take a look. I will continue making modifications to achieve a more general version. If you don't mind, I will submit a pull request (PR) once it's ready. Thank you again for your valuable contribution.

Cool! I will take a look. We're very excited that you're interested in our work.
