The testing procedure related to C-LoRA. #4
Additionally, I have a small suggestion regarding the testing process. Currently, you are using normally distributed class labels to evaluate the expected FID on the current task. However, this approach might not fully capture the class-guided generative capability. Therefore, why not generate a fixed number of samples for each class directly and calculate the expected FID based on these samples?
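A minimal sketch of the suggested evaluation protocol: instead of drawing class labels from a distribution, enumerate every class a fixed number of times so the FID estimate covers class-guided generation evenly. The names `class_num` and `samples_per_class` are illustrative, not identifiers from the repository.

```python
def balanced_labels(class_num, samples_per_class):
    """Return a label list containing every class exactly
    `samples_per_class` times, for a class-balanced FID evaluation.

    e.g. balanced_labels(3, 2) -> [0, 0, 1, 1, 2, 2]
    """
    return [c for c in range(class_num) for _ in range(samples_per_class)]
```

These labels would then be fed to the conditional sampler in place of the randomly drawn ones, so each class contributes equally to the FID statistics.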
Thanks for your comment! The code for C-LoRA is incorrect. The suggestion for testing FID makes sense, and we'd like to try it.
Hello, I am very interested in this direction and thrilled to have encountered like-minded individuals. As such, I have been making some optimizations to your code and have started a new fork. If you are interested, feel free to take a look. I will continue making modifications to achieve a more general version. If you don't mind, I will submit a pull request (PR) once it's ready. Thank you again for your valuable contribution.
Cool! I will take a look. We're very excited that you're interested in our work.
Excellent work! However, I have some concerns regarding the testing results of the C-LoRA method. In the `sample` function of `lora_ddim.py`, you obtain the specific task index by executing `task_id = (labels[0] // self.data_args.class_num).item()` and then load the corresponding PEFT-derived parameters based on this task ID. Isn't this essentially an ensemble scenario? The only difference is that the model is fine-tuned with PEFT rather than full fine-tuning, which might explain why the C-LoRA results are closer to those of an ensemble. Drawing an analogy to LoRA-based continual classification, task identifiers are not directly accessible during the testing phase; instead, we need to obtain pseudo-task identifiers through some means, or directly use the results generated by all LoRA parameters.
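A hedged sketch of the label-to-task mapping under discussion: with `class_num` classes per task and globally indexed labels, integer division recovers the task ID exactly, i.e. the sampler is handed the task identity for free, which is the ensemble-style oracle the comment describes. The first function mirrors the quoted snippet; `pseudo_task_id` is a purely illustrative task-agnostic alternative (it assumes some per-adapter confidence score exists, which the repository may not provide).

```python
def task_id_from_label(label, class_num):
    """Oracle mapping: with globally indexed labels and `class_num`
    classes per task, integer division reveals the true task ID."""
    return label // class_num

def pseudo_task_id(scores):
    """Task-agnostic alternative, by analogy with continual
    classification: score each task's LoRA adapter somehow (the
    scoring rule is an assumption here) and pick the best one.

    `scores` maps task_id -> a model-derived confidence value.
    """
    return max(scores, key=scores.get)
```

For example, with 5 classes per task, a sample labeled 7 is deterministically routed to task 1 by the oracle mapping; the pseudo-ID variant would instead have to infer that routing from the adapters' own outputs.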