Thank you for implementing the pytorch version of l2p!
While running the code on the CIFAR-100 dataset, I find that across all tasks, only the prompts with indices 0, 4, 5, 8, and 9 are ever selected.
However, if the same subset of prompts is selected for every task, those prompts get updated on each task — wouldn't this still cause catastrophic forgetting? Do you have an idea of why this is happening, and why L2P seems to suffer from much less forgetting?
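For context, the selection step under discussion is L2P's key-query matching: each prompt in the pool has a learnable key, and the top-k prompts whose keys are most similar to the frozen encoder's query feature are attached to the input. A minimal sketch of that step (function and variable names are illustrative, not the repo's actual code):

```python
import torch
import torch.nn.functional as F

def select_prompts(query, prompt_keys, top_k=5):
    """Top-k prompt selection by cosine similarity (L2P-style).

    query:       (B, D) image features, e.g. frozen ViT [CLS] embeddings
    prompt_keys: (M, D) one learnable key per prompt in the pool
    Returns the (B, top_k) indices of selected prompts and the
    full (B, M) similarity matrix.
    """
    q = F.normalize(query, dim=-1)        # unit-normalize queries
    k = F.normalize(prompt_keys, dim=-1)  # unit-normalize keys
    sim = q @ k.t()                       # (B, M) cosine similarities
    _, idx = sim.topk(top_k, dim=-1)      # (B, top_k) selected indices
    return idx, sim

# Toy usage: batch of 4 queries, pool of M=10 prompts, D=768 features
torch.manual_seed(0)
query = torch.randn(4, 768)
keys = torch.randn(10, 768)
idx, sim = select_prompts(query, keys)
print(idx.shape)  # shape of the selected-index tensor
```

If the keys for a few prompts win this similarity contest early in training, gradient updates keep reinforcing exactly those keys, which is one plausible mechanism for the collapse onto a fixed subset described above.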
Thank you!
In my experience as well, prompt selection collapses very strongly onto whatever is optimal for the first task.
I also don't think CIFAR-100 is a particularly good benchmark here.
No matter how the classes are shuffled across tasks, the accuracy barely changes, and the same prompts are still the only ones selected.
In addition, I tested every combination of random selection and fixed-order selection, and found no significant difference in performance.
If you have any additional comments, please feel free to let me know.