
Question regarding prompt selection #5

Open
HaoranChen opened this issue Jan 29, 2023 · 1 comment
@HaoranChen

Hi!

Thank you for implementing the PyTorch version of L2P!
While running the code on the CIFAR100 dataset, I find that, across all tasks, only the prompts with indices 0, 4, 5, 8, and 9 are ever selected.
However, if the same subset of prompts is selected for every task, those prompts get updated on each task, so wouldn't this still cause catastrophic forgetting? Do you have an idea of why this happens, and why L2P nonetheless seems to suffer much less forgetting?

Thank you!
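For readers unfamiliar with the mechanism being discussed: L2P keeps a pool of prompts, each paired with a learnable key, and selects the top-k prompts whose keys are most similar to a query feature from the frozen backbone. Below is a minimal NumPy sketch of that selection step; all sizes and variable names are illustrative, not the repository's actual code:

```python
import numpy as np

rng = np.random.default_rng(0)
M, k, d, L = 10, 5, 32, 5                     # pool size, prompts per input, key dim, prompt length
prompt_keys = rng.standard_normal((M, d))     # learnable keys in the real model
prompt_pool = rng.standard_normal((M, L, d))  # learnable prompt tokens

def select_prompts(query):
    """query: (d,) frozen-backbone feature, e.g. the [CLS] embedding."""
    # cosine similarity between the query and every prompt key
    sim = prompt_keys @ query / (
        np.linalg.norm(prompt_keys, axis=1) * np.linalg.norm(query) + 1e-12)
    top = np.argsort(-sim)[:k]                      # indices of the k closest keys
    return top, prompt_pool[top].reshape(k * L, d)  # prompts to prepend to the input

ids, prompts = select_prompts(rng.standard_normal(d))
print(prompts.shape)  # (25, 32): k*L prompt tokens of width d
```

Since only the selected keys and prompts receive gradients, a query distribution that barely changes between tasks keeps reinforcing the same subset, which is exactly the behavior described above.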

@JH-LEE-KR
Owner

Hi,
Thanks for your comment.

That's right: if the same prompts keep being updated across tasks, catastrophic forgetting will occur.

Please refer to this and this.

In my experience as well, prompt selection is strongly biased toward whatever is optimal for the first task.
Also, I don't think CIFAR100 is a very good benchmark for this setting: no matter how the classes are shuffled across tasks, the accuracy barely changes, and still only the same prompts are selected.
In addition, I tested every combination of random selection and fixed-order selection, and found no significant difference in performance.
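For what it's worth, the original L2P paper also proposes a frequency-based penalty during training to diversify which prompts get selected. The sketch below illustrates the idea only; the exact form of the penalty here (subtracting a normalized selection frequency from the similarity score) is my assumption, not the paper's or this repository's formula:

```python
import numpy as np

M, k = 10, 5                 # pool size, prompts selected per input
select_counts = np.zeros(M)  # running tally of how often each prompt wins

def select_with_penalty(sim):
    """sim: (M,) query-key similarities; down-weight over-used prompts."""
    freq = select_counts / max(select_counts.sum(), 1.0)  # normalized usage
    top = np.argsort(-(sim - freq))[:k]                   # penalized top-k
    select_counts[top] += 1
    return top

# With nearly tied similarities, plain top-k would pick the same 5 prompts
# forever; the penalty makes selection rotate through the whole pool.
sim = np.linspace(0.0, 0.01, M)
seen = set()
for _ in range(4):
    seen.update(select_with_penalty(sim).tolist())
print(len(seen))  # 10: every prompt in the pool gets used
```

Whether this actually helps accuracy is a separate question; as noted above, random and fixed-order selection performed about the same in my tests on CIFAR100.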

If you have any additional comments, please feel free to let me know.

Best,
Jaeho Lee.
