Nice try!
I have a concern about your implementation. Your current implementation has to (partially) store samples from previous tasks in memory, which could be problematic if the number of classes is large (for example, 10k classes).
Do you have a fair comparison against an alternative training procedure, i.e., shuffling the data of all the tasks together, training the network on the mixed data, and then testing on all 20 tasks? If RMA still shows a benefit over that baseline, then we can say RMA is effective.
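The baseline suggested above (often called joint or offline training) could be sketched roughly as follows; `train_fn` and `eval_fn` are hypothetical placeholders for whatever training and evaluation routines the repository actually uses:

```python
import random

def joint_training_baseline(task_datasets, train_fn, eval_fn, seed=0):
    """Train once on the shuffled union of all tasks' data, then
    evaluate on each task separately. This serves as a reference
    point that a continual-learning method like RMA should approach."""
    # Pool every sample from every task into one dataset.
    pooled = [sample for dataset in task_datasets for sample in dataset]
    # Shuffle so the model sees an i.i.d. mixture of all tasks.
    random.Random(seed).shuffle(pooled)
    # Single offline training run on the mixed data (placeholder).
    model = train_fn(pooled)
    # Per-task evaluation, e.g. accuracy on each of the 20 tasks.
    return [eval_fn(model, dataset) for dataset in task_datasets]
```

For 20 tasks this returns 20 per-task scores, which can be compared directly against RMA's per-task results after sequential training.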
Chunlei