Be able to cache embeddings and load them #946

For most users, being able to cache their embedded docs and/or provide a cached embedding file is probably overkill. However, there are many situations where it would be helpful to have the option. For example, experiments where you alter the query/document set for speedups (as I'm doing now), or where you test the effect of different prefixes/instructions over the same dataset.

I typically use `pyserini` for caching the index so that we can quickly search over it later, but that doesn't integrate nicely with `mteb`. I think it would be fairly straightforward to implement this: (1) take in a flag for whether to cache the embeddings, writing them to a file that corresponds to the dataset and model name, and (2) provide an option to read in a cached embedding file. I don't have bandwidth for this right now, but if anyone does, it would be an excellent addition.
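To make the proposal concrete, here is a rough sketch of how the two flags could surface when running an evaluation. Note that `cache_embeddings` and `embedding_cache_dir` are hypothetical argument names used only for illustration; neither exists in `mteb` today:

```python
import mteb
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")
evaluation = mteb.MTEB(tasks=["NFCorpus"])

# Hypothetical flags, not part of the current mteb API:
evaluation.run(
    model,
    cache_embeddings=True,             # (1) write embeddings to a file keyed by dataset + model name
    embedding_cache_dir="emb_cache/",  # (2) read embeddings back from the cache when present
)
```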
Comments
Hey @orionw, I would like to try this if possible!
Awesome @tenzu15! It would be great to be able to pass the two flags in. This would need to be changed in the RetrievalEvaluator class for now; if it's useful for other tasks, we can implement it there also. Also cc'ing @KennethEnevoldsen, who may have opinions on where this should be added and what the names should be. But feel free to start, @tenzu15. If you have any questions, feel free to make a draft PR and cc me!
@orionw, wouldn't it be better to implement a more general model wrapper for this, so that it works for all tasks?
There's some background discussion related to this topic in #354 (comment) as well.
+1 @KennethEnevoldsen, I think a wrapper is a great idea and even simpler to implement.
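A minimal sketch of what such a wrapper could look like, assuming the model exposes a sentence-transformers-style `encode(sentences, **kwargs)` method. The class name, the hash-based cache keys, and the on-disk layout are all illustrative, not an agreed design:

```python
import hashlib
from pathlib import Path

import numpy as np


class CachedEmbeddingWrapper:
    """Cache embeddings on disk, keyed by a hash of each input text."""

    def __init__(self, model, cache_dir: str):
        self.model = model
        self.cache_dir = Path(cache_dir)
        self.cache_dir.mkdir(parents=True, exist_ok=True)

    def _cache_path(self, text: str) -> Path:
        key = hashlib.sha256(text.encode("utf-8")).hexdigest()
        return self.cache_dir / f"{key}.npy"

    def encode(self, sentences, **kwargs):
        embeddings = [None] * len(sentences)
        misses, miss_idx = [], []
        # Load cached vectors where available; collect cache misses.
        for i, text in enumerate(sentences):
            path = self._cache_path(text)
            if path.exists():
                embeddings[i] = np.load(path)
            else:
                misses.append(text)
                miss_idx.append(i)
        # Encode only the misses, then persist them for next time.
        if misses:
            new_embeddings = self.model.encode(misses, **kwargs)
            for i, emb in zip(miss_idx, new_embeddings):
                np.save(self._cache_path(sentences[i]), emb)
                embeddings[i] = emb
        return np.stack(embeddings)
```

A wrapper like this covers any task that goes through `encode`; models that expose separate `encode_queries`/`encode_corpus` methods for retrieval would presumably need the same treatment for those methods.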