
how much GPU memory does the next QA need? #7

Open
vacuum-cup opened this issue May 12, 2022 · 6 comments

@vacuum-cup

No description provided.

@doc-doc
Owner

doc-doc commented May 12, 2022

Thanks for the question. Training the model needs about 24 GB with batch size 64 and 8 clips per video, whereas 8 GB is enough for inference. If you want to train with 16 clips, change the batch size to 32.
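The trade-off above follows from activation memory growing roughly linearly with batch size times clips per video, so halving one while doubling the other keeps the footprint about the same. A minimal sketch of that arithmetic (an illustration only, not code from this repo):

```python
# Rough scaling sketch: activation memory is approximately proportional to
# batch_size * clips_per_video, so 64 x 8 and 32 x 16 cost about the same.

def relative_activation_cost(batch_size, clips_per_video):
    """Unitless cost proportional to the number of clip samples per step."""
    return batch_size * clips_per_video

base = relative_activation_cost(64, 8)    # the ~24 GB training setting above
alt = relative_activation_cost(32, 16)    # 16 clips with batch size halved
print(base, alt)  # 512 512 -- same footprint, matching the advice
```

This is why the suggested change is batch size 32 for 16 clips rather than an arbitrary reduction.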

@vacuum-cup
Author

Thank you for your reply.

@vacuum-cup
Author

Could you offer the pre-trained BERT model? Thanks.

@LemonQC

LemonQC commented May 16, 2022

Well, I also need this.

@doc-doc
Owner

doc-doc commented May 16, 2022

Hi, please find the code and model for BERT fine-tuning/feature extraction. Please open a new issue with a proper title next time; otherwise I may be slow to find your questions.

@LemonQC

LemonQC commented May 16, 2022

OK, many thanks. Is the model provided in NExT-QA for question features the final model, or do I still need to train it? I just used the model to extract question features, and the results dropped significantly, by nearly 5 points.
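For context on what "extracting question features" typically produces: one common recipe is to run the question through BERT and mask-aware mean-pool the token states into a single fixed-size vector. The sketch below is an assumption about that pooling step only (hidden size 768 as in bert-base, random arrays standing in for real BERT outputs); the repo's exact extraction code may differ, which could explain a score gap.

```python
import numpy as np

def mean_pool(token_states, attention_mask):
    """Mean-pool per-token states into one feature vector.

    token_states: (seq_len, hidden) array of token embeddings.
    attention_mask: (seq_len,) array of 0/1; padding tokens are 0.
    """
    mask = attention_mask[:, None].astype(token_states.dtype)
    summed = (token_states * mask).sum(axis=0)   # sum over real tokens only
    count = mask.sum()
    return summed / np.maximum(count, 1.0)       # avoid division by zero

# Fake stand-ins for BERT outputs: 12 tokens, 9 real + 3 padding.
states = np.random.rand(12, 768).astype(np.float32)
mask = np.array([1] * 9 + [0] * 3)
feat = mean_pool(states, mask)
print(feat.shape)  # (768,)
```

If the repo instead uses the [CLS] token or a fine-tuned pooler, swapping pooling strategies alone can shift downstream accuracy by several points, so it is worth checking which one the provided features used.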
