Inference-time multi-GPU for long videos: out-of-memory issue #51
Comments
Hi @feiwu77777,

Thank you, that works for me!
Hi, thank you so much for sharing your interesting work! At present, I am trying to understand the online_demo code. In particular, the variable "window_frames" is appended to continuously as long as new frames are fed in. In this case, is it possible to discard previous frames and keep the number of frames in "window_frames" fixed?
Hi @pvtoan, this variable exists for one reason - to make it possible to visualize the output using the existing tools in this repository. You can safely discard the previous frames. |
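Since the previous frames can be safely discarded, a bounded buffer is enough. Below is a minimal sketch (not the repository's code) using `collections.deque` with `maxlen`, which drops the oldest frame automatically; the window length of 16 is an illustrative choice, not a value from CoTracker:

```python
from collections import deque

# Hypothetical sketch: keep only the most recent `window_len` frames
# instead of letting window_frames grow without bound.
window_len = 16
window_frames = deque(maxlen=window_len)  # oldest frames are dropped automatically

def on_new_frame(frame):
    window_frames.append(frame)
    # the predictor only needs the frames currently held in the deque
    return list(window_frames)

# feeding 100 dummy frames leaves only the last 16 in memory
for i in range(100):
    frames = on_new_frame(i)
assert len(frames) == 16 and frames[-1] == 99
```

This keeps memory usage constant regardless of video length, at the cost of losing the full history needed for visualization.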
Hi @nikitakaraevv, thank you for your clear answer. By the way, after I discard previous frames from "window_frames" (so the input to "CoTrackerOnlinePredictor" has a fixed size), the outputs "pred_tracks" and "pred_visibility" are still appended to continuously.
```python
pred_tracks = tracks * tracks.new_tensor(
    [(W - 1) / (self.interp_shape[1] - 1), (H - 1) / (self.interp_shape[0] - 1)]
)
```
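For context, the line above maps track coordinates from the model's internal resolution (`interp_shape`) back to the original frame size `(W, H)`. A small worked example, with illustrative numbers (the `interp_shape` and frame size here are assumptions, not values from the repository):

```python
# Rescaling model-space coordinates back to the original frame size.
interp_shape = (384, 512)   # assumed (H_model, W_model) internal resolution
W, H = 1280, 720            # assumed original frame size

sx = (W - 1) / (interp_shape[1] - 1)   # x scale factor
sy = (H - 1) / (interp_shape[0] - 1)   # y scale factor

# a point at the bottom-right corner in model space...
x_model, y_model = 511.0, 383.0
x_orig, y_orig = x_model * sx, y_model * sy

# ...lands on the bottom-right corner of the original frame
assert abs(x_orig - 1279.0) < 1e-9
assert abs(y_orig - 719.0) < 1e-9
```

Using `W - 1` and `H - 1` (rather than `W` and `H`) means the scaling aligns the corner pixel centers of the two resolutions.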
Hi @pvtoan, yes, this is done for the same reason, and we should probably fix it. You could update the corresponding lines in "cotracker.py" and make sure it doesn't break anything.
Hi @nikitakaraevv, I tried to update the two lines in "cotracker.py" as you showed, but I got the following errors. Could you please help me fix this issue?
Hi @nikitakaraevv, if you have time, please take a look at my issue. Thank you for your help!
Hi @pvtoan, this will take some time, I'll try to take a look over the weekend!
Hello, thanks a lot for sharing your work!
I used CoTracker on videos of up to 3 minutes to track a segmentation mask, and the GPU on Colab ran out of memory. I understand I can reduce the grid size to ease the memory issue, but I wanted to know if it's possible to run inference on multiple GPUs in parallel.
I'm not sure it would work: I would cut the video into three 1-minute segments and give one segment to each of three GPUs, for example. But I guess the tracking algorithm needs the whole video to work?
I saw in the Colab notebook that you can track manually selected points. For my problem, can I apply CoTracker to the first minute of the video, save the tracked points of the last frame, and start tracking those points from minute 1 to minute 2?
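The chunked approach described above can be sketched as follows. This is a hedged illustration of the handoff logic only: `run_tracker` is a stand-in for the real CoTracker call, not its API, and the dummy implementation simply carries the points through unchanged:

```python
# Sketch of chunked inference: track each segment, then reuse the points'
# positions in the segment's last frame as the query points for the next one.

def run_tracker(segment, queries):
    # placeholder for the real tracker: returns per-frame (x, y) positions
    # for each query point; here the points are carried through unchanged
    return [list(queries) for _ in segment]

def track_long_video(segments, initial_points):
    points = initial_points
    all_tracks = []
    for segment in segments:
        tracks = run_tracker(segment, points)
        all_tracks.extend(tracks)
        points = tracks[-1]  # hand off last-frame positions as the next queries
    return all_tracks

segments = [[0] * 30, [0] * 30, [0] * 30]   # three dummy 30-frame segments
tracks = track_long_video(segments, [(10.0, 20.0), (30.0, 40.0)])
assert len(tracks) == 90
assert tracks[-1] == [(10.0, 20.0), (30.0, 40.0)]
```

One caveat of this scheme: a point that is occluded at a segment boundary is handed off at an unreliable position, so the handoff trades some robustness for bounded memory.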