How to run the online tracker for every frame? #56
Hi @mizeller, yes, the current online model indeed only works with 4-frame windows. There will be a lag of 3 frames, and we don't have a solution for this yet.
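The 3-frame lag described above follows directly from the window size: a point queried at frame `t` can only be resolved once the whole window containing it has arrived. A minimal sketch of that arithmetic (the function name is hypothetical, not part of CoTracker):

```python
def availability_frame(query_frame, window_len=4):
    """Frame index at which an online tracker with a sliding window of
    `window_len` frames can first emit a track for a point queried at
    `query_frame` (simplified model of the windowing lag)."""
    # The last frame of the window containing `query_frame` must arrive
    # before the track can be computed, hence a lag of window_len - 1.
    return query_frame + (window_len - 1)
```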
@nikitakaraevv Thank you for the speedy reply! I guess I'll continue with the regular CoTrackerPredictor if there is no easy "hacky" solution in that case :-)
(BTW: the variables `W` and `H` are not initialised in the `is_first_step` branch:

```python
if is_first_step:
    self.model.init_video_online_processing()
    if queries is not None:
        B, N, D = queries.shape
        assert D == 3
        queries = queries.clone()
        queries[:, :, 1:] *= queries.new_tensor(
            [
                (self.interp_shape[1] - 1) / (W - 1),  # here: W is undefined
                (self.interp_shape[0] - 1) / (H - 1),  # and here: H is undefined
            ]
        )
        ...
```
)
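A self-contained sketch of the fix being reported: `W` and `H` can be taken from the shape of the incoming video chunk before the queries are rescaled. The function name and signature here are illustrative stand-ins, not the actual CoTracker API; only the rescaling arithmetic mirrors the snippet above.

```python
import torch

def rescale_queries(queries, video_chunk, interp_shape=(384, 512)):
    """Rescale (t, x, y) query points from the original video resolution
    to the model's interpolation resolution.

    queries:     (B, N, 3) tensor of (frame, x, y) points
    video_chunk: (B, T, C, H, W) tensor of frames
    """
    # Deriving H and W from the video tensor fixes the uninitialised
    # variables flagged in the snippet above.
    B, T, C, H, W = video_chunk.shape
    B_q, N, D = queries.shape
    assert D == 3
    queries = queries.clone()
    queries[:, :, 1:] *= queries.new_tensor(
        [
            (interp_shape[1] - 1) / (W - 1),  # rescale x to interp width
            (interp_shape[0] - 1) / (H - 1),  # rescale y to interp height
        ]
    )
    return queries
```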
Thanks! Fixed it: fac2798
In the online demo, the tracker runs on every 4th step because of this if-clause in online_demo.py:
Is it possible to run the tracker on every frame somehow?
I tried adjusting `model.window_len`, which ultimately determines `model.step`, but could not figure out a working solution.
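For context, a minimal stand-in for the stepping pattern the question refers to: the demo only invokes the tracker every `model.step` frames, so per-frame output would need a different windowing scheme. `run_online` and `process_window` are hypothetical names used for illustration, not the actual demo code.

```python
def run_online(frames, step=4, process_window=None):
    """Collect frames and only trigger processing every `step` frames,
    mirroring the if-clause pattern in online_demo.py."""
    processed_at = []  # frame indices where the model actually ran
    window = []
    for i, frame in enumerate(frames):
        window.append(frame)
        # The tracker only steps once `step` new frames have arrived,
        # which is why intermediate frames produce no fresh tracks.
        if i % step == 0 and i != 0:
            if process_window is not None:
                process_window(window[-2 * step:])  # model sees a 2*step window
            processed_at.append(i)
    return processed_at
```

With the default `step=4`, the model runs at frames 4, 8, 12, ... which is exactly the every-4th-frame behaviour described above.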