
Processing gets stuck after ~1000 images at BA #29

Open
YaroslavShchekaturov opened this issue Jan 4, 2024 · 1 comment

Comments

@YaroslavShchekaturov

Hi!

Thank you very much for the wonderful job you did! I've tried to run a reconstruction:
python ./main.py --config superpoint+lightglue --images images --outs out --strategy sequential --overlap 2 --force
Everything is fine when I run it on a dataset with <1000 images. However, when I run it on a dataset with 2000 images, bundle adjustment (BA) takes significantly more time, especially after 1000 images.
I tried to lower the quality ("quality": Quality.HIGH -> "quality": Quality.LOW) but got:
2024-01-04 16:08:21 | [INFO ] Matching features...
2024-01-04 16:08:21 | [INFO ]
3%|█▉ | 114/4545 [00:37<23:58, 3.08it/s]
Traceback (most recent call last):
  File "C:\Users\yaroslav\Desktop\deep-image-matching\main.py", line 405, in <module>
    main()
  File "C:\Users\yaroslav\Desktop\deep-image-matching\main.py", line 238, in main
    match_path = img_matching.match_pairs(feature_path)
  File "C:\Users\yaroslav\Desktop\deep-image-matching\src\deep_image_matching\image_matching.py", line 311, in match_pairs
    matches = self._matcher.match(
  File "C:\Users\yaroslav\Desktop\deep-image-matching\src\deep_image_matching\matchers\matcher_base.py", line 279, in match
    self._matches = self._match_by_tile(
  File "C:\Users\yaroslav\Desktop\deep-image-matching\src\deep_image_matching\matchers\matcher_base.py", line 380, in _match_by_tile
    correspondences = self._match_pairs(feats0_tile, feats1_tile)
  File "C:\Users\yaroslav\anaconda3\envs\slam\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "C:\Users\yaroslav\Desktop\deep-image-matching\src\deep_image_matching\matchers\lightglue.py", line 71, in _match_pairs
    match_res = self._matcher({"image0": feats0, "image1": feats1})
  File "C:\Users\yaroslav\anaconda3\envs\slam\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "C:\Users\yaroslav\anaconda3\envs\slam\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\Users\yaroslav\Desktop\deep-image-matching\src\deep_image_matching\thirdparty\LightGlue\lightglue\lightglue.py", line 463, in forward
    return self._forward(data)
  File "C:\Users\yaroslav\Desktop\deep-image-matching\src\deep_image_matching\thirdparty\LightGlue\lightglue\lightglue.py", line 529, in _forward
    if self.check_if_stop(token0[..., :m, :], token1[..., :n, :], i, m + n):
  File "C:\Users\yaroslav\Desktop\deep-image-matching\src\deep_image_matching\thirdparty\LightGlue\lightglue\lightglue.py", line 613, in check_if_stop
    confidences = torch.cat([confidences0, confidences1], -1)
RuntimeError: Sizes of tensors must match except in dimension 1. Expected size 1 but got size 0 for tensor number 1 in the list.

@franioli
Collaborator

franioli commented Jan 5, 2024

Hi, honestly, we have never tried such a big dataset... but it's of course on our to-do list for generalizing the software.
From what you have written, I believe there are two different kinds of problems:

  1. The bundle adjustment is done entirely by COLMAP. Therefore, it probably takes significantly more time just because the number of features may be huge. I suggest reducing the maximum number of features per tile by manually modifying the configuration dictionary for your selected extractor (see the sketch after this list). If you have such a big dataset, you probably also have good overlap and coverage between the images, and therefore SP+LG extracts a really high number of corresponding features that are not even needed for the reconstruction (a high number of features may help in challenging situations such as wide baselines). Anyway, if you have reached this point, the matching seems to work, and that's already great news for us.
  2. The error you have reported is encountered during the matching phase and therefore before the bundle adjustment (so this is probably a bug in deep_image_matching); a possible workaround sketch also follows this list.
    I'll have a look and try to replicate the problem as soon as I have a bit of spare time, but in the meantime may I ask you for some more information on your dataset and processing parameters?
    What is the size of the images? Which tile size are you using, and how many tiles?
    And you are using an up-to-date version of the main branch of the repo, aren't you?
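
As for point 1, here is a minimal sketch of what I mean by editing the configuration dictionary. The key names (such as "max_keypoints") and the default values here are assumptions and may differ in your version of deep-image-matching, so please check the default confs of your extractor:

# Hypothetical custom configuration lowering the per-tile feature budget.
# The key names and defaults are assumptions; check the confs shipped
# with deep-image-matching for the extractor you selected.
custom_config = {
    "extractor": {
        "name": "superpoint",
        "max_keypoints": 2048,  # e.g. half of a typical default of 4096
    },
    "matcher": {
        "name": "lightglue",
    },
}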
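
As for point 2, while I try to replicate it, a hedged guess at a workaround (this is NOT a confirmed fix, just a sketch of where I would start): the failing torch.cat suggests that one tile of the pair ends up with zero keypoints, so skipping such tile pairs before calling the matcher might avoid the crash.

def tile_pair_is_matchable(feats0_tile: dict, feats1_tile: dict) -> bool:
    """Return False when either tile has no keypoints, so the caller can
    skip the pair instead of feeding empty tensors to LightGlue.
    Assumes "keypoints" is an (N, 2) array; adjust if yours is batched."""
    return (feats0_tile["keypoints"].shape[0] > 0
            and feats1_tile["keypoints"].shape[0] > 0)

# Inside _match_by_tile one could then guard the call, e.g.:
#     if not tile_pair_is_matchable(feats0_tile, feats1_tile):
#         continue  # no correspondences possible for this tile pair
#     correspondences = self._match_pairs(feats0_tile, feats1_tile)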
