torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 1024.00 MiB (GPU 0; 7.78 GiB total capacity; 4.98 GiB already allocated; 943.19 MiB free; 5.06 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF #137

Open

ami012003 opened this issue May 13, 2024 · 0 comments

(tracking) workstation@workstation-HP-Z4-G4-Workstation:~/Track-Anything$ python app.py --device cuda:0
Initializing BaseSegmenter to cuda:0
Hyperparameters read from the model weights: C^k=64, C^v=512, C^h=64
Single object mode: False
load pretrained SPyNet...
Loads checkpoint by http backend from path: https://download.openmmlab.com/mmediting/restorers/basicvsr/spynet_20210409-c6c1bd09.pth
Running on local URL: http://0.0.0.0:12212

To create a public link, set share=True in launch().
Traceback (most recent call last):
  File "/home/workstation/anaconda3/envs/tracking/lib/python3.10/site-packages/gradio/queueing.py", line 527, in process_events
    response = await route_utils.call_process_api(
  File "/home/workstation/anaconda3/envs/tracking/lib/python3.10/site-packages/gradio/route_utils.py", line 270, in call_process_api
    output = await app.get_blocks().process_api(
  File "/home/workstation/anaconda3/envs/tracking/lib/python3.10/site-packages/gradio/blocks.py", line 1887, in process_api
    result = await self.call_function(
  File "/home/workstation/anaconda3/envs/tracking/lib/python3.10/site-packages/gradio/blocks.py", line 1472, in call_function
    prediction = await anyio.to_thread.run_sync(
  File "/home/workstation/anaconda3/envs/tracking/lib/python3.10/site-packages/anyio/to_thread.py", line 56, in run_sync
    return await get_async_backend().run_sync_in_worker_thread(
  File "/home/workstation/anaconda3/envs/tracking/lib/python3.10/site-packages/anyio/_backends/_asyncio.py", line 2144, in run_sync_in_worker_thread
    return await future
  File "/home/workstation/anaconda3/envs/tracking/lib/python3.10/site-packages/anyio/_backends/_asyncio.py", line 851, in run
    result = context.run(func, *args)
  File "/home/workstation/anaconda3/envs/tracking/lib/python3.10/site-packages/gradio/utils.py", line 808, in wrapper
    response = f(*args, **kwargs)
  File "/home/workstation/Track-Anything/app.py", line 116, in get_frames_from_video
    model.samcontroler.sam_controler.set_image(video_state["origin_images"][0])
  File "/home/workstation/anaconda3/envs/tracking/lib/python3.10/site-packages/torch/autograd/grad_mode.py", line 27, in decorate_context
    return func(*args, **kwargs)
  File "/home/workstation/Track-Anything/tools/base_segmenter.py", line 38, in set_image
    self.predictor.set_image(image)
  File "/home/workstation/anaconda3/envs/tracking/lib/python3.10/site-packages/segment_anything/predictor.py", line 60, in set_image
    self.set_torch_image(input_image_torch, image.shape[:2])
  File "/home/workstation/anaconda3/envs/tracking/lib/python3.10/site-packages/torch/autograd/grad_mode.py", line 27, in decorate_context
    return func(*args, **kwargs)
  File "/home/workstation/anaconda3/envs/tracking/lib/python3.10/site-packages/segment_anything/predictor.py", line 89, in set_torch_image
    self.features = self.model.image_encoder(input_image)
  File "/home/workstation/anaconda3/envs/tracking/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1194, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/workstation/anaconda3/envs/tracking/lib/python3.10/site-packages/segment_anything/modeling/image_encoder.py", line 112, in forward
    x = blk(x)
  File "/home/workstation/anaconda3/envs/tracking/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1194, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/workstation/anaconda3/envs/tracking/lib/python3.10/site-packages/segment_anything/modeling/image_encoder.py", line 174, in forward
    x = self.attn(x)
  File "/home/workstation/anaconda3/envs/tracking/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1194, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/workstation/anaconda3/envs/tracking/lib/python3.10/site-packages/segment_anything/modeling/image_encoder.py", line 234, in forward
    attn = add_decomposed_rel_pos(attn, q, self.rel_pos_h, self.rel_pos_w, (H, W), (H, W))
  File "/home/workstation/anaconda3/envs/tracking/lib/python3.10/site-packages/segment_anything/modeling/image_encoder.py", line 358, in add_decomposed_rel_pos
    attn.view(B, q_h, q_w, k_h, k_w) + rel_h[:, :, :, :, None] + rel_w[:, :, :, None, :]
torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 1024.00 MiB (GPU 0; 7.78 GiB total capacity; 4.98 GiB already allocated; 943.19 MiB free; 5.06 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF

I keep getting this out-of-memory error every time the app loads a video. Please help.
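
For reference, the traceback shows the allocation failing inside segment_anything's image encoder (add_decomposed_rel_pos), i.e., while SAM embeds the first video frame, and the error text itself suggests trying the max_split_size_mb allocator option. A minimal sketch of applying that suggestion, assuming the documented PYTORCH_CUDA_ALLOC_CONF behavior (128 is an illustrative value, not a tuned one; the variable has to be in the environment before the process first initializes CUDA, e.g. exported in the shell before python app.py --device cuda:0, or set at the very top of app.py):

import os

# Ask PyTorch's caching allocator to cap block splits at 128 MiB to reduce
# fragmentation; this must take effect before the first CUDA allocation.
os.environ.setdefault("PYTORCH_CUDA_ALLOC_CONF", "max_split_size_mb:128")

import torch  # import torch only after the allocator config is in place

Note this only helps when fragmentation is the problem (reserved >> allocated); in the error above reserved (5.06 GiB) and allocated (4.98 GiB) are nearly equal, so the model may simply not fit alongside the desktop session on an 8 GiB card.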

(tracking) workstation@workstation-HP-Z4-G4-Workstation:~/Track-Anything$ nvidia-smi
Mon May 13 13:32:05 2024
+---------------------------------------------------------------------------------------+
| NVIDIA-SMI 535.171.04             Driver Version: 535.171.04   CUDA Version: 12.2     |
|-----------------------------------------+----------------------+----------------------+
| GPU  Name                 Persistence-M | Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp   Perf          Pwr:Usage/Cap |         Memory-Usage | GPU-Util  Compute M. |
|                                         |                      |               MIG M. |
|=========================================+======================+======================|
|   0  NVIDIA GeForce RTX 3070        Off | 00000000:21:00.0  On |                  N/A |
|  0%   51C    P8              28W / 220W |    296MiB /  8192MiB |      0%      Default |
|                                         |                      |                  N/A |
+-----------------------------------------+----------------------+----------------------+

+---------------------------------------------------------------------------------------+
| Processes:                                                                             |
|  GPU   GI   CI        PID   Type   Process name                            GPU Memory |
|        ID   ID                                                             Usage      |
|=======================================================================================|
|    0   N/A  N/A      1192      G   /usr/lib/xorg/Xorg                           35MiB |
|    0   N/A  N/A      2122      G   /usr/lib/xorg/Xorg                          114MiB |
|    0   N/A  N/A      2249      G   /usr/bin/gnome-shell                         75MiB |
|    0   N/A  N/A      3663      G   ...seed-version=20240510-050143.234000       39MiB |
|    0   N/A  N/A      7019      G   ...gnu/webkit2gtk-4.0/WebKitWebProcess        8MiB |
+---------------------------------------------------------------------------------------+
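
The nvidia-smi dump shows only ~296 MiB of the 8 GiB card in use by the desktop before launch, so nearly all of it should be free when app.py starts; the OOM hits only after SAM's encoder has already claimed ~5 GiB and then asks for another 1 GiB. A quick way to confirm the headroom from the same conda env, using standard PyTorch calls (nothing Track-Anything-specific):

import torch

# Free/total device memory in bytes, as reported by the CUDA driver.
free, total = torch.cuda.mem_get_info(0)
print(f"GPU 0: {free / 2**20:.0f} MiB free of {total / 2**20:.0f} MiB")

# What this process has allocated/reserved via PyTorch so far
# (both should be near zero in a fresh interpreter).
print(torch.cuda.memory_allocated(0), torch.cuda.memory_reserved(0))

If the allocator setting does not help, a smaller SAM backbone (vit_b or vit_l instead of vit_h) cuts the encoder's memory footprint substantially; whether and how app.py exposes that choice (e.g. a model-type argument in its argparse options) should be checked against the Track-Anything repo rather than assumed.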
