
Fine-tuning CogAgent on 8x A800 (80G) still fails with: CUDA out of memory #450

Open
1 of 2 tasks
GuoXu-booo opened this issue Apr 10, 2024 · 4 comments
@GuoXu-booo

System Info / 系統信息

torch 2.0.1+cu118
torchaudio 2.0.2+cu118
torchvision 0.15.2+cu118
cuda 11.8

Who can help? / 谁可以帮助到您?

1

Information / 问题信息

  • The official example scripts / 官方的示例脚本
  • My own modified scripts / 我自己修改的脚本和任务

Reproduction / 复现过程

Ran the official script: bash finetune_cogagent_lora.sh. The model weights used are the SAT checkpoint.

Expected behavior / 期待表现

The official documentation says that 4x A100 is enough for fine-tuning.

@zRzRzRzRzRzRzR zRzRzRzRzRzRzR self-assigned this Apr 15, 2024
@zRzRzRzRzRzRzR
Collaborator

The number of cards doesn't matter, because this is data parallelism. You only need to make sure a single card has enough memory to hold one copy of the model under the config file settings (bs=1).
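For anyone trying to verify this, a rough single-card estimate can be made by counting parameters and optimizer state. The helper below is a hypothetical sketch (it assumes fp16/bf16 weights and Adam, and ignores activations, which grow with sequence length and image resolution and can still push a bs=1 step over 80 GB):

```python
import torch

def estimate_memory_gb(model: torch.nn.Module) -> float:
    """Very rough lower bound on per-GPU memory for LoRA fine-tuning."""
    frozen = sum(p.numel() for p in model.parameters() if not p.requires_grad)
    trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
    weight_bytes = 2 * (frozen + trainable)   # fp16/bf16 weights
    grad_bytes = 2 * trainable                # gradients for the LoRA params
    adam_bytes = 8 * trainable                # fp32 moments kept by Adam
    return (weight_bytes + grad_bytes + adam_bytes) / 1024**3
```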

@zhanghaobucunzai

I ran into this too. Besides setting train_micro_batch_size_per_gpu to 1, what other ways are there to reduce memory usage?
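For context, the knobs that usually help beyond the micro batch size are gradient accumulation, ZeRO sharding, optimizer offload, and (if the script exposes it) activation checkpointing. Below is a hedged sketch of such options, written as a Python dict; the keys are standard DeepSpeed config entries, but the values and how they interact with the settings in finetune_cogagent_lora.sh are assumptions:

```python
# Sketch only: merge these into the ds_config the finetune script already
# passes to DeepSpeed; the concrete values here are illustrative.
ds_config = {
    "train_micro_batch_size_per_gpu": 1,
    "gradient_accumulation_steps": 8,           # keep the effective batch size up
    "fp16": {"enabled": True},
    "gradient_clipping": 1.0,
    "zero_optimization": {
        "stage": 2,                             # shard optimizer states and gradients
        "offload_optimizer": {"device": "cpu"}, # trades step speed for GPU memory
    },
}
```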

@WeiminLee

Don't use DeepSpeed's distributed launcher. Run the finetune code directly and set device_map="auto" in from_pretrained(). This effectively switches to single-process, multi-GPU fine-tuning, spreading the model across all GPUs automatically.
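If you take that route, a minimal sketch with the Hugging Face checkpoint could look like the example below. The model id and dtype are assumptions for illustration, this applies to the HF weights rather than the SAT checkpoint used in this issue, and device_map="auto" requires the accelerate package:

```python
import torch
from transformers import AutoModelForCausalLM

# Shard the model across all visible GPUs instead of launching with DeepSpeed.
model = AutoModelForCausalLM.from_pretrained(
    "THUDM/cogagent-chat-hf",   # assumed HF checkpoint, not the SAT weights
    torch_dtype=torch.bfloat16,
    trust_remote_code=True,     # CogAgent ships custom modeling code
    device_map="auto",          # accelerate places layers across GPUs
)
```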

@WeiminLee

Or switch to DeepSpeed's PP mode (pipeline parallelism). But pipeline parallelism requires you to split the model into stages/layers yourself.
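As a rough illustration, DeepSpeed's pipeline parallelism wants the network as a flat list of layers that it can cut into stages. The sketch below uses placeholder layers and an assumed config path, since decomposing CogAgent's vision encoder and language blocks into such a list is exactly the manual work mentioned above:

```python
import deepspeed
import torch.nn as nn
from deepspeed.pipe import PipelineModule

deepspeed.init_distributed()                          # pipeline needs torch.distributed

layers = [nn.Linear(4096, 4096) for _ in range(32)]   # placeholders for real blocks
model = PipelineModule(layers=layers, num_stages=8)   # e.g. one stage per A800

engine, _, _, _ = deepspeed.initialize(
    model=model,
    model_parameters=[p for p in model.parameters() if p.requires_grad],
    config="ds_config.json",                          # assumed config path
)
# Training then goes through engine.train_batch(data_iter) instead of a manual loop.
```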
