System Info / 系統信息

torch 2.0.1+cu118
torchaudio 2.0.2+cu118
torchvision 0.15.2+cu118
CUDA 11.8

Reproduction / 复现过程

Ran the official script: `bash finetune_cogagent_lora.sh`. The model weights used are the SAT checkpoint.

Expected behavior / 期待表现

The official docs state that 4× A100 GPUs are sufficient for fine-tuning.
The number of cards doesn't matter, because this is data parallelism: you only need to make sure a single GPU has enough memory to hold one full copy of the model under the config file's settings (bs=1).
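The point above can be sanity-checked with a back-of-the-envelope estimate. A minimal sketch (the ~18B parameter count for CogAgent and the 2-bytes-per-parameter half-precision assumption are stated assumptions, not measurements):

```python
# Rough rule of thumb: can one GPU hold a full copy of the model at bs=1?
# Under data parallelism every GPU holds the entire model, so this is the
# floor on per-card memory before activations and optimizer states.

def estimate_weights_gb(n_params: float, bytes_per_param: int = 2) -> float:
    """Memory for the model weights alone (fp16/bf16 = 2 bytes per parameter)."""
    return n_params * bytes_per_param / 1024**3

# Assumption: CogAgent has roughly 18B parameters.
weights_gb = estimate_weights_gb(18e9)
print(f"weights alone: ~{weights_gb:.0f} GB per GPU")
```

Activations, LoRA gradients, and optimizer states come on top of the weights, so even an 80 GB A100 per model copy is tight.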
I ran into this too. Besides setting train_micro_batch_size_per_gpu to 1, what other ways are there to reduce memory usage?
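Beyond shrinking the micro-batch, DeepSpeed's ZeRO sharding and CPU offload are the usual levers. A sketch of a DeepSpeed config (the keys are standard DeepSpeed options; the specific values, and whether the SAT finetune script passes this exact file through, are assumptions):

```json
{
  "train_micro_batch_size_per_gpu": 1,
  "gradient_accumulation_steps": 8,
  "bf16": { "enabled": true },
  "zero_optimization": {
    "stage": 3,
    "offload_optimizer": { "device": "cpu" },
    "offload_param": { "device": "cpu" }
  },
  "gradient_clipping": 1.0
}
```

ZeRO stage 3 shards parameters, gradients, and optimizer states across the data-parallel GPUs instead of replicating them, trading communication for memory.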
Don't use DeepSpeed's distributed launcher; run the finetune code directly and set `device_map="auto"` in `from_pretrained()`. That effectively turns fine-tuning into a single-process, multi-GPU setup that automatically spreads the model across all GPUs.
Alternatively, switch to DeepSpeed's PP (pipeline parallelism) mode, but for pipeline parallelism you have to split the model into stages yourself.
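Conceptually, `device_map="auto"` partitions the model's layers across the available GPUs by memory. A toy sketch of that idea (illustrative only — the real logic lives in the `accelerate` library, and whether SAT checkpoints support `device_map` at all is an open assumption; the function below is hypothetical):

```python
# Toy version of "auto" device placement: greedily assign consecutive layers
# to GPUs, spilling to the next GPU when the current one's budget is full.

def auto_device_map(layer_sizes_gb, gpu_budget_gb):
    """Map layer index -> GPU index, filling GPUs in order."""
    device_map, gpu, used = {}, 0, 0.0
    for i, size in enumerate(layer_sizes_gb):
        if used + size > gpu_budget_gb and used > 0:
            gpu, used = gpu + 1, 0.0  # current GPU is full; move to the next
        device_map[i] = gpu
        used += size
    return device_map

# e.g. eight 9 GB layers with ~40 GB usable per GPU:
print(auto_device_map([9.0] * 8, 40.0))
```

With Hugging Face-format weights (not SAT checkpoints) the real call would look roughly like `AutoModelForCausalLM.from_pretrained(path, device_map="auto")`; note this gives model parallelism for fitting the model, not the throughput of data parallelism.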