Index-1.9B-Chat: LoRA fine-tuning / merged-model loading raises ValueError: Can't find 'adapter_config.json' at './output/Index-1.9B-Chat-lora/checkpoint-600' #175
Comments
Is the path wrong, or did training simply never reach step 600?
I hit the same problem loading the model after LoRA fine-tuning Qwen2. I changed the transformers and peft versions, but there was still no adapter_config.json. Frustrating.
I have the same problem.
I've solved it. Change this part of the official example to the following:

```python
# Load the tokenizer
tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)
# Load the model directly from the fine-tuning output directory
model = AutoModelForCausalLM.from_pretrained(
    lora_path, device_map="auto", torch_dtype=torch.bfloat16, trust_remote_code=True
).eval()
```

It seems that newer versions of transformers and peft merge the LoRA weights directly during fine-tuning, so there is no need to load the LoRA checkpoint separately.
My understanding is that the LoRA weights are merged in directly; the base model parameters are unchanged, only the loading method differs. After fine-tuning, loading this way did give me a measurable performance improvement, so it seems to work. Other GitHub threads describe similar fixes. Qwen2 was probably trained with a newer toolchain, which is why downgrading the library versions doesn't really help.
Got it, thanks for the explanation!
No problem!
Running the fine-tuning script produces the error:
ValueError: Can't find 'adapter_config.json' at './output/Index-1.9B-Chat-lora/checkpoint-600'
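A quick way to answer the "wrong path or training never reached step 600?" question above is to inspect the checkpoint directory before loading. This is a hedged sketch (the path is taken from the error message; the helper name is made up): an `adapter_config.json` means the checkpoint holds only LoRA adapter weights, while its absence in an otherwise populated directory suggests merged full weights that should be loaded directly.

```python
import os

def classify_checkpoint(ckpt_dir: str) -> str:
    """Return 'missing', 'adapter', or 'merged' for a checkpoint directory."""
    if not os.path.isdir(ckpt_dir):
        return "missing"   # training may not have reached this step
    if "adapter_config.json" in os.listdir(ckpt_dir):
        return "adapter"   # load base model, then attach with peft
    return "merged"        # load directly with AutoModelForCausalLM

print(classify_checkpoint("./output/Index-1.9B-Chat-lora/checkpoint-600"))
```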