I have migrated CogVLM-chat-hf to MindSpore and found that the model works well when the input includes both an image and text. However, with a text-only query (no image) the performance degrades noticeably; I suspect it may be related to this issue.
@antigone660 In text-only mode, the prompt template is different. Did you use the following prompt for text-only query?
text_only_template = "A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions. USER: {} ASSISTANT:"
In my case, text-only mode works well regardless of this issue.
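As a minimal sketch of the point above: the text-only template is filled with the user's query before being passed to the tokenizer. The helper function name and the example query below are my own illustration, not part of the CogVLM API.

```python
# The text-only chat template quoted above.
text_only_template = (
    "A chat between a curious user and an artificial intelligence assistant. "
    "The assistant gives helpful, detailed, and polite answers to the user's "
    "questions. USER: {} ASSISTANT:"
)

def build_text_only_prompt(query: str) -> str:
    """Insert the user's query into CogVLM's text-only prompt template.

    Hypothetical helper for illustration; in practice the resulting string
    is what you would tokenize and feed to the model for a text-only turn.
    """
    return text_only_template.format(query)

prompt = build_text_only_prompt("What is the capital of France?")
print(prompt)
```

If a different (e.g. image-mode) template is used for a text-only query, the model sees a prompt distribution it was not tuned on, which would explain the degraded quality reported above.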
While CogVLM is trained, the LM weights are frozen.
From my observation, however, the LM weights of CogVLM differ from Vicuna's:
Vicuna: https://huggingface.co/lmsys/vicuna-7b-v1.5/tree/main
CogVLM: cogvlm-chat-v1.1 (both from HF or SAT)
Can I ask why that is, or what the proper source of the language model weights is?
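For reference, here is roughly how I compared the checkpoints: given two state dicts (the Vicuna LM and the language-model submodule extracted from CogVLM), compute the maximum absolute difference per shared tensor name. The function below is a generic sketch; the key names in real checkpoints differ between the two repos, so in practice you would first remap CogVLM's LM keys onto Vicuna's naming before calling it.

```python
import torch

def max_weight_diffs(sd_a: dict, sd_b: dict) -> dict:
    """Max absolute elementwise difference per tensor name shared by two
    state dicts. Tensors whose shapes do not match are skipped.

    Sketch for checkpoint comparison; assumes both state dicts use the
    same key naming (remap keys first if they do not).
    """
    diffs = {}
    for name in sorted(sd_a.keys() & sd_b.keys()):
        a, b = sd_a[name], sd_b[name]
        if a.shape == b.shape:
            diffs[name] = (a.float() - b.float()).abs().max().item()
    return diffs

# Toy usage with dummy tensors (real use: pass model.state_dict() outputs).
diffs = max_weight_diffs(
    {"layer.weight": torch.tensor([1.0, 2.0])},
    {"layer.weight": torch.tensor([1.0, 2.5])},
)
print(diffs)
```

If the LM weights were truly frozen copies of Vicuna, every entry would be (near) zero; nonzero maxima are what I observed between vicuna-7b-v1.5 and cogvlm-chat-v1.1.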