
Loading error #501

Open
1 of 2 tasks
itbithubman opened this issue Jun 29, 2024 · 0 comments
System Info / 系統信息

conda python=3.11

Who can help? / 谁可以帮助到您?

No response

Information / 问题信息

  • The official example scripts / 官方的示例脚本
  • My own modified scripts / 我自己修改的脚本和任务

Reproduction / 复现过程

```
INFO:sat:[RANK 0] > initializing model parallel with size 2
[Language processor version]: chat
[rank1]: Traceback (most recent call last):
[rank1]:   File "/home/tt/CogVLM/basic_demo/cli_demo_sat.py", line 161, in <module>
[rank1]:     main()
[rank1]:   File "/home/tt/CogVLM/basic_demo/cli_demo_sat.py", line 56, in main
[rank1]:     tokenizer = llama2_tokenizer(args.local_tokenizer, signal_type=language_processor_version)
[rank1]:                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank1]:   File "/home/tt/CogVLM/utils/utils/language.py", line 37, in llama2_tokenizer
[rank1]:     tokenizer = LlamaTokenizer.from_pretrained(tokenizer_path)
[rank1]:                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank1]:   File "/home/tt/anaconda3/envs/coga/lib/python3.11/site-packages/transformers/tokenization_utils_base.py", line 2163, in from_pretrained
[rank1]:     return cls._from_pretrained(
[rank1]:            ^^^^^^^^^^^^^^^^^^^^^
[rank1]:   File "/home/tt/anaconda3/envs/coga/lib/python3.11/site-packages/transformers/tokenization_utils_base.py", line 2397, in _from_pretrained
[rank1]:     tokenizer = cls(*init_inputs, **init_kwargs)
[rank1]:                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank1]:   File "/home/tt/anaconda3/envs/coga/lib/python3.11/site-packages/transformers/models/llama/tokenization_llama.py", line 171, in __init__
[rank1]:     self.sp_model = self.get_spm_processor(kwargs.pop("from_slow", False))
[rank1]:                     ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank1]:   File "/home/tt/anaconda3/envs/coga/lib/python3.11/site-packages/transformers/models/llama/tokenization_llama.py", line 204, in get_spm_processor
[rank1]:     model = model_pb2.ModelProto.FromString(sp_model)
[rank1]:             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank1]: google.protobuf.message.DecodeError: Error parsing message
```
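The failure happens while parsing the SentencePiece protobuf, which usually means the `tokenizer.model` file on disk is truncated or corrupt rather than anything in the CogVLM code itself. A minimal way to confirm this outside of `transformers`, assuming `sentencepiece` is installed; the path below is a hypothetical placeholder, not from the report:

```python
# Diagnostic sketch: load tokenizer.model directly with sentencepiece.
# If the file is broken, this fails the same way transformers does above.
import os

import sentencepiece as spm

TOKENIZER_MODEL = "/path/to/vicuna-7b-v1.5/tokenizer.model"  # hypothetical path

# A real LLaMA/Vicuna tokenizer.model is roughly 500 KB of binary protobuf;
# a suspiciously tiny file points to a bad or partial download.
print("file size:", os.path.getsize(TOKENIZER_MODEL), "bytes")

sp = spm.SentencePieceProcessor()
sp.Load(TOKENIZER_MODEL)  # raises if the protobuf is truncated/corrupt
print("vocab size:", sp.GetPieceSize())
```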

Expected behavior / 期待表现

I downloaded vicuna-7b-v1.5 from modelscope.
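Since the files came from modelscope, one common cause of exactly this `DecodeError` is a download that left a Git LFS pointer stub (a small text file) in place of the real binary `tokenizer.model`. A quick check, again a sketch with a hypothetical path:

```python
# Sketch: a Git LFS pointer stub begins with a fixed text header, while a
# genuine SentencePiece model is binary protobuf data.
TOKENIZER_MODEL = "/path/to/vicuna-7b-v1.5/tokenizer.model"  # hypothetical path

with open(TOKENIZER_MODEL, "rb") as f:
    head = f.read(64)

if head.startswith(b"version https://git-lfs.github.com/spec/v1"):
    print("This is a Git LFS pointer stub, not the tokenizer -- re-download the file.")
else:
    print("File starts with binary data; compare its size/checksum against the source repo.")
```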
