System Info / 系統信息
conda environment, Python 3.11
Who can help? / 谁可以帮助到您?
No response
Information / 问题信息
Reproduction / 复现过程
INFO:sat:[RANK 0] > initializing model parallel with size 2
[Language processor version]: chat
[rank1]: Traceback (most recent call last):
[rank1]: File "/home/tt/CogVLM/basic_demo/cli_demo_sat.py", line 161, in
[rank1]: main()
[rank1]: File "/home/tt/CogVLM/basic_demo/cli_demo_sat.py", line 56, in main
[rank1]: tokenizer = llama2_tokenizer(args.local_tokenizer, signal_type=language_processor_version)
[rank1]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank1]: File "/home/tt/CogVLM/utils/utils/language.py", line 37, in llama2_tokenizer
[rank1]: tokenizer = LlamaTokenizer.from_pretrained(tokenizer_path)
[rank1]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank1]: File "/home/tt/anaconda3/envs/coga/lib/python3.11/site-packages/transformers/tokenization_utils_base.py", line 2163, in from_pretrained
[rank1]: return cls._from_pretrained(
[rank1]: ^^^^^^^^^^^^^^^^^^^^^
[rank1]: File "/home/tt/anaconda3/envs/coga/lib/python3.11/site-packages/transformers/tokenization_utils_base.py", line 2397, in _from_pretrained
[rank1]: tokenizer = cls(*init_inputs, **init_kwargs)
[rank1]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank1]: File "/home/tt/anaconda3/envs/coga/lib/python3.11/site-packages/transformers/models/llama/tokenization_llama.py", line 171, in init
[rank1]: self.sp_model = self.get_spm_processor(kwargs.pop("from_slow", False))
[rank1]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank1]: File "/home/tt/anaconda3/envs/coga/lib/python3.11/site-packages/transformers/models/llama/tokenization_llama.py", line 204, in get_spm_processor
[rank1]: model = model_pb2.ModelProto.FromString(sp_model)
[rank1]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank1]: google.protobuf.message.DecodeError: Error parsing message
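For reference, the failure can be reproduced outside the demo by loading the tokenizer directly. Below is a minimal sketch (the local tokenizer path is an assumption) that also checks whether tokenizer.model parses as a SentencePiece protobuf; a DecodeError at this point usually means the file is truncated or is not a real SentencePiece model (e.g. a Git LFS pointer or an HTML page from an incomplete download).

```python
# Minimal sketch to isolate the failing tokenizer load.
# "/path/to/vicuna-7b-v1.5" is a hypothetical local directory; replace it
# with the path actually passed via --local_tokenizer.
import os

import sentencepiece as spm
from transformers import LlamaTokenizer

tokenizer_path = "/path/to/vicuna-7b-v1.5"
spm_file = os.path.join(tokenizer_path, "tokenizer.model")

# A tokenizer.model of only a few hundred bytes is a strong sign of an
# LFS pointer file rather than the real SentencePiece model.
print("tokenizer.model size (bytes):", os.path.getsize(spm_file))

# Try to parse the SentencePiece model directly; this raises the same
# protobuf error if the file is corrupted or incomplete.
sp = spm.SentencePieceProcessor()
sp.Load(spm_file)

# If the direct load succeeds, the Hugging Face tokenizer should also load
# without the protobuf DecodeError seen in the traceback above.
tokenizer = LlamaTokenizer.from_pretrained(tokenizer_path)
print("vocab size:", tokenizer.vocab_size)
```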
Expected behavior / 期待表现
I downloaded vicuna-7b-v1.5 from ModelScope.
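If the direct load sketched above fails, the download itself is most likely incomplete. A hedged sketch of re-fetching the files via ModelScope follows; the model id "AI-ModelScope/vicuna-7b-v1.5" is an assumption and should be replaced with the repository actually used for the original download. The returned directory can then be passed as --local_tokenizer.

```python
# Hedged sketch: re-download the model files from ModelScope.
# The model id below is an assumption, not confirmed by the original report.
from modelscope import snapshot_download

local_dir = snapshot_download("AI-ModelScope/vicuna-7b-v1.5")
print("files downloaded to:", local_dir)  # use this path for --local_tokenizer
```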