Reminder
I have read the README and searched the existing issues.
System Info
I find that I cannot run the generate() function for inference using the converted model. Can you help me?
Here is the error:
[screenshot of the error]
Reproduction
from transformers import AutoTokenizer, LlamaForCausalLM
# Note: LlamaMoDForCausalLM is presumably provided by the MoD package; its import is not shown above
model = LlamaMoDForCausalLM.from_pretrained("LLaMA-Factory/saves/llama2-7b-mod/full/sft_full_0")
tokenizer = AutoTokenizer.from_pretrained("LLaMA-Factory/saves/llama2-7b-mod/full/sft_full_0")
prompt = "Hey, are you conscious? Can you talk to me?"
inputs = tokenizer(prompt, return_tensors="pt")
generate_ids = model.generate(inputs.input_ids)
tokenizer.batch_decode(generate_ids, skip_special_tokens=True, clean_up_tokenization_spaces=False)[0]
Expected behavior
Run the inference code using the trained MoD model (SFT based on llama2_mod).
Others
No response
I'll try to reproduce it in the next few days; please send your transformers and MoD package versions.
Also, what model did you start with? You said llama2_mod, but I can't find it on HF.
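To gather the versions requested above, something like the following should work (it uses only the standard library; the MoD package's exact distribution name is an assumption here and may differ on your install):

```python
from importlib.metadata import version, PackageNotFoundError

# Print installed versions of the packages relevant to this issue.
# "MoD" is a guess at the distribution name of the MoD package; adjust if needed.
for pkg in ("transformers", "torch", "MoD"):
    try:
        print(pkg, version(pkg))
    except PackageNotFoundError:
        print(pkg, "not installed")
```

Paste the printed lines into the issue so the environment can be matched when reproducing.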