
When making streaming requests with the openai library, stream_options={"include_usage": True} is not handled; it is needed to count tokens for streamed responses #3998

Open
sasicDHH opened this issue May 30, 2024 · 0 comments
Labels
pending This problem is yet to be addressed

Comments

@sasicDHH

Reminder

  • I have read the README and searched the existing issues.

Reproduction

from openai import OpenAI

# client configured against the local OpenAI-compatible API server;
# model_name and msg_clean are defined elsewhere
client = OpenAI()

response = client.chat.completions.create(
    model=model_name,
    messages=[
        # {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": msg_clean},
    ],
    temperature=0.8,
    top_p=0.8,
    max_tokens=2048,
    stream=True,
    stream_options={"include_usage": True},
)

Reference:
I suggest handling stream_options in _create_stream_chat_completion_chunk, under the create_stream_chat_completion_response function around line 144 of llamafactory/api/chat.py.
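A minimal sketch of what the suggested handling could look like. This is not the actual LLaMA-Factory code; the function and token counts below are hypothetical. It only illustrates the OpenAI streaming convention: when the request carries stream_options={"include_usage": True}, the server emits one extra final chunk with an empty `choices` list and a populated `usage` object before the stream ends.

```python
import json


def stream_chunks(token_texts, include_usage=False):
    """Yield JSON-encoded chat-completion chunks for a streamed response.

    Hypothetical sketch: `token_texts` stands in for the model's generated
    tokens, and `prompt_tokens` is assumed to have been counted from the
    prompt by the caller.
    """
    prompt_tokens = 10  # assumed: computed when the prompt was tokenized
    completion_tokens = 0
    for text in token_texts:
        completion_tokens += 1
        yield json.dumps({"choices": [{"delta": {"content": text}}]})
    if include_usage:
        # Final usage-only chunk, per the OpenAI streaming convention:
        # empty choices, usage filled in.
        yield json.dumps({
            "choices": [],
            "usage": {
                "prompt_tokens": prompt_tokens,
                "completion_tokens": completion_tokens,
                "total_tokens": prompt_tokens + completion_tokens,
            },
        })
```

With include_usage=False the generator behaves exactly as before, so existing clients are unaffected; only clients that opt in receive the extra chunk.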

Expected behavior

When stream=True, the server should honor stream_options={"include_usage": True} so that the client can obtain token usage for the streamed response.
For details, please refer to:
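On the client side, the expected behavior can be sketched as follows. The helper below is hypothetical; it mimics iterating over the openai client's stream, where every intermediate chunk has content in `.choices` and `.usage` is None, and only the final chunk carries `.usage`:

```python
def consume_stream(chunks):
    """Collect streamed text plus the usage stats from the final chunk.

    `chunks` is any iterable of chunk-like objects shaped like those the
    openai library yields when stream=True and
    stream_options={"include_usage": True}.
    """
    parts = []
    usage = None
    for chunk in chunks:
        if chunk.choices:
            delta = chunk.choices[0].delta.content
            if delta:
                parts.append(delta)
        if chunk.usage is not None:
            # Final chunk: empty choices, usage populated.
            usage = chunk.usage
    return "".join(parts), usage
```

If the server never emits the usage chunk, `usage` stays None, which is exactly the problem this issue reports.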

System Info

No response

Others

No response

@hiyouga hiyouga added the pending This problem is yet to be addressed label Jun 3, 2024