Is it possible to run CogVLM through the actual OpenAI API client (`client.chat.completions.create`)? If not, how do we run CogVLM on an A100 server using the OpenAI demo provided in this repo?
Our team looked at `openai_api_request.py`, but it isn't clear to us how to keep CogVLM running indefinitely on the A100 server so that we can talk to it through the `base_url` endpoint we would create for that server. If someone could explain this, it would be much appreciated.
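For context, once the repo's demo server is running it exposes an OpenAI-compatible HTTP route, so any client that can POST a `chat.completions`-style payload can talk to it. The sketch below uses only the standard library; the host, port, model name, and image URL are assumptions to be replaced with your server's actual values, not values taken from this repo:

```python
import json
import urllib.request
from typing import Optional

# Assumed address of the demo server on the A100 box; replace host/port as needed.
BASE_URL = "http://localhost:8000/v1"

def build_chat_request(model: str, prompt: str,
                       image_url: Optional[str] = None) -> dict:
    """Build an OpenAI-style chat.completions payload; the image (if any) is
    passed as an image_url content part alongside the text."""
    content = [{"type": "text", "text": prompt}]
    if image_url is not None:
        content.append({"type": "image_url", "image_url": {"url": image_url}})
    return {"model": model, "messages": [{"role": "user", "content": content}]}

def send_chat_request(payload: dict, base_url: str = BASE_URL) -> dict:
    """POST the payload to the /chat/completions route and parse the JSON reply."""
    req = urllib.request.Request(
        f"{base_url}/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read().decode("utf-8"))

# Example payload only; send_chat_request(payload) would need a live server.
payload = build_chat_request("cogvlm-chat-17b", "Describe this image.",
                             image_url="https://example.com/cat.png")
```

Equivalently, the official `openai>=1.0` client can be pointed at the same server with `OpenAI(base_url="http://<server>:<port>/v1", api_key="EMPTY")` and then used via `client.chat.completions.create(...)`. As for keeping the server up indefinitely, the usual approach is to launch the demo server inside `tmux`/`screen`, under `nohup`, or as a systemd unit, so it survives the SSH session ending.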