Replies: 1 comment 1 reply
- LlamaTale runs independently of your LLM backend, so the answer is "yes", but how you do it depends on how your LLM is served. How are you running your LLM today? Is it koboldcpp, oobabooga, or something else?
1 reply
- I would like to know whether it is possible to use LlamaTale with OpenBLAS, and how to do it.
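As the comment above notes, LlamaTale only talks to an HTTP endpoint, so OpenBLAS is enabled on the backend side: for koboldcpp or llama.cpp it is a build-time option (check the backend's README for the current flags), and LlamaTale itself needs no changes beyond pointing at the backend's URL. Below is a minimal Python sketch that probes a KoboldAI-compatible backend to confirm it is reachable before starting LlamaTale. The port (5001) and the `/api/v1/model` path follow koboldcpp's defaults, and `backend_is_up` is a hypothetical helper for illustration, not part of LlamaTale.

```python
# Minimal sketch: check that the LLM backend LlamaTale will point at is live.
# Assumes a KoboldAI-compatible API (as served by koboldcpp) on port 5001.
import json
import urllib.request

BACKEND_URL = "http://localhost:5001"  # assumed koboldcpp default port

def backend_is_up(url: str) -> bool:
    """Probe the backend's /api/v1/model endpoint and report the loaded model."""
    try:
        with urllib.request.urlopen(f"{url}/api/v1/model", timeout=5) as resp:
            info = json.load(resp)
            print("Backend reports model:", info.get("result", "<unknown>"))
            return True
    except OSError as exc:  # urllib's URLError subclasses OSError
        print("Backend not reachable:", exc)
        return False

if __name__ == "__main__":
    backend_is_up(BACKEND_URL)
```

If the probe succeeds, whether the backend was built with OpenBLAS, CUDA, or anything else is invisible to LlamaTale; the build option only affects how fast the backend processes prompts.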