Hey Marcin, loving the concept of this application as it pretty much ticks every box I can think of. You could give Tony's JARVIS a run for his money with continued TLC.
Now that I've buttered you up, here's the burning question I've been wrestling with for a couple of hours: could you provide some concise steps for connecting py-gpt to a local Ollama instance? I wouldn't usually ask, but I've been reading the docs, exploring the code (no expert, but happy to learn) and the options in the application, and I couldn't get it to work, presuming I've interpreted the docs correctly and it is possible.
So far I've installed Ollama and pulled a model for testing.
I'm running Ollama as a server and can see the model being served via `ollama ps`.
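As a quick sanity check, the Ollama REST API can also be queried directly from Python. This is just a sketch, assuming the default Ollama port (11434) and the requests package:

```python
# Sanity check: ask the local Ollama server which models it has available.
# Assumes the default port (11434); adjust if OLLAMA_HOST was changed.
import requests

resp = requests.get("http://localhost:11434/api/tags", timeout=5)
resp.raise_for_status()
models = [m["name"] for m in resp.json().get("models", [])]
print("Models served by Ollama:", models)
```

That returns the pulled models, so the server side looks healthy.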
Then, in py-gpt:

1. Ran py-gpt.
2. Opened Config -> Models.
3. Created a new model: specified the alphanumeric Model ID, provided a Name, entered `langchain` as the mode, and set the model provider to `ollama`.
When I save the changes and select Langchain mode with the model I created, I get this error:
```
Exception: Serializable.__init__() takes 1 positional argument but 2 were given
Type: TypeError
Message: Serializable.__init__() takes 1 positional argument but 2 were given
Traceback:
  File "core\chain\__init__.py", line 60, in call
  File "core\chain\chat.py", line 69, in send
  File "core\chain\chat.py", line 62, in send
  File "provider\llms\ollama.py", line 46, in chat
```
Any advice would be greatly appreciated.