
API key required even with the local Ollama LLM #70

Closed
8HoLoN opened this issue Jun 2, 2024 · 3 comments


8HoLoN commented Jun 2, 2024

Hi. Using the local Ollama LLM should not require the OpenAI or Azure OpenAI LLM.
But currently the error `Error: OpenAI or Azure OpenAI API key or Token Provider not found` is thrown when running the Ollama example.

adhityan (Collaborator) commented Jun 2, 2024

This is actually not a bug. You need two things to run a RAG stack:

  1. LLM
  2. Embedding model

In your case, you are using Ollama as the LLM, and by default (unless you specify otherwise) the library uses OpenAI's LargeEmbedding as its embedding model.

The error you see comes from the embedding model, which cannot reach OpenAI without an API key. Right now there is no support for local embedding models via Ollama (only Ollama-based LLMs are supported), but there is a plan to add it soon. There are other embedding models to choose from, though; refer to the documentation.
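The split above can be sketched with two toy components (the interfaces and names below are hypothetical stand-ins, not embedJs's actual types): swapping in a local LLM does not change which embedding model runs, so the embedder can still fail on a missing key.

```typescript
// Hypothetical sketch of a RAG stack as two independent pieces.
// These interfaces are illustrative only, NOT embedJs's real API.
interface LLM { complete(prompt: string): string; }
interface EmbeddingModel { embed(text: string): number[]; }

// A local LLM (stand-in for Ollama) that needs no credentials.
const localLlm: LLM = { complete: (p) => `echo: ${p}` };

// A stand-in for the default OpenAI embedder: without a key it fails,
// reproducing the error message reported in this issue.
const openAiEmbedder: EmbeddingModel = {
    embed: () => {
        throw new Error('OpenAI or Azure OpenAI API key or Token Provider not found');
    },
};
```

Even though `localLlm` works offline, any ingestion step that calls `openAiEmbedder.embed()` still throws, which matches the behavior described above.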

8HoLoN (Author) commented Jun 2, 2024

The embedding model is set via `.setEmbeddingModel(new AdaEmbeddings())`.

Is there an LLM/embedding model pair that does not require an API key at all?

Edit: OK, I read through all the embedding models and there is no embedding model that does not require an API key.

Thanks. Let me know when embedJs provides a full RAG stack that is fully local, with no API key needed :)

adhityan (Collaborator) commented Jun 3, 2024

Will do.

The roadmap currently includes adding support for Ollama-based local embedding models, but it is not expected to be available before the start of Q3. If you are interested, you can contribute a PR with a non-API-key embedding model and I will prioritize merging it.

adhityan closed this as completed Jun 3, 2024