
[Feature]: please support google LLM #601

Open
shixiao11 opened this issue Jan 16, 2024 · 2 comments

Comments

@shixiao11

Is your feature request related to a problem? Please describe.

Hello team,
My team is working on a Gen AI project, and all of our projects are based on Google Cloud. Would it be possible to integrate GPTCache with Google's LLMs (Gemini or text-bison)?

Describe the solution you'd like.

  1. Support Google LLMs
  2. Support the Google embedding function
  3. Support Pinecone as the vector database

Describe an alternate solution.

No response

Anything else? (Additional Context)

No response

@varunmehra5

+1

@SimFG
Collaborator

SimFG commented Jun 21, 2024

The number of large models is growing explosively, so I don't think it is meaningful to keep adding model-specific integrations. Instead, you can try the generic get and set API in GPTCache; demo code: https://github.com/zilliztech/GPTCache/blob/main/examples/adapter/api.py


3 participants