added support for custom openai api #43

Open · wants to merge 2 commits into base: main

Conversation

sebaxzero

Adds support for a custom OpenAI-compatible API using the default llama_index.llms.openai module (no requirements change).

backend:

  • created a "CUSTOM" model mapping with "gpt-4" as its name so llama_index uses the 8k context window.
  • added elif branches for ChatModel.CUSTOM to chat, related queries, and the model validator.
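The two backend bullets above can be sketched roughly as follows. This is a hedged illustration, not the PR's actual code: the enum members, the `MODEL_MAPPINGS` name, and the `resolve_model` helper are assumptions for the example.

```python
from enum import Enum

# Hedged sketch of the backend change described above; the real enum
# members and mapping name in the project may differ.
class ChatModel(str, Enum):
    GPT_4O = "gpt-4o"
    CUSTOM = "custom"

# CUSTOM maps to the "gpt-4" model name so llama_index applies
# gpt-4's 8k-token context window to the local model.
MODEL_MAPPINGS = {
    ChatModel.GPT_4O: "gpt-4o",
    ChatModel.CUSTOM: "gpt-4",
}

def resolve_model(model: ChatModel) -> str:
    # The elif branch described in the PR, roughly:
    if model == ChatModel.GPT_4O:
        return MODEL_MAPPINGS[model]
    elif model == ChatModel.CUSTOM:
        return MODEL_MAPPINGS[model]
    raise ValueError(f"unsupported model: {model}")
```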

frontend:

  • added Custom API as selectable model.

docker:

environment variables:

CUSTOM_HOST=your-custom-host
CUSTOM_API_KEY=your-custom-api-key

.env example for lm-studio server:

CUSTOM_HOST=http://localhost:1234/v1
CUSTOM_API_KEY=local
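A minimal sketch of how the backend might read these variables; the fallback defaults below are taken from the lm-studio example and are assumptions, not the PR's exact code.

```python
import os

# Hedged sketch: read the two new variables, falling back to the
# lm-studio example values above (the fallbacks are assumptions).
custom_host = os.environ.get("CUSTOM_HOST", "http://localhost:1234/v1")
custom_api_key = os.environ.get("CUSTOM_API_KEY", "local")

# The backend then hands these to llama_index's OpenAI wrapper, roughly:
#   llm = OpenAI(model="gpt-4", api_base=custom_host, api_key=custom_api_key)
```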

An API key is not needed in most cases.

why use CUSTOM instead of the OPENAI naming?

llama_index reads OPENAI_API_BASE, while the openai SDK reads OPENAI_BASE_URL.
Adding a new variable makes sure it does not conflict with cloud OpenAI usage.
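To illustrate the point: because the local endpoint lives in its own variable, the cloud OpenAI settings are never clobbered. The `base_url_for` helper below is hypothetical, just showing the separation.

```python
import os
from typing import Optional

# Illustrative only: a separate CUSTOM_HOST means cloud settings
# (OPENAI_API_KEY, OPENAI_API_BASE) stay untouched, so cloud and
# local usage can coexist without juggling one shared variable.
os.environ["CUSTOM_HOST"] = "http://localhost:1234/v1"
os.environ["OPENAI_API_KEY"] = "sk-your-cloud-key"  # placeholder value

def base_url_for(model: str) -> Optional[str]:
    """Hypothetical helper: pick the endpoint for the selected model."""
    if model == "custom":
        return os.environ["CUSTOM_HOST"]
    return None  # None -> llama_index falls back to the cloud default
```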

tested with lm-studio local server using:

  • Llama 3 8B (also some fine-tuned versions like Hermes 2 Theta)
  • Mistral v0.3 7B
  • Phi-3 mini

preview:

[screenshot: Captura de pantalla 2024-06-02 220940]


vercel bot commented Jun 3, 2024

@sebaxzero is attempting to deploy a commit to the rashadphil's projects Team on Vercel.

A member of the Team first needs to authorize it.
