v2.0.6
- feat: Added support for watsonx.ai
- test: Added simple test for watsonx.ai
- test: Added simple test for cloudflare ai
- doc: Added APIKEYS and USAGE references for watsonx.ai
samestrin committed Jun 25, 2024
1 parent fdc97f1 commit b583582
Showing 9 changed files with 359 additions and 7 deletions.
10 changes: 7 additions & 3 deletions README.md
@@ -2,15 +2,15 @@

[![Star on GitHub](https://img.shields.io/github/stars/samestrin/llm-interface?style=social)](https://github.com/samestrin/llm-interface/stargazers) [![Fork on GitHub](https://img.shields.io/github/forks/samestrin/llm-interface?style=social)](https://github.com/samestrin/llm-interface/network/members) [![Watch on GitHub](https://img.shields.io/github/watchers/samestrin/llm-interface?style=social)](https://github.com/samestrin/llm-interface/watchers)

- ![Version 2.0.5](https://img.shields.io/badge/Version-2.0.5-blue) [![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](https://opensource.org/licenses/MIT) [![Built with Node.js](https://img.shields.io/badge/Built%20with-Node.js-green)](https://nodejs.org/)
+ ![Version 2.0.6](https://img.shields.io/badge/Version-2.0.6-blue) [![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](https://opensource.org/licenses/MIT) [![Built with Node.js](https://img.shields.io/badge/Built%20with-Node.js-green)](https://nodejs.org/)

## Introduction

- The LLM Interface project is a versatile and comprehensive wrapper designed to interact with multiple Large Language Model (LLM) APIs. It simplifies integrating various LLM providers, including **OpenAI, AI21 Studio, Anthropic, Cloudflare AI, Cohere, Fireworks AI, Google Gemini, Goose AI, Groq, Hugging Face, Mistral AI, Perplexity, Reka AI, and LLaMA.cpp**, into your applications. This project aims to provide a simplified and unified interface for sending messages and receiving responses from different LLM services, making it easier for developers to work with multiple LLMs without worrying about the specific intricacies of each API.
+ The LLM Interface project is a versatile and comprehensive wrapper designed to interact with multiple Large Language Model (LLM) APIs. It simplifies integrating various LLM providers, including **OpenAI, AI21 Studio, Anthropic, Cloudflare AI, Cohere, Fireworks AI, Google Gemini, Goose AI, Groq, Hugging Face, Mistral AI, Perplexity, Reka AI, watsonx.ai, and LLaMA.cpp**, into your applications. This project aims to provide a simplified and unified interface for sending messages and receiving responses from different LLM services, making it easier for developers to work with multiple LLMs without worrying about the specific intricacies of each API.

## Features

- - **Unified Interface**: `LLMInterfaceSendMessage` is a single, consistent interface to interact with fourteen different LLM APIs.
+ - **Unified Interface**: `LLMInterfaceSendMessage` is a single, consistent interface to interact with fifteen different LLM APIs.
- **Dynamic Module Loading**: Automatically loads and manages the different LLM interfaces.
- **Error Handling**: Robust error handling mechanisms to ensure reliable API interactions.
- **Extensible**: Easily extendable to support additional LLM providers as needed.
@@ -21,6 +21,10 @@ The LLM Interface project is a versatile and comprehensive wrapper designed to i

## Updates

**v2.0.6**

- **New LLM Providers**: Added support for watsonx.ai

**v2.0.3**

- **New LLM Providers Functions**: `LLMInterface.getAllModelNames()` and `LLMInterface.getModelConfigValue(provider, configValueKey)`.
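
  For reference, a minimal sketch of calling these helpers; this assumes `LLMInterface` is exported alongside `LLMInterfaceSendMessage`, and the `'url'` config key is illustrative:

  ```javascript
  const { LLMInterface } = require('llm-interface');

  // List the model names for every supported provider
  console.log(LLMInterface.getAllModelNames());

  // Look up a single provider config value (key name illustrative)
  console.log(LLMInterface.getModelConfigValue('watsonxai', 'url'));
  ```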
10 changes: 10 additions & 0 deletions docs/APIKEYS.md
@@ -80,6 +80,16 @@ The Reka AI API requires a credit card, but currently comes with a $5 credit.

- https://platform.reka.ai/apikeys

## watsonx.ai

The watsonx.ai API is a commercial service, but it currently offers a free tier that does not require a credit card.

- https://cloud.ibm.com/iam/apikeys

You will also need to set up a space and get its space ID:

- https://dataplatform.cloud.ibm.com/ml-runtime/spaces/create-space
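
Once you have both values, pass them to `llm-interface` as a two-element key array, API key first. A minimal sketch, assuming the environment variable names used in this project's examples:

```javascript
const { LLMInterfaceSendMessage } = require('llm-interface');

// [apiKey, spaceId] — the watsonx.ai key is an array carrying both values
LLMInterfaceSendMessage(
  'watsonxai',
  [process.env.WATSONXAI_API_KEY, process.env.WATSONXAI_SPACE_ID],
  'Explain the importance of low latency LLMs.',
).then((response) => console.log(response.results));
```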

## LLaMA.cpp

Instead of an API key, you'll need a URL to use LLaMA.cpp. This is provided by LLaMA.cpp HTTP Server.
70 changes: 68 additions & 2 deletions docs/USAGE.md
@@ -9,6 +9,8 @@ The following guide was created to help you use `llm-interface` in your project.
- [OpenAI: Simple Text Prompt, Default Model (Example 1)](#openai-simple-text-prompt-default-model-example-1)
- [Gemini: Simple Text Prompt, Default Model, Cached (Example 2)](#gemini-simple-text-prompt-default-model-cached-example-2)
- [Groq: Message Object Prompt, Default Model, Attempt JSON Repair (Example 3)](#groq-message-object-prompt-default-model-attempt-json-repair-example-3)
- [Cloudflare AI: Simple Prompt, Passing Account ID (Example 4)](#cloudflare-ai-simple-prompt-passing-account-id-example-4)
- [watsonx.ai: Simple Prompt, Passing Space ID (Example 5)](#watsonxai-simple-prompt-passing-space-id-example-5)
3. [The Message Object](#the-message-object)
- [Structure of a Message Object](#structure-of-a-message-object)
4. [Accessing LLMInterface Variables](#accessing-llminterface-variables)
@@ -65,7 +67,33 @@ or the ES6 `import` syntax:
import { LLMInterfaceSendMessage } from 'llm-interface';
```

- Then call call the `LLMInterfaceSendMessage` function. Here are a few examples:
+ Then call the `LLMInterfaceSendMessage` function. It expects the following arguments:

- `provider` (string) - A valid LLM provider. The following are valid choices:
  - ai21
  - anthropic
  - cloudflareai
  - cohere
  - fireworksai
  - gemini
  - gooseai
  - groq
  - huggingface
  - llamacpp
  - mistralai
  - openai
  - perplexity
  - rekaai
  - watsonxai
- `key` (string or array) - A valid API key or, if the provider requires a secondary value (such as Cloudflare AI's Account ID or watsonx.ai's Space ID), an array containing both values. The following are valid:
  - `apiKey`
  - `[apiKey, accountId]`
  - `[apiKey, spaceId]`
- `message` (string or object) - A simple string containing a single prompt, or a complex object holding an entire conversation.
- `options` (object) - An optional object containing any provider-specific options you would like to pass through; also useful for specifying a `max_tokens` or `model` value.
- `interfaceOptions` (object) - An optional object containing llm-interface-specific options such as `cacheTimeoutSeconds` and `retryAttempts` (see the sketch after this list).
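
Putting the pieces together, here is a sketch of a call that passes all five arguments; the option values shown are illustrative, not defaults:

```javascript
const { LLMInterfaceSendMessage } = require('llm-interface');

LLMInterfaceSendMessage(
  'openai',                                         // provider
  process.env.OPENAI_API_KEY,                       // key
  'Explain the importance of low latency LLMs.',    // message
  { max_tokens: 150 },                              // options (illustrative)
  { cacheTimeoutSeconds: 86400, retryAttempts: 3 }, // interfaceOptions (illustrative)
)
  .then((response) => console.log(response.results))
  .catch((error) => console.error(error));
```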

Here are a few examples:

### OpenAI: Simple Text Prompt, Default Model (Example 1)

@@ -137,9 +165,47 @@ LLMInterfaceSendMessage(
});
```

### Cloudflare AI: Simple Prompt, Passing Account ID (Example 4)

Ask Cloudflare AI for a response using a message string with the default model.

```javascript
LLMInterfaceSendMessage(
  'cloudflareai',
  [process.env.CLOUDFLARE_API_KEY, process.env.CLOUDFLARE_ACCOUNT_ID],
  'Explain the importance of low latency LLMs.',
)
  .then((response) => {
    console.log(response.results);
  })
  .catch((error) => {
    console.error(error);
  });
```

### watsonx.ai: Simple Prompt, Passing Space ID (Example 5)

Ask watsonx.ai for a response using a message string with the default model.

```javascript
LLMInterfaceSendMessage(
  'watsonxai',
  [process.env.WATSONXAI_API_KEY, process.env.WATSONXAI_SPACE_ID],
  'Explain the importance of low latency LLMs.',
)
  .then((response) => {
    console.log(response.results);
  })
  .catch((error) => {
    console.error(error);
  });
```

## The Message Object

- The message object is a critical component when interacting with the various LLM APIs through the `llm-interface` package. It contains the data that will be sent to the LLM for processing. Below is a detailed explanation of a valid message object.
+ The message object is a critical component when interacting with the various LLM APIs through the `llm-interface` package. It contains the data that will be sent to the LLM for processing and allows for complex conversations. Below is a detailed explanation of a valid message object.

### Structure of a Message Object
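
Shown below is a minimal sketch of a message object, using the OpenAI-style shape that the wrappers in this package consume (a `model` plus a `messages` array of `role`/`content` pairs); the values are illustrative:

```javascript
const message = {
  model: 'gpt-3.5-turbo', // optional; falls back to the provider default
  messages: [
    { role: 'system', content: 'You are a helpful assistant.' },
    { role: 'user', content: 'Explain the importance of low latency LLMs.' },
  ],
};
```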

2 changes: 1 addition & 1 deletion package.json
@@ -1,6 +1,6 @@
{
  "name": "llm-interface",
- "version": "2.0.5",
+ "version": "2.0.6",
  "main": "src/index.js",
  "description": "A simple, unified interface for integrating and interacting with multiple Large Language Model (LLM) APIs, including OpenAI, AI21 Studio, Anthropic, Cloudflare AI, Cohere, Fireworks AI, Google Gemini, Goose AI, Groq, Hugging Face, Mistral AI, Perplexity, Reka AI, and LLaMA.cpp.",
  "type": "commonjs",
3 changes: 3 additions & 0 deletions src/config/config.js
@@ -21,4 +21,7 @@ module.exports = {
  cloudflareaiApiKey: process.env.CLOUDFLARE_API_KEY,
  cloudflareaiAccountId: process.env.CLOUDFLARE_ACCOUNT_ID,
  fireworksaiApiKey: process.env.FIREWORKSAI_API_KEY,
  watsonxaiApiKey: process.env.WATSONXAI_API_KEY,
  watsonxaiSpaceId: process.env.WATSONXAI_SPACE_ID,
  friendliaiApiKey: process.env.FRIENDLIAI_API_KEY,
};
10 changes: 9 additions & 1 deletion src/config/llmProviders.json
@@ -145,7 +145,15 @@
"model": {
"default": { "name": "mixtral-8x7b-instruct-v0-1", "tokens": 32768 },
"large": { "name": "meta-llama-3-70b-instruct", "tokens": 8192 },
"small": { "name": "mistralai-7b-instruct-v0-2", "tokens": 4096 }
"small": { "name": "mistral-7b-instruct-v0-2", "tokens": 4096 }
}
},
"watsonxai": {
"url": "https://us-south.ml.cloud.ibm.com/ml/v1/text/generation?version=2023-05-02",
"model": {
"default": { "name": "meta-llama/llama-2-13b-chat", "tokens": 4096 },
"large": { "name": "meta-llama/llama-3-70b-instruct", "tokens": 8192 },
"small": { "name": "google/flan-t5-xxl", "tokens": 512 }
}
}
}
174 changes: 174 additions & 0 deletions src/interfaces/watsonxai.js
@@ -0,0 +1,174 @@
/**
 * @file src/interfaces/watsonxai.js
 * @class watsonxai
 * @description Wrapper class for the watsonx.ai API.
 * @param {string} apiKey - The API key for the watsonx.ai API.
 * @param {string} spaceId - The watsonx.ai deployment space ID.
 */

const axios = require('axios');
const { adjustModelAlias, getModelByAlias } = require('../utils/config.js');
const { getFromCache, saveToCache } = require('../utils/cache.js');
// parseJSON (assumed to be exported by utils.js) is used below for attemptJsonRepair
const { getMessageObject, parseJSON } = require('../utils/utils.js');
const { watsonxaiApiKey, watsonxaiSpaceId } = require('../config/config.js');
const { getConfig } = require('../utils/configManager.js');
const config = getConfig();
const log = require('loglevel');

// watsonxai class for interacting with the watsonx.ai API
class watsonxai {
  /**
   * Constructor for the watsonxai class.
   * @param {string} apiKey - The API key for the watsonx.ai API.
   * @param {string} spaceId - The watsonx.ai deployment space ID.
   */
  constructor(apiKey, spaceId) {
    this.interfaceName = 'watsonxai';
    this.apiKey = apiKey || watsonxaiApiKey;
    this.spaceId = spaceId || watsonxaiSpaceId;
    this.bearerToken = null;
    this.tokenExpiration = null;
    this.client = axios.create({
      baseURL: config[this.interfaceName].url,
      headers: {
        'Content-type': 'application/json',
      },
    });
  }

  /**
   * Get a bearer token using the provided API key.
   * If a valid token exists and is not expired, reuse it.
   * Otherwise, refresh the token.
   * @returns {Promise<void>}
   */
  async getBearerToken() {
    if (this.bearerToken && this.tokenExpiration > Date.now() / 1000) {
      return; // Token is still valid
    }

    try {
      const response = await axios.post(
        'https://iam.cloud.ibm.com/identity/token',
        null,
        {
          headers: {
            'Content-Type': 'application/x-www-form-urlencoded',
          },
          params: {
            grant_type: 'urn:ibm:params:oauth:grant-type:apikey',
            apikey: this.apiKey,
          },
        },
      );

      this.bearerToken = response.data.access_token;
      this.tokenExpiration = response.data.expiration;
      this.client.defaults.headers.Authorization = `Bearer ${this.bearerToken}`;
    } catch (error) {
      log.error(
        'Failed to get bearer token:',
        error.response ? error.response.data : error.message,
      );
      throw error;
    }
  }

  /**
   * Send a message to the watsonx.ai API.
   * @param {string|object} message - The message to send or a message object.
   * @param {object} options - Additional options for the API request.
   * @param {object} interfaceOptions - Options specific to the interface.
   * @returns {Promise<object>} The response content from the watsonx.ai API.
   */
  async sendMessage(message, options = {}, interfaceOptions = {}) {
    await this.getBearerToken(); // Ensure the bearer token is valid

    const messageObject =
      typeof message === 'string' ? getMessageObject(message) : message;
    const cacheTimeoutSeconds =
      typeof interfaceOptions === 'number'
        ? interfaceOptions
        : interfaceOptions.cacheTimeoutSeconds;

    const { messages } = messageObject;
    const { max_tokens = 150, space_id } = options;
    let { model } = messageObject;

    model = getModelByAlias(this.interfaceName, model);
    model =
      model || options.model || config[this.interfaceName].model.default.name;

    // watsonx.ai's text generation endpoint expects a single input string,
    // so flatten the conversation into one prompt
    const formattedPrompt = messages
      .map((message) => message.content)
      .join(' ');

    const payload = {
      model_id: model,
      input: formattedPrompt,
      parameters: {
        max_new_tokens: max_tokens,
        time_limit: options.time_limit || 1000,
      },
      space_id: space_id || this.spaceId,
    };

    const cacheKey = JSON.stringify(payload);
    if (cacheTimeoutSeconds) {
      const cachedResponse = getFromCache(cacheKey);
      if (cachedResponse) {
        return cachedResponse;
      }
    }

    let retryAttempts = interfaceOptions.retryAttempts || 0;
    let currentRetry = 0;
    while (retryAttempts >= 0) {
      try {
        // POST to an empty path: the full generation endpoint is already the baseURL
        const url = '';
        const response = await this.client.post(url, payload);
        let responseContent = null;
        if (
          response &&
          response.data &&
          response.data.results &&
          response.data.results[0] &&
          response.data.results[0].generated_text
        ) {
          responseContent = response.data.results[0].generated_text.trim();
        }

        if (interfaceOptions.attemptJsonRepair) {
          responseContent = await parseJSON(
            responseContent,
            interfaceOptions.attemptJsonRepair,
          );
        }
        responseContent = { results: responseContent };

        if (cacheTimeoutSeconds && responseContent) {
          saveToCache(cacheKey, responseContent, cacheTimeoutSeconds);
        }

        return responseContent;
      } catch (error) {
        retryAttempts--;
        if (retryAttempts < 0) {
          log.error(
            'Response data:',
            error.response ? error.response.data : null,
          );
          throw error;
        }

        // Back off: wait longer after each failed attempt
        let retryMultiplier = interfaceOptions.retryMultiplier || 0.3;
        const delay = (currentRetry + 1) * retryMultiplier * 1000;

        await new Promise((resolve) => setTimeout(resolve, delay));
        currentRetry++;
      }
    }
  }
}

watsonxai.prototype.adjustModelAlias = adjustModelAlias;

module.exports = watsonxai;
45 changes: 45 additions & 0 deletions test/simple/cloudflareai.test.js
@@ -0,0 +1,45 @@
/**
 * @file test/simple/cloudflareai.test.js
 * @description Simplified tests for the Cloudflare AI API client.
 */

const CloudflareAI = require('../../src/interfaces/cloudflareai.js');
const {
  cloudflareaiApiKey,
  cloudflareaiAccountId,
} = require('../../src/config/config.js');
const {
  simplePrompt,
  options,
  expectedMaxLength,
} = require('../../src/utils/defaults.js');
const { safeStringify } = require('../../src/utils/jestSerializer.js');

describe('Cloudflare AI Simple', () => {
  if (cloudflareaiApiKey) {
    let response;
    test('API Key should be set', async () => {
      expect(typeof cloudflareaiApiKey).toBe('string');
    });

    test('API Client should send a message and receive a response', async () => {
      const cloudflareai = new CloudflareAI(
        cloudflareaiApiKey,
        cloudflareaiAccountId,
      );

      try {
        response = await cloudflareai.sendMessage(simplePrompt, options);
      } catch (error) {
        throw new Error(`Test failed: ${safeStringify(error)}`);
      }
      expect(typeof response).toStrictEqual('object');
    }, 30000);

    test(`Response should be less than ${expectedMaxLength} characters`, async () => {
      expect(response.results.length).toBeLessThan(expectedMaxLength);
    });
  } else {
    test.skip(`API Key is not set`, () => {});
  }
});