diff --git a/.gitignore b/.gitignore
index 6a7d6d8..0c4da45 100644
--- a/.gitignore
+++ b/.gitignore
@@ -127,4 +127,6 @@ dist
.yarn/unplugged
.yarn/build-state.yml
.yarn/install-state.gz
-.pnp.*
\ No newline at end of file
+.pnp.*
+
+/src/cache
\ No newline at end of file
diff --git a/README.md b/README.md
index 7153e17..4276c9a 100644
--- a/README.md
+++ b/README.md
@@ -6,34 +6,35 @@
## Introduction

-The LLM Interface project is a versatile and comprehensive wrapper designed to interact with multiple Large Language Model (LLM) APIs. It simplifies integrating various LLM providers, including **OpenAI, AI21 Studio, Anthropic, Cohere, Google Gemini, Goose AI, Groq, Hugging Face, Mistral AI, Perplexity, Reka AI, and LLaMA.cpp**, into your applications. This project aims to provide a simplified and unified interface for sending messages and receiving responses from different LLM services, making it easier for developers to work with multiple LLMs without worrying about the specific intricacies of each API.
+The LLM Interface project is a versatile and comprehensive wrapper designed to interact with multiple Large Language Model (LLM) APIs. It simplifies integrating various LLM providers, including **OpenAI, AI21 Studio, Anthropic, Cloudflare AI, Cohere, Fireworks AI, Google Gemini, Goose AI, Groq, Hugging Face, Mistral AI, Perplexity, Reka AI, and LLaMA.cpp**, into your applications. This project aims to provide a simplified and unified interface for sending messages and receiving responses from different LLM services, making it easier for developers to work with multiple LLMs without worrying about the specific intricacies of each API.

## Features

-- **Unified Interface**: A single, consistent interface to interact with multiple LLM APIs.
-- **Dynamic Module Loading**: Automatically loads and manages different LLM LLMInterface.
+- **Unified Interface**: `LLMInterfaceSendMessage` is a single, consistent interface to interact with fourteen different LLM APIs.
+- **Dynamic Module Loading**: Automatically loads and manages the different LLM interfaces.
- **Error Handling**: Robust error handling mechanisms to ensure reliable API interactions.
- **Extensible**: Easily extendable to support additional LLM providers as needed.
-- **JSON Output**: Simple to use JSON output for OpenAI and Gemini responses.
- **Response Caching**: Efficiently caches LLM responses to reduce costs and enhance performance.
- **Graceful Retries**: Automatically retry failed prompts with increasing delays to ensure successful responses.
+- **JSON Output**: Simple-to-use native JSON output for OpenAI, Fireworks AI, and Gemini responses.
+- **JSON Repair**: Detect and repair invalid JSON responses.

## Updates

+**v2.0.0**
+
+- **New LLM Providers**: Added support for Cloudflare AI and Fireworks AI.
+- **JSON Consistency**: A breaking change has been introduced: all responses now return as valid JSON objects.
+- **JSON Repair**: Use `interfaceOptions.attemptJsonRepair` to repair invalid JSON responses when they occur.
+- **Interface Name Changes**: `reka` becomes `rekaai`, `goose` becomes `gooseai`, `mistral` becomes `mistralai`.
+- **Deprecated**: `handlers` has been removed.
+
**v1.0.01**

- **LLMInterfaceSendMessage**: Send a message to any LLM provider without creating a new instance of the `llm-interface`.
- **Model Aliases**: Simplified model selection, `default`, `small`, and `large` model aliases now available.
- **Major Refactor**: Improved comments, test cases, centralized LLM provider definitions.
-**v1.0.00** - -- **Initial 1.0 Release** - -**v0.0.11** - -- **Simple Prompt Handler**: Added support for simplified prompting. - ## Dependencies The project relies on several npm packages and APIs. Here are the primary dependencies: @@ -45,6 +46,7 @@ The project relies on several npm packages and APIs. Here are the primary depend - `openai`: SDK for interacting with the OpenAI API. - `dotenv`: For managing environment variables. Used by test cases. - `flat-cache`: For caching API responses to improve performance and reduce redundant requests. +- `jsonrepair`: Used to repair invalid JSON responses. - `jest`: For running test cases. ## Installation @@ -59,23 +61,21 @@ npm install llm-interface ### Example -Import `llm-interface` using: +First import `LLMInterfaceSendMessage`. You can do this using either the CommonJS `require` syntax: ```javascript -const LLMInterface = require('llm-interface'); +const { LLMInterfaceSendMessage } = require('llm-interface'); ``` -or +or the ES6 `import` syntax: ```javascript -import LLMInterface from 'llm-interface'; +import { LLMInterfaceSendMessage } from 'llm-interface'; ``` -then call the handler you want to use: +then send your prompt to the LLM provider of your choice: ```javascript -const openai = new LLMInterface.openai(process.env.OPENAI_API_KEY); - const message = { model: 'gpt-3.5-turbo', messages: [ @@ -84,10 +84,11 @@ const message = { ], }; -openai - .sendMessage(message, { max_tokens: 150 }) +LLMInterfaceSendMessage('openai', process.env.OPENAI_API_KEY, message, { + max_tokens: 150, +}) .then((response) => { - console.log(response); + console.log(response.results); }) .catch((error) => { console.error(error); @@ -97,12 +98,13 @@ openai or if you want to keep things _simple_ you can use: ```javascript -const openai = new LLMInterface.openai(process.env.OPENAI_API_KEY); - -openai - .sendMessage('Explain the importance of low latency LLMs.') +LLMInterfaceSendMessage( + 'openai', + process.env.OPENAI_API_KEY, + 'Explain the importance of low latency LLMs.', +) .then((response) => { - console.log(response); + console.log(response.results); }) .catch((error) => { console.error(error); @@ -119,6 +121,13 @@ The project includes tests for each LLM handler. To run the tests, use the follo npm test ``` +#### Test Results (v2.0.0) + +```bash +Test Suites: 43 passed, 43 total +Tests: 172 passed, 172 total +``` + ## Contribute Contributions to this project are welcome. Please fork the repository and submit a pull request with your changes or improvements. diff --git a/babel.config.js b/babel.config.js new file mode 100644 index 0000000..464bfb4 --- /dev/null +++ b/babel.config.js @@ -0,0 +1,4 @@ +module.exports = { + presets: [['@babel/preset-env', { targets: { node: 'current' } }]], + plugins: ['@babel/plugin-syntax-dynamic-import'], +}; diff --git a/docs/API.md b/docs/API.md index 8f44b3f..b766996 100644 --- a/docs/API.md +++ b/docs/API.md @@ -1,36 +1,102 @@ -# llm-interface +# API Reference + +## Table of Contents + +1. [LLMInterfaceSendMessage Function](#llminterfacesendmessage-function) +2. 
[Valid `llmProvider` Values](#valid-llmprovider-values)
+   - [AI21 Studio](#ai21---ai21-studio)
+   - [Anthropic](#anthropic---anthropic)
+   - [Cloudflare AI](#cloudflareai---cloudflare-ai)
+   - [Cohere](#cohere---cohere)
+   - [Fireworks AI](#fireworksai---fireworks-ai)
+   - [Google Gemini](#gemini---google-gemini)
+   - [Goose AI](#gooseai---goose-ai)
+   - [Groq](#groq---groq)
+   - [Hugging Face](#huggingface---hugging-face)
+   - [LLaMA.cpp](#llamacpp---llamacpp)
+   - [Mistral AI](#mistralai---mistral-ai)
+   - [OpenAI](#openai---openai)
+   - [Perplexity](#perplexity---perplexity)
+   - [Reka AI](#rekaai---reka-ai)
+3. [Underlying Classes](#underlying-classes)
+   - [OpenAI](#openai)
+   - [AI21](#ai21)
+   - [Anthropic](#anthropic)
+   - [Cloudflare AI](#cloudflare-ai)
+   - [Cohere](#cohere)
+   - [Gemini](#gemini)
+   - [Goose AI](#goose-ai)
+   - [Groq](#groq)
+   - [Hugging Face](#hugging-face)
+   - [Mistral AI](#mistral-ai)
+   - [Perplexity Labs](#perplexity-labs)
+   - [Reka AI](#reka-ai)
+   - [LLaMA.cpp](#llamacpp)
+
+## LLMInterfaceSendMessage Function
+
+#### `LLMInterfaceSendMessage(llmProvider, apiKey, message, options, interfaceOptions)`
-## API Reference
+- **Parameters:**
+  - `llmProvider`: A string containing a valid llmProvider name.
+  - `apiKey`: A string containing a valid API key, or an array containing a valid API key and account id.
+  - `message`: An object containing the model and messages or a string containing a single message to send.
+  - `options`: An optional object containing `max_tokens`, `model`, and `response_format`.
+  - `interfaceOptions`: An optional object specifying `cacheTimeoutSeconds` (default: 0), `attemptJsonRepair` (default: false), `retryAttempts` (default: 1), and `retryMultiplier` (default: 0.3).
+- **Returns:** A promise that resolves to a response JSON object.
-### The Message Object
+##### Example:
-The message object is a critical component when interacting with the various LLM APIs through the `llm-interface` package. It contains the data that will be sent to the LLM for processing. Below is a detailed explanation of a valid message object.
+```javascript
+LLMInterfaceSendMessage('openai', process.env.OPENAI_API_KEY, message, {
+  max_tokens: 150,
+})
+  .then((response) => {
+    console.log(response.results);
+  })
+  .catch((error) => {
+    console.error(error);
+  });
+```
-#### Structure of a Message Object
+## Valid `llmProvider` Values
-A valid message object typically includes the following properties:
+The following are supported LLM providers (in alphabetical order):
-- `model`: A string specifying the model to use for the request (optional).
-- `messages`: An array of message objects that form the conversation history.
+- `ai21` - AI21 Studio
+- `anthropic` - Anthropic
+- `cloudflareai` - Cloudflare AI
+- `cohere` - Cohere
+- `fireworksai` - Fireworks AI
+- `gemini` - Google Gemini
+- `gooseai` - Goose AI
+- `groq` - Groq
+- `huggingface` - Hugging Face
+- `llamacpp` - LLaMA.cpp
+- `mistralai` - Mistral AI
+- `openai` - OpenAI
+- `perplexity` - Perplexity
+- `rekaai` - Reka AI
-Different LLMs may have their own message object rules. For example, both Anthropic and Gemini always expect the initial message to have the `user` role. Please be aware of this and structure your message objects accordingly. However, `llm-interface` will attempt to auto-correct invalid objects where possible.
+## Underlying Classes

### OpenAI

#### `sendMessage(message, options, cacheTimeoutSeconds)`

- **Parameters:**
-  - `message`: An object containing the model and messages to send.
+  - `message`: An object containing the model and messages or a string containing a single message to send.
  - `options`: An optional object containing `max_tokens`, `model`, and `response_format`.
-  - `interfaceOptions`: An optional object specifying `cacheTimeoutSeconds` and `retryAttempts`.
-- **Returns:** A promise that resolves to the response text.
-- **Example:**
+  - `interfaceOptions`: An optional object specifying `cacheTimeoutSeconds` (default: 0), `attemptJsonRepair` (default: false), `retryAttempts` (default: 1), and `retryMultiplier` (default: 0.3).
+- **Returns:** A promise that resolves to a response JSON object.
+
+##### Example:

```javascript
openai
  .sendMessage(message, { max_tokens: 150, response_format: 'json_object' })
  .then((response) => {
-    console.log(response);
+    console.log(response.results);
  })
  .catch((error) => {
    console.error(error);
@@ -42,17 +108,18 @@ openai

### AI21

#### `sendMessage(message, options, cacheTimeoutSeconds)`

- **Parameters:**
-  - `message`: An object containing the model and messages to send.
-  - `options`: An optional object containing `max_tokens` and `model`.
-  - `interfaceOptions`: An optional object specifying `cacheTimeoutSeconds` and `retryAttempts`.
-- **Returns:** A promise that resolves to the response text.
-- **Example:**
+  - `message`: An object containing the model and messages or a string containing a single message to send.
+  - `options`: An optional object containing `max_tokens`, `model`, and any other LLM specific values.
+  - `interfaceOptions`: An optional object specifying `cacheTimeoutSeconds` (default: 0), `attemptJsonRepair` (default: false), `retryAttempts` (default: 1), and `retryMultiplier` (default: 0.3).
+- **Returns:** A promise that resolves to a response JSON object.
+
+##### Example:

```javascript
ai21
  .sendMessage(message, { max_tokens: 150 })
  .then((response) => {
-    console.log(response);
+    console.log(response.results);
  })
  .catch((error) => {
    console.error(error);
@@ -64,17 +131,41 @@ ai21

### Anthropic

#### `sendMessage(message, options, cacheTimeoutSeconds)`

- **Parameters:**
-  - `message`: An object containing the model and messages to send.
-  - `options`: An optional object containing `max_tokens` and `model`.
-  - `interfaceOptions`: An optional object specifying `cacheTimeoutSeconds` and `retryAttempts`.
-- **Returns:** A promise that resolves to the response text.
-- **Example:**
+  - `message`: An object containing the model and messages or a string containing a single message to send.
+  - `options`: An optional object containing `max_tokens`, `model`, and any other LLM specific values.
+  - `interfaceOptions`: An optional object specifying `cacheTimeoutSeconds` (default: 0), `attemptJsonRepair` (default: false), `retryAttempts` (default: 1), and `retryMultiplier` (default: 0.3).
+- **Returns:** A promise that resolves to a response JSON object.
+
+##### Example:

```javascript
anthropic
  .sendMessage(message, { max_tokens: 150 })
  .then((response) => {
-    console.log(response);
+    console.log(response.results);
+  })
+  .catch((error) => {
+    console.error(error);
+  });
+```
+
+### Cloudflare AI
+
+#### `sendMessage(message, options, cacheTimeoutSeconds)`
+
+- **Parameters:**
+  - `message`: An object containing the model and messages or a string containing a single message to send.
+  - `options`: An optional object containing `max_tokens`, `model`, and any other LLM specific values.
+  - `interfaceOptions`: An optional object specifying `cacheTimeoutSeconds` (default: 0), `attemptJsonRepair` (default: false), `retryAttempts` (default: 1), and `retryMultiplier` (default: 0.3).
+- **Returns:** A promise that resolves to a response JSON object.
+
+##### Example:
+
+```javascript
+cloudflareai
+  .sendMessage(message, { max_tokens: 150 })
+  .then((response) => {
+    console.log(response.results);
  })
  .catch((error) => {
    console.error(error);
@@ -86,17 +177,18 @@ anthropic

### Cohere

#### `sendMessage(message, options, cacheTimeoutSeconds)`

- **Parameters:**
-  - `message`: An object containing the model and messages to send.
-  - `options`: An optional object containing `max_tokens` and `model`.
-  - `interfaceOptions`: An optional object specifying `cacheTimeoutSeconds` and `retryAttempts`.
-- **Returns:** A promise that resolves to the response text.
-- **Example:**
+  - `message`: An object containing the model and messages or a string containing a single message to send.
+  - `options`: An optional object containing `max_tokens`, `model`, and any other LLM specific values.
+  - `interfaceOptions`: An optional object specifying `cacheTimeoutSeconds` (default: 0), `attemptJsonRepair` (default: false), `retryAttempts` (default: 1), and `retryMultiplier` (default: 0.3).
+- **Returns:** A promise that resolves to a response JSON object.
+
+##### Example:

```javascript
cohere
  .sendMessage(message, { max_tokens: 150 })
  .then((response) => {
-    console.log(response);
+    console.log(response.results);
  })
  .catch((error) => {
    console.error(error);
@@ -108,17 +200,18 @@ cohere

### Gemini

#### `sendMessage(message, options, cacheTimeoutSeconds)`

- **Parameters:**
-  - `message`: An object containing the model and messages to send.
+  - `message`: An object containing the model and messages or a string containing a single message to send.
  - `options`: An optional object containing `max_tokens`, `model`, and `response_format`.
-  - `interfaceOptions`: An optional object specifying `cacheTimeoutSeconds` and `retryAttempts`.
-- **Returns:** A promise that resolves to the response text.
-- **Example:**
+  - `interfaceOptions`: An optional object specifying `cacheTimeoutSeconds` (default: 0), `attemptJsonRepair` (default: false), `retryAttempts` (default: 1), and `retryMultiplier` (default: 0.3).
+- **Returns:** A promise that resolves to a response JSON object.
+
+##### Example:

```javascript
gemini
  .sendMessage(message, { max_tokens: 100 })
  .then((response) => {
-    console.log(response);
+    console.log(response.results);
  })
  .catch((error) => {
    console.error(error);
@@ -130,17 +223,18 @@ gemini

### Goose AI

#### `sendMessage(message, options, cacheTimeoutSeconds)`

- **Parameters:**
-  - `message`: An object containing the model and messages to send.
+  - `message`: An object containing the model and messages or a string containing a single message to send.
  - `options`: An optional object containing `max_tokens`, and `model`.
-  - `interfaceOptions`: An optional object specifying `cacheTimeoutSeconds` and `retryAttempts`.
-- **Returns:** A promise that resolves to the response text.
-- **Example:**
+  - `interfaceOptions`: An optional object specifying `cacheTimeoutSeconds` (default: 0), `attemptJsonRepair` (default: false), `retryAttempts` (default: 1), and `retryMultiplier` (default: 0.3).
+- **Returns:** A promise that resolves to a response JSON object.
+
+##### Example:

```javascript
-goose
+gooseai
  .sendMessage(message, { max_tokens: 100 })
  .then((response) => {
-    console.log(response);
+    console.log(response.results);
  })
  .catch((error) => {
    console.error(error);
@@ -152,17 +246,18 @@ goose

### Groq

#### `sendMessage(message, options, cacheTimeoutSeconds)`

- **Parameters:**
-  - `message`: An object containing the model and messages to send.
-  - `options`: An optional object containing `max_tokens` and `model`.
-  - `interfaceOptions`: An optional object specifying `cacheTimeoutSeconds` and `retryAttempts`.
-- **Returns:** A promise that resolves to the response text.
-- **Example:**
+  - `message`: An object containing the model and messages or a string containing a single message to send.
+  - `options`: An optional object containing `max_tokens`, `model`, and any other LLM specific values.
+  - `interfaceOptions`: An optional object specifying `cacheTimeoutSeconds` (default: 0), `attemptJsonRepair` (default: false), `retryAttempts` (default: 1), and `retryMultiplier` (default: 0.3).
+- **Returns:** A promise that resolves to a response JSON object.
+
+##### Example:

```javascript
groq
  .sendMessage(message, { max_tokens: 100 })
  .then((response) => {
-    console.log(response);
+    console.log(response.results);
  })
  .catch((error) => {
    console.error(error);
@@ -174,17 +269,18 @@ groq

### Hugging Face

#### `sendMessage(message, options, cacheTimeoutSeconds)`

- **Parameters:**
-  - `message`: An object containing the model and messages to send.
-  - `options`: An optional object containing `max_tokens` and `model`.
-  - `interfaceOptions`: An optional object specifying `cacheTimeoutSeconds` and `retryAttempts`.
-- **Returns:** A promise that resolves to the response text.
-- **Example:**
+  - `message`: An object containing the model and messages or a string containing a single message to send.
+  - `options`: An optional object containing `max_tokens`, `model`, and any other LLM specific values.
+  - `interfaceOptions`: An optional object specifying `cacheTimeoutSeconds` (default: 0), `attemptJsonRepair` (default: false), `retryAttempts` (default: 1), and `retryMultiplier` (default: 0.3).
+- **Returns:** A promise that resolves to a response JSON object.
+
+##### Example:

```javascript
huggingface
  .sendMessage(message, { max_tokens: 100 })
  .then((response) => {
-    console.log(response);
+    console.log(response.results);
  })
  .catch((error) => {
    console.error(error);
@@ -196,17 +292,18 @@ huggingface

### Mistral AI

#### `sendMessage(message, options, cacheTimeoutSeconds)`

- **Parameters:**
-  - `message`: An object containing the model and messages to send.
-  - `options`: An optional object containing `max_tokens` and `model`.
-  - `interfaceOptions`: An optional object specifying `cacheTimeoutSeconds` and `retryAttempts`.
-- **Returns:** A promise that resolves to the response text.
-- **Example:**
+  - `message`: An object containing the model and messages or a string containing a single message to send.
+  - `options`: An optional object containing `max_tokens`, `model`, and any other LLM specific values.
+  - `interfaceOptions`: An optional object specifying `cacheTimeoutSeconds` (default: 0), `attemptJsonRepair` (default: false), `retryAttempts` (default: 1), and `retryMultiplier` (default: 0.3).
+- **Returns:** A promise that resolves to a response JSON object.
+
+##### Example:

```javascript
-mistral
+mistralai
  .sendMessage(message, { max_tokens: 100 })
  .then((response) => {
-    console.log(response);
+    console.log(response.results);
  })
  .catch((error) => {
    console.error(error);
@@ -218,17 +315,18 @@ mistral

### Perplexity

#### `sendMessage(message, options, cacheTimeoutSeconds)`

- **Parameters:**
-  - `message`: An object containing the model and messages to send.
-  - `options`: An optional object containing `max_tokens` and `model`.
-  - `interfaceOptions`: An optional object specifying `cacheTimeoutSeconds` and `retryAttempts`.
-- **Returns:** A promise that resolves to the response text.
-- **Example:**
+  - `message`: An object containing the model and messages or a string containing a single message to send.
+  - `options`: An optional object containing `max_tokens`, `model`, and any other LLM specific values.
+  - `interfaceOptions`: An optional object specifying `cacheTimeoutSeconds` (default: 0), `attemptJsonRepair` (default: false), `retryAttempts` (default: 1), and `retryMultiplier` (default: 0.3).
+- **Returns:** A promise that resolves to a response JSON object.
+
+##### Example:

```javascript
perplexity
  .sendMessage(message, { max_tokens: 100 })
  .then((response) => {
-    console.log(response);
+    console.log(response.results);
  })
  .catch((error) => {
    console.error(error);
@@ -240,17 +338,18 @@ perplexity

### Reka AI

#### `sendMessage(message, options, cacheTimeoutSeconds)`

- **Parameters:**
-  - `message`: An object containing the model and messages to send.
-  - `options`: An optional object containing `max_tokens` and `model`.
-  - `interfaceOptions`: An optional object specifying `cacheTimeoutSeconds` and `retryAttempts`.
-- **Returns:** A promise that resolves to the response text.
-- **Example:**
+  - `message`: An object containing the model and messages or a string containing a single message to send.
+  - `options`: An optional object containing `max_tokens`, `model`, and any other LLM specific values.
+  - `interfaceOptions`: An optional object specifying `cacheTimeoutSeconds` (default: 0), `attemptJsonRepair` (default: false), `retryAttempts` (default: 1), and `retryMultiplier` (default: 0.3).
+- **Returns:** A promise that resolves to a response JSON object.
+
+##### Example:

```javascript
-reka
+rekaai
  .sendMessage(message, {})
  .then((response) => {
-    console.log(response);
+    console.log(response.results);
  })
  .catch((error) => {
    console.error(error);
@@ -262,17 +361,18 @@ reka

### LLaMA.cpp

#### `sendMessage(message, options, cacheTimeoutSeconds)`

- **Parameters:**
-  - `message`: An object containing the model and messages to send.
+  - `message`: An object containing the model and messages or a string containing a single message to send.
  - `options`: An optional object containing `max_tokens`.
-  - `interfaceOptions`: An optional object specifying `cacheTimeoutSeconds` and `retryAttempts`.
-- **Returns:** A promise that resolves to the response text.
-- **Example:**
+  - `interfaceOptions`: An optional object specifying `cacheTimeoutSeconds` (default: 0), `attemptJsonRepair` (default: false), `retryAttempts` (default: 1), and `retryMultiplier` (default: 0.3).
+- **Returns:** A promise that resolves to a response JSON object.
+
+##### Example:

```javascript
llamacpp
  .sendMessage(message, { max_tokens: 100 })
  .then((response) => {
-    console.log(response);
+    console.log(response.results);
  })
  .catch((error) => {
    console.error(error);
diff --git a/docs/APIKEYS.md b/docs/APIKEYS.md
index c64dd9d..e222a4e 100644
--- a/docs/APIKEYS.md
+++ b/docs/APIKEYS.md
@@ -1,4 +1,4 @@
-# llm-interface
+# API Keys

Getting API keys for your project is a simple process. You'll need to sign-up, then visit the URLs below generate your desired API keys. However, most LLMs require a credit card.
@@ -20,12 +20,24 @@ The Anthropic API requires a credit card.

- https://console.anthropic.com/settings/keys

+## Cloudflare AI
+
+The Cloudflare AI API offers a free tier and commercial accounts. A credit card is not required for the free tier.
+
+- https://dash.cloudflare.com/profile/api-tokens
+
## Cohere

The Cohere API offers trial keys. Trial keys are rate-limited, and cannot be used for commercial purposes.

- https://dashboard.cohere.com/api-keys

+## Fireworks AI
+
+The Fireworks AI API offers a free developer tier and commercial accounts. A credit card is not required for the free developer tier.
+
+- https://fireworks.ai/api-keys
+
## Gemini

The Gemini API is currently free.
@@ -50,11 +62,11 @@ The Hugging Face Inference API is currently free for rate-limited, non-commercia

- https://huggingface.co/settings/tokens

-## Mistral
+## Mistral AI

-The Mistral API is a commercial product, but it currently does not require a credit card, and comes with a $5.00 credit.
+The Mistral AI API is a commercial product, but it currently does not require a credit card, and comes with a $5.00 credit.

-- https://console.mistral.ai/api-keys/
+- https://console.mistral.ai/api-keys/

## Perplexity
diff --git a/docs/MODELS.md b/docs/MODELS.md
new file mode 100644
index 0000000..baf6a51
--- /dev/null
+++ b/docs/MODELS.md
@@ -0,0 +1,91 @@
+# Models
+
+`llm-interface` provides three different model aliases for each LLM provider. If a model is not specified, `llm-interface` will always use the `default` model.
+
+## Model Aliases
+
+To make using `llm-interface` easier, you can take advantage of model aliases:
+
+- `default`
+- `large`
+- `small`
+
+When `default` or no model is passed, the system will use the default model for the LLM provider. If you'd prefer to specify your model by size instead of name, pass `large` or `small`.
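+
+For example (a minimal sketch, assuming the `openai` provider and an `OPENAI_API_KEY` environment variable), an alias can be passed anywhere a model name is accepted:
+
+```javascript
+const { LLMInterfaceSendMessage } = require('llm-interface');
+
+// 'small' resolves to the provider's small model (see the tables below),
+// so this request uses the OpenAI small model without naming it directly.
+LLMInterfaceSendMessage(
+  'openai',
+  process.env.OPENAI_API_KEY,
+  'Explain the importance of low latency LLMs.',
+  { model: 'small', max_tokens: 100 },
+)
+  .then((response) => {
+    console.log(response.results);
+  })
+  .catch((error) => {
+    console.error(error);
+  });
+```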
+
+### OpenAI
+
+- `default`: GPT-3.5-turbo (tokens: 16,385)
+- `large`: GPT-4o (tokens: 128,000)
+- `small`: Davinci-002 (tokens: 16,384)
+
+### AI21
+
+- `default`: Jamba-Instruct (tokens: 256,000)
+- `large`: Jamba-Instruct (tokens: 256,000)
+- `small`: J2-Light (tokens: 2,048)
+
+### Anthropic
+
+- `default`: Claude-3-Opus-20240229 (tokens: 200,000)
+- `large`: Claude-3-Opus-20240229 (tokens: 200,000)
+- `small`: Claude-3-Haiku-20240307 (tokens: 200,000)
+
+### Cloudflare AI
+
+- `default`: Llama-3-8B-Instruct (tokens: 4,096)
+- `large`: Llama-2-13B-Chat-AWQ (tokens: 8,192)
+- `small`: TinyLlama-1.1B-Chat-v1.0 (tokens: 2,048)
+
+### Cohere
+
+- `default`: Command-R (tokens: 128,000)
+- `large`: Command-R-Plus (tokens: 128,000)
+- `small`: Medium (tokens: 2,048)
+
+### Fireworks AI
+
+- `default`: Llama-v3-8B-Instruct (tokens: 8,192)
+- `large`: Llama-v3-70B-Instruct (tokens: 8,192)
+- `small`: Phi-3-Mini-128K-Instruct (tokens: 128,000)
+
+### Gemini
+
+- `default`: Gemini-1.5-Flash (tokens: 1,048,576)
+- `large`: Gemini-1.5-Pro (tokens: 1,048,576)
+- `small`: Gemini-Small
+
+### Goose AI
+
+- `default`: GPT-Neo-20B (tokens: 2,048)
+- `large`: GPT-Neo-20B (tokens: 2,048)
+- `small`: GPT-Neo-125M (tokens: 2,048)
+
+### Groq
+
+- `default`: Llama3-8B-8192 (tokens: 8,192)
+- `large`: Llama3-70B-8192 (tokens: 8,192)
+- `small`: Gemma-7B-IT (tokens: 8,192)
+
+### Hugging Face
+
+- `default`: Meta-Llama/Meta-Llama-3-8B-Instruct (tokens: 8,192)
+- `large`: Meta-Llama/Meta-Llama-3-8B-Instruct (tokens: 8,192)
+- `small`: Microsoft/Phi-3-Mini-4K-Instruct (tokens: 4,096)
+
+### Mistral AI
+
+- `default`: Mistral-Large-Latest (tokens: 32,768)
+- `large`: Mistral-Large-Latest (tokens: 32,768)
+- `small`: Mistral-Small (tokens: 32,768)
+
+### Perplexity
+
+- `default`: Llama-3-Sonar-Large-32K-Online (tokens: 28,000)
+- `large`: Llama-3-Sonar-Large-32K-Online (tokens: 28,000)
+- `small`: Llama-3-Sonar-Small-32K-Online (tokens: 28,000)
+
+### Reka AI
+
+- `default`: Reka-Core
+- `large`: Reka-Core
+- `small`: Reka-Edge
diff --git a/docs/USAGE.md b/docs/USAGE.md
index 778ea0c..f9ed07b 100644
--- a/docs/USAGE.md
+++ b/docs/USAGE.md
@@ -1,49 +1,155 @@
-# llm-interface
+# Usage
+
+The following guide was created to help you use `llm-interface` in your project. It assumes you have already installed the `llm-interface` NPM package.

## Table of Contents

-- [Initializing llm-interface](#initializing-llm-interface)
-- [Basic Usage Examples](#basic-usage-examples)
-  - [OpenAI Interface](#openai-interface)
-  - [AI21 Interface](#ai21-interface)
-  - [Anthropic Interface](#anthropic-interface)
-  - [Cohere Interface](#cohere-interface)
-  - [Gemini Interface](#gemini-interface)
-  - [Goose AI Interface](#goose-ai-interface)
-  - [Groq Interface](#groq-interface)
-  - [HuggingFace Interface](#huggingface-interface)
-  - [Mistral AI Interface](#mistral-ai-interface)
-  - [Perplexity Interface](#perplexity-interface)
-  - [Reka AI Interface](#reka-ai-interface)
-  - [LLaMA.cpp Interface](#llamacpp-interface)
-- [Simple Usage Example](#simple-usage-example)
-  - [OpenAI Interface (String Based Prompt)](#openai-interface-string-based-prompt)
-- [Advanced Usage Examples](#advanced-usage-examples)
-  - [OpenAI Interface (JSON Output)](#openai-interface-json-output)
-  - [OpenAI Interface (Cached)](#openai-interface-cached)
-  - [OpenAI Interface (Graceful Retry)](#openai-interface-graceful-retry)
+1. [Introduction](#introduction)
+2. 
[Using the `LLMInterfaceSendMessage` Function](#using-the-llminterfacesendmessage-function)
+   - [OpenAI: Simple Text Prompt, Default Model (Example 1)](#openai-simple-text-prompt-default-model-example-1)
+   - [Gemini: Simple Text Prompt, Default Model, Cached (Example 2)](#gemini-simple-text-prompt-default-model-cached-example-2)
+   - [Groq: Message Object Prompt, Large Model, Attempt JSON Repair (Example 3)](#groq-message-object-prompt-large-model-attempt-json-repair-example-3)
+3. [The Message Object](#the-message-object)
+   - [Structure of a Message Object](#structure-of-a-message-object)
+4. [Using the Underlying Classes](#using-the-underlying-classes)
+   - [OpenAI Interface Class](#openai-interface)
+   - [AI21 Interface Class](#ai21-interface)
+   - [Anthropic Interface Class](#anthropic-interface)
+   - [Cloudflare AI Interface Class](#cloudflare-ai-interface)
+   - [Cohere Interface Class](#cohere-interface)
+   - [Fireworks AI Interface Class](#fireworks-ai-interface)
+   - [Gemini Interface Class](#gemini-interface)
+   - [Goose AI Interface Class](#goose-ai-interface)
+   - [Groq Interface Class](#groq-interface)
+   - [Hugging Face Interface Class](#huggingface-interface)
+   - [Mistral AI Interface Class](#mistral-ai-interface)
+   - [Perplexity Interface Class](#perplexity-interface)
+   - [Reka AI Interface Class](#reka-ai-interface)
+   - [LLaMA.cpp Interface Class](#llamacpp-interface)
+5. [Simple Usage Examples](#simple-usage-examples)
+   - [OpenAI Interface (String Based Prompt)](#openai-interface-string-based-prompt)
+   - [AI21 Interface (String Based Prompt)](#ai21-interface-string-based-prompt)
+   - [Anthropic Interface (String Based Prompt)](#anthropic-interface-string-based-prompt)
+   - [Cloudflare AI Interface (String Based Prompt)](#cloudflare-ai-interface-string-based-prompt)
+   - [Cohere Interface (String Based Prompt)](#cohere-interface-string-based-prompt)
+   - [Fireworks AI Interface (String Based Prompt)](#fireworks-ai-interface-string-based-prompt)
+   - [Gemini Interface (String Based Prompt)](#gemini-interface-string-based-prompt)
+   - [Goose AI Interface (String Based Prompt)](#goose-ai-interface-string-based-prompt)
+   - [Groq Interface (String Based Prompt)](#groq-interface-string-based-prompt)
+   - [Hugging Face Interface (String Based Prompt)](#hugging-face-interface-string-based-prompt)
+   - [Mistral AI Interface (String Based Prompt)](#mistral-ai-interface-string-based-prompt)
+   - [Perplexity Interface (String Based Prompt)](#perplexity-interface-string-based-prompt)
+   - [Reka AI Interface (String Based Prompt)](#reka-ai-interface-string-based-prompt)
+   - [LLaMA.cpp Interface (String Based Prompt)](#llamacpp-interface-string-based-prompt)
+6. [Advanced Usage Examples](#advanced-usage-examples)
+   - [OpenAI Interface (Native JSON Output)](#openai-interface-native-json-output)
+   - [OpenAI Interface (Native JSON Output with Repair)](#openai-interface-native-json-output-with-repair)
+   - [Groq Interface (JSON Output with Repair)](#groq-interface-json-output-with-repair)
+   - [OpenAI Interface (Cached)](#openai-interface-cached)
+   - [OpenAI Interface (Graceful Retry)](#openai-interface-graceful-retry)
+
+## Using the `LLMInterfaceSendMessage` Function
+
+The `LLMInterfaceSendMessage` function gives you a single interface to all of the LLM providers available. To start, import `LLMInterfaceSendMessage` from the `llm-interface` package.
You can do this using either the CommonJS `require` syntax:

-# Usage
+```javascript
+const { LLMInterfaceSendMessage } = require('llm-interface');
+```
+
+or the ES6 `import` syntax:

-How to use `llm-interface` in your project.
+```javascript
+import { LLMInterfaceSendMessage } from 'llm-interface';
+```
+
+Then call the `LLMInterfaceSendMessage` function. Here are a few examples:
+
+### OpenAI: Simple Text Prompt, Default Model (Example 1)
+
+Ask OpenAI for a response using a message string with the default model and the default response token limit (150).
+
+```javascript
+LLMInterfaceSendMessage(
+  'openai',
+  openAiApikey,
+  'Explain the importance of low latency LLMs.',
+)
+  .then((response) => {
+    console.log(response.results);
+  })
+  .catch((error) => {
+    console.error(error);
+  });
+```

-## Initializing llm-interface

### Gemini: Simple Text Prompt, Default Model, Cached (Example 2)

-First, require the LLMInterface from the `llm-interface` package:
+Ask Gemini for a response using a message string with the default model, limit the response to 250 tokens, and cache the results for a day (86400 seconds).

```javascript
-const LLMInterface = require('llm-interface');
+LLMInterfaceSendMessage(
+  'gemini',
+  geminiApikey,
+  'Explain the importance of low latency LLMs.',
+  { max_tokens: 250 },
+  { cacheTimeoutSeconds: 86400 },
+)
+  .then((response) => {
+    console.log(response.results);
+  })
+  .catch((error) => {
+    console.error(error);
+  });
```

-or import it:
+### Groq: Message Object Prompt, Large Model, Attempt JSON Repair (Example 3)
+
+Ask Groq for a JSON response using a message object with the largest model, limit the response to 1024 tokens, and repair the results if needed.

```javascript
-import LLMInterface from 'llm-interface';
+const message = {
+  model: 'large',
+  messages: [
+    { role: 'system', content: 'You are a helpful assistant.' },
+    {
+      role: 'user',
+      content:
+        'Explain the importance of low latency LLMs. Return the results as a JSON object. Follow this format: [{reason, reasonDescription}]. Only return the JSON element, nothing else.',
+    },
+  ],
+};
+
+LLMInterfaceSendMessage(
+  'groq',
+  process.env.GROQ_API_KEY,
+  message,
+  { max_tokens: 1024 },
+  { attemptJsonRepair: true },
+)
+  .then((response) => {
+    console.log(response.results);
+  })
+  .catch((error) => {
+    console.error(error);
+  });
```

-## Basic Usage Examples
+## The Message Object
+
+The message object is a critical component when interacting with the various LLM APIs through the `llm-interface` package. It contains the data that will be sent to the LLM for processing. Below is a detailed explanation of a valid message object.
+
+### Structure of a Message Object
+
+A valid message object typically includes the following properties:
+
+- `model`: A string specifying the model to use for the request (optional).
+- `messages`: An array of message objects that form the conversation history.
+
+Different LLMs may have their own message object rules. For example, both Anthropic and Gemini always expect the initial message to have the `user` role. Please be aware of this and structure your message objects accordingly. _`llm-interface` will attempt to auto-correct invalid objects where possible._

-Then select the interface you'd like to use and initialize it with an API key or LLama.cpp URL.
+## Using the Underlying Classes
+
+The `LLMInterfaceSendMessage` function is a wrapper for a set of underlying interface classes. The following are examples of direct class interactions using a message object.
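+
+As a rough sketch of the relationship (assuming `LLMInterface` is exported alongside `LLMInterfaceSendMessage`), the wrapper call and the direct class usage are interchangeable:
+
+```javascript
+const { LLMInterface, LLMInterfaceSendMessage } = require('llm-interface');
+
+const prompt = 'Explain the importance of low latency LLMs.';
+
+// Wrapper form: the provider is selected by name at call time.
+LLMInterfaceSendMessage('openai', process.env.OPENAI_API_KEY, prompt)
+  .then((response) => console.log(response.results))
+  .catch((error) => console.error(error));
+
+// Direct class form: instantiate once, then reuse the same interface.
+const openai = new LLMInterface.openai(process.env.OPENAI_API_KEY);
+openai
+  .sendMessage(prompt)
+  .then((response) => console.log(response.results))
+  .catch((error) => console.error(error));
+```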
### OpenAI Interface

@@ -65,7 +171,7 @@ const message = {
openai
  .sendMessage(message, { max_tokens: 150 })
  .then((response) => {
-    console.log(response);
+    console.log(response.results);
  })
  .catch((error) => {
    console.error(error);
@@ -78,17 +184,27 @@ The AI21 interface allows you to send messages to the AI21 API.

#### Example

-````javascript
+```javascript
const ai21 = new LLMInterface.ai21(process.env.AI21_API_KEY);

const message = {
-  model: "jamba-instruct",
+  model: 'jamba-instruct',
  messages: [
-    { role: "system", content: "You are a helpful assistant." },
-    { role: "user", content: "Explain the importance of low latency LLMs." },
+    { role: 'system', content: 'You are a helpful assistant.' },
+    { role: 'user', content: 'Explain the importance of low latency LLMs.' },
  ],
};

+ai21
+  .sendMessage(message, { max_tokens: 150 })
+  .then((response) => {
+    console.log(response.results);
+  })
+  .catch((error) => {
+    console.error(error);
+  });
+```
+
### Anthropic Interface

The Anthropic interface allows you to send messages to the Anthropic API.
@@ -99,38 +215,57 @@ The Anthropic interface allows you to send messages to the Anthropic API.
const anthropic = new LLMInterface.anthropic(process.env.ANTHROPIC_API_KEY);

const message = {
-  model: "claude-3-opus-20240229",
+  model: 'claude-3-opus-20240229',
  messages: [
    {
-      role: "user",
+      role: 'user',
      content:
-        "You are a helpful assistant. Say OK if you understand and stop.",
+        'You are a helpful assistant. Say OK if you understand and stop.',
    },
-    { role: "system", content: "OK" },
-    { role: "user", content: "Explain the importance of low latency LLMs." },
+    { role: 'system', content: 'OK' },
+    { role: 'user', content: 'Explain the importance of low latency LLMs.' },
  ],
};

anthropic
  .sendMessage(message, { max_tokens: 150 })
  .then((response) => {
-    console.log(response);
+    console.log(response.results);
  })
  .catch((error) => {
    console.error(error);
  });
-````
+```

-ai21
-.sendMessage(message, { max_tokens: 150 })
-.then((response) => {
-console.log(response);
-})
-.catch((error) => {
-console.error(error);
-});
+### Cloudflare AI Interface
+
+The Cloudflare AI interface allows you to send messages to the Cloudflare AI API.
+
+#### Example
+
+```javascript
+const cloudflareai = new LLMInterface.cloudflareai(
+  process.env.CLOUDFLARE_API_KEY,
+  process.env.CLOUDFLARE_ACCOUNT_ID,
+);
+
+const message = {
+  model: '@cf/meta/llama-3-8b-instruct',
+  messages: [
+    { role: 'system', content: 'You are a helpful assistant.' },
+    { role: 'user', content: 'Explain the importance of low latency LLMs.' },
+  ],
+};

-````
+cloudflareai
+  .sendMessage(message, { max_tokens: 100 })
+  .then((response) => {
+    console.log(response.results);
+  })
+  .catch((error) => {
+    console.error(error);
+  });
+```

### Cohere Interface

The Cohere interface allows you to send messages to the Cohere API.

#### Example

```javascript
-const cohere = new LLMInterface.cohere(process.env.GROQ_API_KEY);
+const cohere = new LLMInterface.cohere(process.env.COHERE_API_KEY);

const message = {
-  model: "gpt-neo-20b",
+  model: 'command-r',
  messages: [
-    { role: "system", content: "You are a helpful assistant." },
-    { role: "user", content: "Explain the importance of low latency LLMs." },
+    { role: 'system', content: 'You are a helpful assistant.' },
+    { role: 'user', content: 'Explain the importance of low latency LLMs.' 
}, ], }; cohere .sendMessage(message, { max_tokens: 100 }) .then((response) => { - console.log(response); + console.log(response.results); + }) + .catch((error) => { + console.error(error); + }); +``` + +### Fireworks AI Interface + +The Fireworks AI interface allows you to send messages to the Fireworks AI API. + +#### Example + +```javascript +const fireworksai = new LLMInterface.fireworksai( + process.env.FIREWORKSAI_API_KEY, +); + +const message = { + model: 'accounts/fireworks/models/phi-3-mini-128k-instruct', + messages: [ + { role: 'system', content: 'You are a helpful assistant.' }, + { role: 'user', content: 'Explain the importance of low latency LLMs.' }, + ], +}; + +fireworksai + .sendMessage(message, { max_tokens: 100 }) + .then((response) => { + console.log(response.results); }) .catch((error) => { console.error(error); }); -```` +``` ### Gemini Interface @@ -179,7 +343,7 @@ const message = { gemini .sendMessage(message, { max_tokens: 100 }) .then((response) => { - console.log(response); + console.log(response.results); }) .catch((error) => { console.error(error); @@ -193,7 +357,7 @@ The Goose AI interface allows you to send messages to the Goose AI API. #### Example ```javascript -const goose = new LLMInterface.goose(process.env.GROQ_API_KEY); +const gooseai = new LLMInterface.gooseai(process.env.GOOSEAI_API_KEY); const message = { model: 'gpt-neo-20b', @@ -203,10 +367,10 @@ const message = { ], }; -goose +gooseai .sendMessage(message, { max_tokens: 100 }) .then((response) => { - console.log(response); + console.log(response.results); }) .catch((error) => { console.error(error); @@ -233,7 +397,7 @@ const message = { groq .sendMessage(message, { max_tokens: 100 }) .then((response) => { - console.log(response); + console.log(response.results); }) .catch((error) => { console.error(error); @@ -247,7 +411,9 @@ The HuggingFace interface allows you to send messages to the HuggingFace API. #### Example ```javascript -const huggingface = new LLMInterface.huggingface(process.env.ANTHROPIC_API_KEY); +const huggingface = new LLMInterface.huggingface( + process.env.HUGGINGFACE_API_KEY, +); const message = { model: 'claude-3-opus-20240229', @@ -265,7 +431,7 @@ const message = { huggingface .sendMessage(message, { max_tokens: 150 }) .then((response) => { - console.log(response); + console.log(response.results); }) .catch((error) => { console.error(error); @@ -279,7 +445,7 @@ The Mistral AI interface allows you to send messages to the Mistral AI API. #### Example ```javascript -const mistral = new LLMInterface.mistral(process.env.GROQ_API_KEY); +const mistralai = new LLMInterface.mistralai(process.env.MISTRALAI_API_KEY); const message = { model: 'llama3-8b-8192', @@ -289,10 +455,10 @@ const message = { ], }; -mistral +mistralai .sendMessage(message, { max_tokens: 100 }) .then((response) => { - console.log(response); + console.log(response.results); }) .catch((error) => { console.error(error); @@ -306,7 +472,7 @@ The Perplexity interface allows you to send messages to the Perplexity API. 
#### Example ```javascript -const perplexity = new LLMInterface.perplexity(process.env.ANTHROPIC_API_KEY); +const perplexity = new LLMInterface.perplexity(process.env.PERPLEXITY_API_KEY); const message = { model: 'claude-3-opus-20240229', @@ -324,7 +490,7 @@ const message = { perplexity .sendMessage(message, { max_tokens: 150 }) .then((response) => { - console.log(response); + console.log(response.results); }) .catch((error) => { console.error(error); @@ -338,7 +504,7 @@ The Reka AI interface allows you to send messages to the Reka AI REST API. #### Example ```javascript -const reka = new LLMInterface.reka(process.env.REKA_API_KEY); +const rekaai = new LLMInterface.rekaai(process.env.REKAAI_API_KEY); const message = { model: 'reka-core', @@ -353,7 +519,7 @@ const message = { ], }; -reka +rekaai .sendMessage(message, {}) .then((response) => console.log('Response:', response)) .catch((error) => console.error('Error:', error)); @@ -378,32 +544,507 @@ const message = { llamacpp .sendMessage(message, { max_tokens: 100 }) .then((response) => { - console.log(response); + console.log(response.results); }) .catch((error) => { console.error(error); }); ``` -## Simple Usage Example +## Simple Usage Examples The following example demonstrates simplified use of `llm-interface`. ### OpenAI Interface (String Based Prompt) -This simplified example uses a string based prompt with the default OpenAI model (gpt-3.5-turbo). +This simplified example uses a string based prompt with the default model. #### Example ```javascript -const openai = new LLMInterface.openai(process.env.OPENAI_API_KEY); +LLMInterfaceSendMessage( + 'openai', + openAiApikey, + 'Explain the importance of low latency LLMs.', +) + .then((response) => { + console.log(response.results); + }) + .catch((error) => { + console.error(error); + }); +``` -const message = 'Explain the importance of low latency LLMs.'; +or + +```javascript +const openai = new LLMInterface.openai(process.env.OPENAI_API_KEY); openai - .sendMessage(message) + .sendMessage('Explain the importance of low latency LLMs.') + .then((response) => { + console.log(response.results); + }) + .catch((error) => { + console.error(error); + }); +``` + +### AI21 Interface (String Based Prompt) + +This simplified example uses a string based prompt with the default model. + +#### Example + +```javascript +LLMInterfaceSendMessage( + 'ai21', + process.env.AI21_API_KEY, + 'Explain the importance of low latency LLMs.', +) + .then((response) => { + console.log(response.results); + }) + .catch((error) => { + console.error(error); + }); +``` + +or + +```javascript +const ai21 = new LLMInterface.ai21(process.env.AI21_API_KEY); + +ai21 + .sendMessage('Explain the importance of low latency LLMs.') + .then((response) => { + console.log(response.results); + }) + .catch((error) => { + console.error(error); + }); +``` + +### Anthropic Interface (String Based Prompt) + +This simplified example uses a string based prompt with the default model. 
+
+#### Example
+
+```javascript
+LLMInterfaceSendMessage(
+  'anthropic',
+  process.env.ANTHROPIC_API_KEY,
+  'Explain the importance of low latency LLMs.',
+)
+  .then((response) => {
+    console.log(response.results);
+  })
+  .catch((error) => {
+    console.error(error);
+  });
+```
+
+or
+
+```javascript
+const anthropic = new LLMInterface.anthropic(process.env.ANTHROPIC_API_KEY);
+
+anthropic
+  .sendMessage('Explain the importance of low latency LLMs.')
+  .then((response) => {
+    console.log(response.results);
+  })
+  .catch((error) => {
+    console.error(error);
+  });
+```
+
+### Cloudflare AI Interface (String Based Prompt)
+
+This simplified example uses a string based prompt with the default model.
+
+#### Example
+
+```javascript
+LLMInterfaceSendMessage(
+  'cloudflareai',
+  [process.env.CLOUDFLARE_API_KEY, process.env.CLOUDFLARE_ACCOUNT_ID],
+  'Explain the importance of low latency LLMs.',
+)
+  .then((response) => {
+    console.log(response.results);
+  })
+  .catch((error) => {
+    console.error(error);
+  });
+```
+
+or
+
+```javascript
+const cloudflareai = new LLMInterface.cloudflareai(
+  process.env.CLOUDFLARE_API_KEY,
+  process.env.CLOUDFLARE_ACCOUNT_ID,
+);
+
+cloudflareai
+  .sendMessage('Explain the importance of low latency LLMs.')
+  .then((response) => {
+    console.log(response.results);
+  })
+  .catch((error) => {
+    console.error(error);
+  });
+```
+
+### Cohere Interface (String Based Prompt)
+
+This simplified example uses a string based prompt with the default model.
+
+#### Example
+
+```javascript
+LLMInterfaceSendMessage(
+  'cohere',
+  process.env.COHERE_API_KEY,
+  'Explain the importance of low latency LLMs.',
+)
+  .then((response) => {
+    console.log(response.results);
+  })
+  .catch((error) => {
+    console.error(error);
+  });
+```
+
+or
+
+```javascript
+const cohere = new LLMInterface.cohere(process.env.COHERE_API_KEY);
+
+cohere
+  .sendMessage('Explain the importance of low latency LLMs.')
+  .then((response) => {
+    console.log(response.results);
+  })
+  .catch((error) => {
+    console.error(error);
+  });
+```
+
+### Fireworks AI Interface (String Based Prompt)
+
+This simplified example uses a string based prompt with the default model.
+
+#### Example
+
+```javascript
+LLMInterfaceSendMessage(
+  'fireworksai',
+  process.env.FIREWORKSAI_API_KEY,
+  'Explain the importance of low latency LLMs.',
+)
+  .then((response) => {
+    console.log(response.results);
+  })
+  .catch((error) => {
+    console.error(error);
+  });
+```
+
+or
+
+```javascript
+const fireworksai = new LLMInterface.fireworksai(
+  process.env.FIREWORKSAI_API_KEY,
+);
+
+fireworksai
+  .sendMessage('Explain the importance of low latency LLMs.')
+  .then((response) => {
+    console.log(response.results);
+  })
+  .catch((error) => {
+    console.error(error);
+  });
+```
+
+### Gemini Interface (String Based Prompt)
+
+This simplified example uses a string based prompt with the default model.
+
+#### Example
+
+```javascript
+LLMInterfaceSendMessage(
+  'gemini',
+  process.env.GEMINI_API_KEY,
+  'Explain the importance of low latency LLMs.',
+)
+  .then((response) => {
+    console.log(response.results);
+  })
+  .catch((error) => {
+    console.error(error);
+  });
+```
+
+or
+
+```javascript
+const gemini = new LLMInterface.gemini(process.env.GEMINI_API_KEY);
+
+gemini
+  .sendMessage('Explain the importance of low latency LLMs.')
+  .then((response) => {
+    console.log(response.results);
+  })
+  .catch((error) => {
+    console.error(error);
+  });
+```
+
+### Goose AI Interface (String Based Prompt)
+
+This simplified example uses a string based prompt with the default model.
+
+#### Example
+
+```javascript
+LLMInterfaceSendMessage(
+  'gooseai',
+  process.env.GOOSEAI_API_KEY,
+  'Explain the importance of low latency LLMs.',
+)
+  .then((response) => {
+    console.log(response.results);
+  })
+  .catch((error) => {
+    console.error(error);
+  });
+```
+
+or
+
+```javascript
+const goose = new LLMInterface.gooseai(process.env.GOOSEAI_API_KEY);
+
+goose
+  .sendMessage('Explain the importance of low latency LLMs.')
+  .then((response) => {
+    console.log(response.results);
+  })
+  .catch((error) => {
+    console.error(error);
+  });
+```
+
+### Groq Interface (String Based Prompt)
+
+This simplified example uses a string based prompt with the default model.
+
+#### Example
+
+```javascript
+LLMInterfaceSendMessage(
+  'groq',
+  process.env.GROQ_API_KEY,
+  'Explain the importance of low latency LLMs.',
+)
+  .then((response) => {
+    console.log(response.results);
+  })
+  .catch((error) => {
+    console.error(error);
+  });
+```
+
+or
+
+```javascript
+const groq = new LLMInterface.groq(process.env.GROQ_API_KEY);
+
+groq
+  .sendMessage('Explain the importance of low latency LLMs.')
+  .then((response) => {
+    console.log(response.results);
+  })
+  .catch((error) => {
+    console.error(error);
+  });
+```
+
+### Hugging Face Interface (String Based Prompt)
+
+This simplified example uses a string based prompt with the default model.
+
+#### Example
+
+```javascript
+LLMInterfaceSendMessage(
+  'huggingface',
+  process.env.HUGGINGFACE_API_KEY,
+  'Explain the importance of low latency LLMs.',
+)
+  .then((response) => {
+    console.log(response.results);
+  })
+  .catch((error) => {
+    console.error(error);
+  });
+```
+
+or
+
+```javascript
+const huggingface = new LLMInterface.huggingface(
+  process.env.HUGGINGFACE_API_KEY,
+);
+
+huggingface
+  .sendMessage('Explain the importance of low latency LLMs.')
+  .then((response) => {
+    console.log(response.results);
+  })
+  .catch((error) => {
+    console.error(error);
+  });
+```
+
+### Mistral AI Interface (String Based Prompt)
+
+This simplified example uses a string based prompt with the default model.
+
+#### Example
+
+```javascript
+LLMInterfaceSendMessage(
+  'mistralai',
+  process.env.MISTRALAI_API_KEY,
+  'Explain the importance of low latency LLMs.',
+)
+  .then((response) => {
+    console.log(response.results);
+  })
+  .catch((error) => {
+    console.error(error);
+  });
+```
+
+or
+
+```javascript
+const mistralai = new LLMInterface.mistralai(process.env.MISTRALAI_API_KEY);
+
+mistralai
+  .sendMessage('Explain the importance of low latency LLMs.')
+  .then((response) => {
+    console.log(response.results);
+  })
+  .catch((error) => {
+    console.error(error);
+  });
+```
+
+### Perplexity Interface (String Based Prompt)
+
+This simplified example uses a string based prompt with the default model.
+
+#### Example
+
+```javascript
+LLMInterfaceSendMessage(
+  'perplexity',
+  process.env.PERPLEXITY_API_KEY,
+  'Explain the importance of low latency LLMs.',
+)
+  .then((response) => {
+    console.log(response.results);
+  })
+  .catch((error) => {
+    console.error(error);
+  });
+```
+
+or
+
+```javascript
+const perplexity = new LLMInterface.perplexity(process.env.PERPLEXITY_API_KEY);
+
+perplexity
+  .sendMessage('Explain the importance of low latency LLMs.')
+  .then((response) => {
+    console.log(response.results);
+  })
+  .catch((error) => {
+    console.error(error);
+  });
+```
+
+### Reka AI Interface (String Based Prompt)
+
+This simplified example uses a string based prompt with the default model.
+
+#### Example
+
+```javascript
+LLMInterfaceSendMessage(
+  'rekaai',
+  process.env.REKAAI_API_KEY,
+  'Explain the importance of low latency LLMs.',
+)
+  .then((response) => {
+    console.log(response.results);
+  })
+  .catch((error) => {
+    console.error(error);
+  });
+```
+
+or
+
+```javascript
+const reka = new LLMInterface.rekaai(process.env.REKAAI_API_KEY);
+
+reka
+  .sendMessage('Explain the importance of low latency LLMs.')
+  .then((response) => {
+    console.log(response.results);
+  })
+  .catch((error) => {
+    console.error(error);
+  });
+```
+
+### LLaMA.cpp Interface (String Based Prompt)
+
+This simplified example uses a string based prompt. The model is set at the LLaMA.cpp web server level.
+
+#### Example
+
+```javascript
+LLMInterfaceSendMessage(
+  'llamacpp',
+  process.env.LLAMACPP_URL,
+  'Explain the importance of low latency LLMs.',
+)
+  .then((response) => {
+    console.log(response.results);
+  })
+  .catch((error) => {
+    console.error(error);
+  });
+```
+
+or
+
+```javascript
+const llamacpp = new LLMInterface.llamacpp(process.env.LLAMACPP_URL);
+
+llamacpp
+  .sendMessage('Explain the importance of low latency LLMs.')
+  .then((response) => {
+    console.log(response.results);
  })
  .catch((error) => {
    console.error(error);
@@ -412,11 +1053,13 @@ openai

## Advanced Usage Examples

-The following examples highlight some of the advanced features of `llm-interface`.
+The following examples highlight some of the advanced features of `llm-interface`. Keep in mind you can mix and match _interfaceOptions_. The following are currently supported: `attemptJsonRepair` (default: false), `cacheTimeoutSeconds` (default: 0), `retryAttempts` (default: 1), and `retryMultiplier` (default: 0.3).
+
+To maximize performance, `llm-interface` only loads the dependencies these features require when they are invoked through `interfaceOptions`.

-### OpenAI Interface (JSON Output)
+### OpenAI Interface (Native JSON Output)

-Some interfaces allows you request the response back in JSON, currently **OpenAI** and **Gemini** are supported. To take advantage of this feature be sure to include text like "Return the results as a JSON object." and provide a desired output format like "Follow this format: [{reason, reasonDescription}]." In this example we use OpenAI and request a valid JSON object.
+Some interfaces allow you to request the response back in JSON; currently **OpenAI**, **Fireworks AI**, and **Gemini** are supported. To take advantage of this feature be sure to include text like "Return the results as a JSON object." and provide a desired output format like "Follow this format: [{reason, reasonDescription}]." _It's important to provide a large enough `max_tokens` size to hold the entire returned JSON structure, or it will not validate and the response will return null._ In this example we use OpenAI and request a valid JSON object.

#### Example

@@ -433,7 +1076,7 @@ const message = {
    {
      role: 'user',
      content:
-        'Explain the importance of low latency LLMs. Return the results as a JSON object. Follow this format: [{reason, reasonDescription}].',
+        'Explain the importance of low latency LLMs. Limit the result to two items. Return the results as a JSON object. 
Follow this format: [{reason, reasonDescription}].',
    },
  ],
};

@@ -441,7 +1084,76 @@ const message = {
openai
  .sendMessage(message, { max_tokens: 150, response_format: 'json_object' })
  .then((response) => {
-    console.log(response);
+    console.log(JSON.stringify(response.results));
  })
  .catch((error) => {
    console.error(error);
  });
```
+
+### OpenAI Interface (Native JSON Output with Repair)
+
+When working with JSON, you may encounter invalid JSON responses. Instead of retrying your prompt, you can have `llm-interface` detect the condition and attempt to repair the object.
+
+#### Example
+
+```javascript
+const openai = new LLMInterface.openai(process.env.OPENAI_API_KEY);
+
+const message = {
+  model: 'gpt-3.5-turbo',
+  messages: [
+    {
+      role: 'system',
+      content: 'You are a helpful assistant.',
+    },
+    {
+      role: 'user',
+      content:
+        'Explain the importance of low latency LLMs. Limit the result to two items. Return the results as a JSON object. Follow this format: [{reason, reasonDescription}].',
+    },
+  ],
+};
+
+openai
+  .sendMessage(
+    message,
+    { max_tokens: 150, response_format: 'json_object' },
+    { attemptJsonRepair: true },
+  )
+  .then((response) => {
+    console.log(JSON.stringify(response.results));
+  })
+  .catch((error) => {
+    console.error(error);
+  });
+```
+
+### Groq Interface (JSON Output with Repair)
+
+When using LLMs without a native JSON `response_format`, you may encounter a badly formed JSON response. Again, instead of retrying your prompt, you can have `llm-interface` detect the condition and attempt to repair the object.
+
+#### Example
+
+```javascript
+const groq = new LLMInterface.groq(process.env.GROQ_API_KEY);
+
+const message = {
+  model: 'llama3-8b-8192',
+  messages: [
+    { role: 'system', content: 'You are a helpful assistant.' },
+    {
+      role: 'user',
+      content:
+        'Explain the importance of low latency LLMs. Return the results as a JSON object. Follow this format: [{reason, reasonDescription}]. 
Only return the JSON element, nothing else.', + }, + ], +}; + +groq + .sendMessage(message, { max_tokens: 150 }, { attemptJsonRepair: true }) + .then((response) => { + console.log(response.results); }) .catch((error) => { console.error(error); @@ -468,7 +1180,7 @@ const message = { openai .sendMessage(message, { max_tokens: 150 }, { cacheTimeoutSeconds: 86400 }) .then((response) => { - console.log(response); + console.log(response.results); }) .catch((error) => { console.error(error); @@ -495,7 +1207,7 @@ const message = { openai .sendMessage(message, { max_tokens: 150 }, { retryAttempts: 3 }) .then((response) => { - console.log(response); + console.log(response.results); }) .catch((error) => { console.error(error); diff --git a/env b/env index 9d53e80..a6bd3d7 100644 --- a/env +++ b/env @@ -2,11 +2,13 @@ OPENAI_API_KEY= GROQ_API_KEY= GEMINI_API_KEY= ANTHROPIC_API_KEY= -REKA_API_KEY= -GOOSE_API_KEY= -MISTRAL_API_KEY= +REKAAI_API_KEY= +GOOSEAI_API_KEY= +MISTRALAI_API_KEY= HUGGINGFACE_API_KEY= PERPLEXITY_API_KEY= AI21_API_KEY= -AZUREAI_API_KEY= +FIREWORKSAI_API_KEY= +CLOUDFLARE_API_KEY= +CLOUDFLARE_ACCOUNT_ID= LLAMACPP_URL=http://localhost:8080/completions \ No newline at end of file diff --git a/jest.config.js b/jest.config.js index c080bfd..f2c8496 100644 --- a/jest.config.js +++ b/jest.config.js @@ -4,6 +4,9 @@ */ module.exports = { + transform: { + '^.+\\.js$': 'babel-jest', + }, testTimeout: 30000, // Set global timeout to 30 seconds - snapshotSerializers: ['/test/utils/jestSerializer.js'], + snapshotSerializers: ['/src/utils/jestSerializer.js'], }; diff --git a/jest.setup.js b/jest.setup.js new file mode 100644 index 0000000..96654bf --- /dev/null +++ b/jest.setup.js @@ -0,0 +1,2 @@ +// jest.setup.js +require = require('esm')(module /*, options*/); diff --git a/package-lock.json b/package-lock.json index 9a0aeee..c9f8c15 100644 --- a/package-lock.json +++ b/package-lock.json @@ -15,11 +15,16 @@ "dotenv": "^16.4.5", "flat-cache": "^5.0.0", "groq-sdk": "^0.5.0", + "jsonrepair": "^3.8.0", "loglevel": "^1.9.1", "openai": "^4.52.0" }, "devDependencies": { + "@babel/core": "^7.24.7", + "@babel/plugin-syntax-dynamic-import": "^7.8.3", + "@babel/preset-env": "^7.24.7", "@eslint/js": "^9.5.0", + "babel-jest": "^29.7.0", "eslint": "^9.5.0", "globals": "^15.6.0", "jest": "^29.7.0", @@ -126,6 +131,33 @@ "node": ">=6.9.0" } }, + "node_modules/@babel/helper-annotate-as-pure": { + "version": "7.24.7", + "resolved": "https://registry.npmjs.org/@babel/helper-annotate-as-pure/-/helper-annotate-as-pure-7.24.7.tgz", + "integrity": "sha512-BaDeOonYvhdKw+JoMVkAixAAJzG2jVPIwWoKBPdYuY9b452e2rPuI9QPYh3KpofZ3pW2akOmwZLOiOsHMiqRAg==", + "dev": true, + "license": "MIT", + "dependencies": { + "@babel/types": "^7.24.7" + }, + "engines": { + "node": ">=6.9.0" + } + }, + "node_modules/@babel/helper-builder-binary-assignment-operator-visitor": { + "version": "7.24.7", + "resolved": "https://registry.npmjs.org/@babel/helper-builder-binary-assignment-operator-visitor/-/helper-builder-binary-assignment-operator-visitor-7.24.7.tgz", + "integrity": "sha512-xZeCVVdwb4MsDBkkyZ64tReWYrLRHlMN72vP7Bdm3OUOuyFZExhsHUUnuWnm2/XOlAJzR0LfPpB56WXZn0X/lA==", + "dev": true, + "license": "MIT", + "dependencies": { + "@babel/traverse": "^7.24.7", + "@babel/types": "^7.24.7" + }, + "engines": { + "node": ">=6.9.0" + } + }, "node_modules/@babel/helper-compilation-targets": { "version": "7.24.7", "resolved": "https://registry.npmjs.org/@babel/helper-compilation-targets/-/helper-compilation-targets-7.24.7.tgz", @@ -143,6 +175,65 @@ "node": 
">=6.9.0" } }, + "node_modules/@babel/helper-create-class-features-plugin": { + "version": "7.24.7", + "resolved": "https://registry.npmjs.org/@babel/helper-create-class-features-plugin/-/helper-create-class-features-plugin-7.24.7.tgz", + "integrity": "sha512-kTkaDl7c9vO80zeX1rJxnuRpEsD5tA81yh11X1gQo+PhSti3JS+7qeZo9U4RHobKRiFPKaGK3svUAeb8D0Q7eg==", + "dev": true, + "license": "MIT", + "dependencies": { + "@babel/helper-annotate-as-pure": "^7.24.7", + "@babel/helper-environment-visitor": "^7.24.7", + "@babel/helper-function-name": "^7.24.7", + "@babel/helper-member-expression-to-functions": "^7.24.7", + "@babel/helper-optimise-call-expression": "^7.24.7", + "@babel/helper-replace-supers": "^7.24.7", + "@babel/helper-skip-transparent-expression-wrappers": "^7.24.7", + "@babel/helper-split-export-declaration": "^7.24.7", + "semver": "^6.3.1" + }, + "engines": { + "node": ">=6.9.0" + }, + "peerDependencies": { + "@babel/core": "^7.0.0" + } + }, + "node_modules/@babel/helper-create-regexp-features-plugin": { + "version": "7.24.7", + "resolved": "https://registry.npmjs.org/@babel/helper-create-regexp-features-plugin/-/helper-create-regexp-features-plugin-7.24.7.tgz", + "integrity": "sha512-03TCmXy2FtXJEZfbXDTSqq1fRJArk7lX9DOFC/47VthYcxyIOx+eXQmdo6DOQvrbpIix+KfXwvuXdFDZHxt+rA==", + "dev": true, + "license": "MIT", + "dependencies": { + "@babel/helper-annotate-as-pure": "^7.24.7", + "regexpu-core": "^5.3.1", + "semver": "^6.3.1" + }, + "engines": { + "node": ">=6.9.0" + }, + "peerDependencies": { + "@babel/core": "^7.0.0" + } + }, + "node_modules/@babel/helper-define-polyfill-provider": { + "version": "0.6.2", + "resolved": "https://registry.npmjs.org/@babel/helper-define-polyfill-provider/-/helper-define-polyfill-provider-0.6.2.tgz", + "integrity": "sha512-LV76g+C502biUK6AyZ3LK10vDpDyCzZnhZFXkH1L75zHPj68+qc8Zfpx2th+gzwA2MzyK+1g/3EPl62yFnVttQ==", + "dev": true, + "license": "MIT", + "dependencies": { + "@babel/helper-compilation-targets": "^7.22.6", + "@babel/helper-plugin-utils": "^7.22.5", + "debug": "^4.1.1", + "lodash.debounce": "^4.0.8", + "resolve": "^1.14.2" + }, + "peerDependencies": { + "@babel/core": "^7.4.0 || ^8.0.0-0 <8.0.0" + } + }, "node_modules/@babel/helper-environment-visitor": { "version": "7.24.7", "resolved": "https://registry.npmjs.org/@babel/helper-environment-visitor/-/helper-environment-visitor-7.24.7.tgz", @@ -183,6 +274,20 @@ "node": ">=6.9.0" } }, + "node_modules/@babel/helper-member-expression-to-functions": { + "version": "7.24.7", + "resolved": "https://registry.npmjs.org/@babel/helper-member-expression-to-functions/-/helper-member-expression-to-functions-7.24.7.tgz", + "integrity": "sha512-LGeMaf5JN4hAT471eJdBs/GK1DoYIJ5GCtZN/EsL6KUiiDZOvO/eKE11AMZJa2zP4zk4qe9V2O/hxAmkRc8p6w==", + "dev": true, + "license": "MIT", + "dependencies": { + "@babel/traverse": "^7.24.7", + "@babel/types": "^7.24.7" + }, + "engines": { + "node": ">=6.9.0" + } + }, "node_modules/@babel/helper-module-imports": { "version": "7.24.7", "resolved": "https://registry.npmjs.org/@babel/helper-module-imports/-/helper-module-imports-7.24.7.tgz", @@ -217,6 +322,19 @@ "@babel/core": "^7.0.0" } }, + "node_modules/@babel/helper-optimise-call-expression": { + "version": "7.24.7", + "resolved": "https://registry.npmjs.org/@babel/helper-optimise-call-expression/-/helper-optimise-call-expression-7.24.7.tgz", + "integrity": "sha512-jKiTsW2xmWwxT1ixIdfXUZp+P5yURx2suzLZr5Hi64rURpDYdMW0pv+Uf17EYk2Rd428Lx4tLsnjGJzYKDM/6A==", + "dev": true, + "license": "MIT", + "dependencies": { + "@babel/types": "^7.24.7" + }, + 
"engines": { + "node": ">=6.9.0" + } + }, "node_modules/@babel/helper-plugin-utils": { "version": "7.24.7", "resolved": "https://registry.npmjs.org/@babel/helper-plugin-utils/-/helper-plugin-utils-7.24.7.tgz", @@ -227,6 +345,42 @@ "node": ">=6.9.0" } }, + "node_modules/@babel/helper-remap-async-to-generator": { + "version": "7.24.7", + "resolved": "https://registry.npmjs.org/@babel/helper-remap-async-to-generator/-/helper-remap-async-to-generator-7.24.7.tgz", + "integrity": "sha512-9pKLcTlZ92hNZMQfGCHImUpDOlAgkkpqalWEeftW5FBya75k8Li2ilerxkM/uBEj01iBZXcCIB/bwvDYgWyibA==", + "dev": true, + "license": "MIT", + "dependencies": { + "@babel/helper-annotate-as-pure": "^7.24.7", + "@babel/helper-environment-visitor": "^7.24.7", + "@babel/helper-wrap-function": "^7.24.7" + }, + "engines": { + "node": ">=6.9.0" + }, + "peerDependencies": { + "@babel/core": "^7.0.0" + } + }, + "node_modules/@babel/helper-replace-supers": { + "version": "7.24.7", + "resolved": "https://registry.npmjs.org/@babel/helper-replace-supers/-/helper-replace-supers-7.24.7.tgz", + "integrity": "sha512-qTAxxBM81VEyoAY0TtLrx1oAEJc09ZK67Q9ljQToqCnA+55eNwCORaxlKyu+rNfX86o8OXRUSNUnrtsAZXM9sg==", + "dev": true, + "license": "MIT", + "dependencies": { + "@babel/helper-environment-visitor": "^7.24.7", + "@babel/helper-member-expression-to-functions": "^7.24.7", + "@babel/helper-optimise-call-expression": "^7.24.7" + }, + "engines": { + "node": ">=6.9.0" + }, + "peerDependencies": { + "@babel/core": "^7.0.0" + } + }, "node_modules/@babel/helper-simple-access": { "version": "7.24.7", "resolved": "https://registry.npmjs.org/@babel/helper-simple-access/-/helper-simple-access-7.24.7.tgz", @@ -241,6 +395,20 @@ "node": ">=6.9.0" } }, + "node_modules/@babel/helper-skip-transparent-expression-wrappers": { + "version": "7.24.7", + "resolved": "https://registry.npmjs.org/@babel/helper-skip-transparent-expression-wrappers/-/helper-skip-transparent-expression-wrappers-7.24.7.tgz", + "integrity": "sha512-IO+DLT3LQUElMbpzlatRASEyQtfhSE0+m465v++3jyyXeBTBUjtVZg28/gHeV5mrTJqvEKhKroBGAvhW+qPHiQ==", + "dev": true, + "license": "MIT", + "dependencies": { + "@babel/traverse": "^7.24.7", + "@babel/types": "^7.24.7" + }, + "engines": { + "node": ">=6.9.0" + } + }, "node_modules/@babel/helper-split-export-declaration": { "version": "7.24.7", "resolved": "https://registry.npmjs.org/@babel/helper-split-export-declaration/-/helper-split-export-declaration-7.24.7.tgz", @@ -284,6 +452,22 @@ "node": ">=6.9.0" } }, + "node_modules/@babel/helper-wrap-function": { + "version": "7.24.7", + "resolved": "https://registry.npmjs.org/@babel/helper-wrap-function/-/helper-wrap-function-7.24.7.tgz", + "integrity": "sha512-N9JIYk3TD+1vq/wn77YnJOqMtfWhNewNE+DJV4puD2X7Ew9J4JvrzrFDfTfyv5EgEXVy9/Wt8QiOErzEmv5Ifw==", + "dev": true, + "license": "MIT", + "dependencies": { + "@babel/helper-function-name": "^7.24.7", + "@babel/template": "^7.24.7", + "@babel/traverse": "^7.24.7", + "@babel/types": "^7.24.7" + }, + "engines": { + "node": ">=6.9.0" + } + }, "node_modules/@babel/helpers": { "version": "7.24.7", "resolved": "https://registry.npmjs.org/@babel/helpers/-/helpers-7.24.7.tgz", @@ -405,75 +589,1103 @@ "node": ">=6.0.0" } }, - "node_modules/@babel/plugin-syntax-async-generators": { - "version": "7.8.4", - "resolved": "https://registry.npmjs.org/@babel/plugin-syntax-async-generators/-/plugin-syntax-async-generators-7.8.4.tgz", - "integrity": "sha512-tycmZxkGfZaxhMRbXlPXuVFpdWlXpir2W4AMhSJgRKzk/eDlIXOhb2LHWoLpDF7TEHylV5zNhykX6KAgHJmTNw==", + 
"node_modules/@babel/plugin-bugfix-firefox-class-in-computed-class-key": { + "version": "7.24.7", + "resolved": "https://registry.npmjs.org/@babel/plugin-bugfix-firefox-class-in-computed-class-key/-/plugin-bugfix-firefox-class-in-computed-class-key-7.24.7.tgz", + "integrity": "sha512-TiT1ss81W80eQsN+722OaeQMY/G4yTb4G9JrqeiDADs3N8lbPMGldWi9x8tyqCW5NLx1Jh2AvkE6r6QvEltMMQ==", + "dev": true, + "license": "MIT", + "dependencies": { + "@babel/helper-environment-visitor": "^7.24.7", + "@babel/helper-plugin-utils": "^7.24.7" + }, + "engines": { + "node": ">=6.9.0" + }, + "peerDependencies": { + "@babel/core": "^7.0.0" + } + }, + "node_modules/@babel/plugin-bugfix-safari-id-destructuring-collision-in-function-expression": { + "version": "7.24.7", + "resolved": "https://registry.npmjs.org/@babel/plugin-bugfix-safari-id-destructuring-collision-in-function-expression/-/plugin-bugfix-safari-id-destructuring-collision-in-function-expression-7.24.7.tgz", + "integrity": "sha512-unaQgZ/iRu/By6tsjMZzpeBZjChYfLYry6HrEXPoz3KmfF0sVBQ1l8zKMQ4xRGLWVsjuvB8nQfjNP/DcfEOCsg==", + "dev": true, + "license": "MIT", + "dependencies": { + "@babel/helper-plugin-utils": "^7.24.7" + }, + "engines": { + "node": ">=6.9.0" + }, + "peerDependencies": { + "@babel/core": "^7.0.0" + } + }, + "node_modules/@babel/plugin-bugfix-v8-spread-parameters-in-optional-chaining": { + "version": "7.24.7", + "resolved": "https://registry.npmjs.org/@babel/plugin-bugfix-v8-spread-parameters-in-optional-chaining/-/plugin-bugfix-v8-spread-parameters-in-optional-chaining-7.24.7.tgz", + "integrity": "sha512-+izXIbke1T33mY4MSNnrqhPXDz01WYhEf3yF5NbnUtkiNnm+XBZJl3kNfoK6NKmYlz/D07+l2GWVK/QfDkNCuQ==", + "dev": true, + "license": "MIT", + "dependencies": { + "@babel/helper-plugin-utils": "^7.24.7", + "@babel/helper-skip-transparent-expression-wrappers": "^7.24.7", + "@babel/plugin-transform-optional-chaining": "^7.24.7" + }, + "engines": { + "node": ">=6.9.0" + }, + "peerDependencies": { + "@babel/core": "^7.13.0" + } + }, + "node_modules/@babel/plugin-bugfix-v8-static-class-fields-redefine-readonly": { + "version": "7.24.7", + "resolved": "https://registry.npmjs.org/@babel/plugin-bugfix-v8-static-class-fields-redefine-readonly/-/plugin-bugfix-v8-static-class-fields-redefine-readonly-7.24.7.tgz", + "integrity": "sha512-utA4HuR6F4Vvcr+o4DnjL8fCOlgRFGbeeBEGNg3ZTrLFw6VWG5XmUrvcQ0FjIYMU2ST4XcR2Wsp7t9qOAPnxMg==", + "dev": true, + "license": "MIT", + "dependencies": { + "@babel/helper-environment-visitor": "^7.24.7", + "@babel/helper-plugin-utils": "^7.24.7" + }, + "engines": { + "node": ">=6.9.0" + }, + "peerDependencies": { + "@babel/core": "^7.0.0" + } + }, + "node_modules/@babel/plugin-proposal-private-property-in-object": { + "version": "7.21.0-placeholder-for-preset-env.2", + "resolved": "https://registry.npmjs.org/@babel/plugin-proposal-private-property-in-object/-/plugin-proposal-private-property-in-object-7.21.0-placeholder-for-preset-env.2.tgz", + "integrity": "sha512-SOSkfJDddaM7mak6cPEpswyTRnuRltl429hMraQEglW+OkovnCzsiszTmsrlY//qLFjCpQDFRvjdm2wA5pPm9w==", + "dev": true, + "license": "MIT", + "engines": { + "node": ">=6.9.0" + }, + "peerDependencies": { + "@babel/core": "^7.0.0-0" + } + }, + "node_modules/@babel/plugin-syntax-async-generators": { + "version": "7.8.4", + "resolved": "https://registry.npmjs.org/@babel/plugin-syntax-async-generators/-/plugin-syntax-async-generators-7.8.4.tgz", + "integrity": "sha512-tycmZxkGfZaxhMRbXlPXuVFpdWlXpir2W4AMhSJgRKzk/eDlIXOhb2LHWoLpDF7TEHylV5zNhykX6KAgHJmTNw==", + "dev": true, + "license": "MIT", + 
"dependencies": { + "@babel/helper-plugin-utils": "^7.8.0" + }, + "peerDependencies": { + "@babel/core": "^7.0.0-0" + } + }, + "node_modules/@babel/plugin-syntax-bigint": { + "version": "7.8.3", + "resolved": "https://registry.npmjs.org/@babel/plugin-syntax-bigint/-/plugin-syntax-bigint-7.8.3.tgz", + "integrity": "sha512-wnTnFlG+YxQm3vDxpGE57Pj0srRU4sHE/mDkt1qv2YJJSeUAec2ma4WLUnUPeKjyrfntVwe/N6dCXpU+zL3Npg==", + "dev": true, + "license": "MIT", + "dependencies": { + "@babel/helper-plugin-utils": "^7.8.0" + }, + "peerDependencies": { + "@babel/core": "^7.0.0-0" + } + }, + "node_modules/@babel/plugin-syntax-class-properties": { + "version": "7.12.13", + "resolved": "https://registry.npmjs.org/@babel/plugin-syntax-class-properties/-/plugin-syntax-class-properties-7.12.13.tgz", + "integrity": "sha512-fm4idjKla0YahUNgFNLCB0qySdsoPiZP3iQE3rky0mBUtMZ23yDJ9SJdg6dXTSDnulOVqiF3Hgr9nbXvXTQZYA==", + "dev": true, + "license": "MIT", + "dependencies": { + "@babel/helper-plugin-utils": "^7.12.13" + }, + "peerDependencies": { + "@babel/core": "^7.0.0-0" + } + }, + "node_modules/@babel/plugin-syntax-class-static-block": { + "version": "7.14.5", + "resolved": "https://registry.npmjs.org/@babel/plugin-syntax-class-static-block/-/plugin-syntax-class-static-block-7.14.5.tgz", + "integrity": "sha512-b+YyPmr6ldyNnM6sqYeMWE+bgJcJpO6yS4QD7ymxgH34GBPNDM/THBh8iunyvKIZztiwLH4CJZ0RxTk9emgpjw==", + "dev": true, + "license": "MIT", + "dependencies": { + "@babel/helper-plugin-utils": "^7.14.5" + }, + "engines": { + "node": ">=6.9.0" + }, + "peerDependencies": { + "@babel/core": "^7.0.0-0" + } + }, + "node_modules/@babel/plugin-syntax-dynamic-import": { + "version": "7.8.3", + "resolved": "https://registry.npmjs.org/@babel/plugin-syntax-dynamic-import/-/plugin-syntax-dynamic-import-7.8.3.tgz", + "integrity": "sha512-5gdGbFon+PszYzqs83S3E5mpi7/y/8M9eC90MRTZfduQOYW76ig6SOSPNe41IG5LoP3FGBn2N0RjVDSQiS94kQ==", + "dev": true, + "license": "MIT", + "dependencies": { + "@babel/helper-plugin-utils": "^7.8.0" + }, + "peerDependencies": { + "@babel/core": "^7.0.0-0" + } + }, + "node_modules/@babel/plugin-syntax-export-namespace-from": { + "version": "7.8.3", + "resolved": "https://registry.npmjs.org/@babel/plugin-syntax-export-namespace-from/-/plugin-syntax-export-namespace-from-7.8.3.tgz", + "integrity": "sha512-MXf5laXo6c1IbEbegDmzGPwGNTsHZmEy6QGznu5Sh2UCWvueywb2ee+CCE4zQiZstxU9BMoQO9i6zUFSY0Kj0Q==", + "dev": true, + "license": "MIT", + "dependencies": { + "@babel/helper-plugin-utils": "^7.8.3" + }, + "peerDependencies": { + "@babel/core": "^7.0.0-0" + } + }, + "node_modules/@babel/plugin-syntax-import-assertions": { + "version": "7.24.7", + "resolved": "https://registry.npmjs.org/@babel/plugin-syntax-import-assertions/-/plugin-syntax-import-assertions-7.24.7.tgz", + "integrity": "sha512-Ec3NRUMoi8gskrkBe3fNmEQfxDvY8bgfQpz6jlk/41kX9eUjvpyqWU7PBP/pLAvMaSQjbMNKJmvX57jP+M6bPg==", + "dev": true, + "license": "MIT", + "dependencies": { + "@babel/helper-plugin-utils": "^7.24.7" + }, + "engines": { + "node": ">=6.9.0" + }, + "peerDependencies": { + "@babel/core": "^7.0.0-0" + } + }, + "node_modules/@babel/plugin-syntax-import-attributes": { + "version": "7.24.7", + "resolved": "https://registry.npmjs.org/@babel/plugin-syntax-import-attributes/-/plugin-syntax-import-attributes-7.24.7.tgz", + "integrity": "sha512-hbX+lKKeUMGihnK8nvKqmXBInriT3GVjzXKFriV3YC6APGxMbP8RZNFwy91+hocLXq90Mta+HshoB31802bb8A==", + "dev": true, + "license": "MIT", + "dependencies": { + "@babel/helper-plugin-utils": "^7.24.7" + }, + "engines": { + "node": ">=6.9.0" + 
}, + "peerDependencies": { + "@babel/core": "^7.0.0-0" + } + }, + "node_modules/@babel/plugin-syntax-import-meta": { + "version": "7.10.4", + "resolved": "https://registry.npmjs.org/@babel/plugin-syntax-import-meta/-/plugin-syntax-import-meta-7.10.4.tgz", + "integrity": "sha512-Yqfm+XDx0+Prh3VSeEQCPU81yC+JWZ2pDPFSS4ZdpfZhp4MkFMaDC1UqseovEKwSUpnIL7+vK+Clp7bfh0iD7g==", + "dev": true, + "license": "MIT", + "dependencies": { + "@babel/helper-plugin-utils": "^7.10.4" + }, + "peerDependencies": { + "@babel/core": "^7.0.0-0" + } + }, + "node_modules/@babel/plugin-syntax-json-strings": { + "version": "7.8.3", + "resolved": "https://registry.npmjs.org/@babel/plugin-syntax-json-strings/-/plugin-syntax-json-strings-7.8.3.tgz", + "integrity": "sha512-lY6kdGpWHvjoe2vk4WrAapEuBR69EMxZl+RoGRhrFGNYVK8mOPAW8VfbT/ZgrFbXlDNiiaxQnAtgVCZ6jv30EA==", + "dev": true, + "license": "MIT", + "dependencies": { + "@babel/helper-plugin-utils": "^7.8.0" + }, + "peerDependencies": { + "@babel/core": "^7.0.0-0" + } + }, + "node_modules/@babel/plugin-syntax-jsx": { + "version": "7.24.7", + "resolved": "https://registry.npmjs.org/@babel/plugin-syntax-jsx/-/plugin-syntax-jsx-7.24.7.tgz", + "integrity": "sha512-6ddciUPe/mpMnOKv/U+RSd2vvVy+Yw/JfBB0ZHYjEZt9NLHmCUylNYlsbqCCS1Bffjlb0fCwC9Vqz+sBz6PsiQ==", + "dev": true, + "license": "MIT", + "dependencies": { + "@babel/helper-plugin-utils": "^7.24.7" + }, + "engines": { + "node": ">=6.9.0" + }, + "peerDependencies": { + "@babel/core": "^7.0.0-0" + } + }, + "node_modules/@babel/plugin-syntax-logical-assignment-operators": { + "version": "7.10.4", + "resolved": "https://registry.npmjs.org/@babel/plugin-syntax-logical-assignment-operators/-/plugin-syntax-logical-assignment-operators-7.10.4.tgz", + "integrity": "sha512-d8waShlpFDinQ5MtvGU9xDAOzKH47+FFoney2baFIoMr952hKOLp1HR7VszoZvOsV/4+RRszNY7D17ba0te0ig==", + "dev": true, + "license": "MIT", + "dependencies": { + "@babel/helper-plugin-utils": "^7.10.4" + }, + "peerDependencies": { + "@babel/core": "^7.0.0-0" + } + }, + "node_modules/@babel/plugin-syntax-nullish-coalescing-operator": { + "version": "7.8.3", + "resolved": "https://registry.npmjs.org/@babel/plugin-syntax-nullish-coalescing-operator/-/plugin-syntax-nullish-coalescing-operator-7.8.3.tgz", + "integrity": "sha512-aSff4zPII1u2QD7y+F8oDsz19ew4IGEJg9SVW+bqwpwtfFleiQDMdzA/R+UlWDzfnHFCxxleFT0PMIrR36XLNQ==", + "dev": true, + "license": "MIT", + "dependencies": { + "@babel/helper-plugin-utils": "^7.8.0" + }, + "peerDependencies": { + "@babel/core": "^7.0.0-0" + } + }, + "node_modules/@babel/plugin-syntax-numeric-separator": { + "version": "7.10.4", + "resolved": "https://registry.npmjs.org/@babel/plugin-syntax-numeric-separator/-/plugin-syntax-numeric-separator-7.10.4.tgz", + "integrity": "sha512-9H6YdfkcK/uOnY/K7/aA2xpzaAgkQn37yzWUMRK7OaPOqOpGS1+n0H5hxT9AUw9EsSjPW8SVyMJwYRtWs3X3ug==", + "dev": true, + "license": "MIT", + "dependencies": { + "@babel/helper-plugin-utils": "^7.10.4" + }, + "peerDependencies": { + "@babel/core": "^7.0.0-0" + } + }, + "node_modules/@babel/plugin-syntax-object-rest-spread": { + "version": "7.8.3", + "resolved": "https://registry.npmjs.org/@babel/plugin-syntax-object-rest-spread/-/plugin-syntax-object-rest-spread-7.8.3.tgz", + "integrity": "sha512-XoqMijGZb9y3y2XskN+P1wUGiVwWZ5JmoDRwx5+3GmEplNyVM2s2Dg8ILFQm8rWM48orGy5YpI5Bl8U1y7ydlA==", + "dev": true, + "license": "MIT", + "dependencies": { + "@babel/helper-plugin-utils": "^7.8.0" + }, + "peerDependencies": { + "@babel/core": "^7.0.0-0" + } + }, + 
"node_modules/@babel/plugin-syntax-optional-catch-binding": { + "version": "7.8.3", + "resolved": "https://registry.npmjs.org/@babel/plugin-syntax-optional-catch-binding/-/plugin-syntax-optional-catch-binding-7.8.3.tgz", + "integrity": "sha512-6VPD0Pc1lpTqw0aKoeRTMiB+kWhAoT24PA+ksWSBrFtl5SIRVpZlwN3NNPQjehA2E/91FV3RjLWoVTglWcSV3Q==", + "dev": true, + "license": "MIT", + "dependencies": { + "@babel/helper-plugin-utils": "^7.8.0" + }, + "peerDependencies": { + "@babel/core": "^7.0.0-0" + } + }, + "node_modules/@babel/plugin-syntax-optional-chaining": { + "version": "7.8.3", + "resolved": "https://registry.npmjs.org/@babel/plugin-syntax-optional-chaining/-/plugin-syntax-optional-chaining-7.8.3.tgz", + "integrity": "sha512-KoK9ErH1MBlCPxV0VANkXW2/dw4vlbGDrFgz8bmUsBGYkFRcbRwMh6cIJubdPrkxRwuGdtCk0v/wPTKbQgBjkg==", + "dev": true, + "license": "MIT", + "dependencies": { + "@babel/helper-plugin-utils": "^7.8.0" + }, + "peerDependencies": { + "@babel/core": "^7.0.0-0" + } + }, + "node_modules/@babel/plugin-syntax-private-property-in-object": { + "version": "7.14.5", + "resolved": "https://registry.npmjs.org/@babel/plugin-syntax-private-property-in-object/-/plugin-syntax-private-property-in-object-7.14.5.tgz", + "integrity": "sha512-0wVnp9dxJ72ZUJDV27ZfbSj6iHLoytYZmh3rFcxNnvsJF3ktkzLDZPy/mA17HGsaQT3/DQsWYX1f1QGWkCoVUg==", + "dev": true, + "license": "MIT", + "dependencies": { + "@babel/helper-plugin-utils": "^7.14.5" + }, + "engines": { + "node": ">=6.9.0" + }, + "peerDependencies": { + "@babel/core": "^7.0.0-0" + } + }, + "node_modules/@babel/plugin-syntax-top-level-await": { + "version": "7.14.5", + "resolved": "https://registry.npmjs.org/@babel/plugin-syntax-top-level-await/-/plugin-syntax-top-level-await-7.14.5.tgz", + "integrity": "sha512-hx++upLv5U1rgYfwe1xBQUhRmU41NEvpUvrp8jkrSCdvGSnM5/qdRMtylJ6PG5OFkBaHkbTAKTnd3/YyESRHFw==", + "dev": true, + "license": "MIT", + "dependencies": { + "@babel/helper-plugin-utils": "^7.14.5" + }, + "engines": { + "node": ">=6.9.0" + }, + "peerDependencies": { + "@babel/core": "^7.0.0-0" + } + }, + "node_modules/@babel/plugin-syntax-typescript": { + "version": "7.24.7", + "resolved": "https://registry.npmjs.org/@babel/plugin-syntax-typescript/-/plugin-syntax-typescript-7.24.7.tgz", + "integrity": "sha512-c/+fVeJBB0FeKsFvwytYiUD+LBvhHjGSI0g446PRGdSVGZLRNArBUno2PETbAly3tpiNAQR5XaZ+JslxkotsbA==", + "dev": true, + "license": "MIT", + "dependencies": { + "@babel/helper-plugin-utils": "^7.24.7" + }, + "engines": { + "node": ">=6.9.0" + }, + "peerDependencies": { + "@babel/core": "^7.0.0-0" + } + }, + "node_modules/@babel/plugin-syntax-unicode-sets-regex": { + "version": "7.18.6", + "resolved": "https://registry.npmjs.org/@babel/plugin-syntax-unicode-sets-regex/-/plugin-syntax-unicode-sets-regex-7.18.6.tgz", + "integrity": "sha512-727YkEAPwSIQTv5im8QHz3upqp92JTWhidIC81Tdx4VJYIte/VndKf1qKrfnnhPLiPghStWfvC/iFaMCQu7Nqg==", + "dev": true, + "license": "MIT", + "dependencies": { + "@babel/helper-create-regexp-features-plugin": "^7.18.6", + "@babel/helper-plugin-utils": "^7.18.6" + }, + "engines": { + "node": ">=6.9.0" + }, + "peerDependencies": { + "@babel/core": "^7.0.0" + } + }, + "node_modules/@babel/plugin-transform-arrow-functions": { + "version": "7.24.7", + "resolved": "https://registry.npmjs.org/@babel/plugin-transform-arrow-functions/-/plugin-transform-arrow-functions-7.24.7.tgz", + "integrity": "sha512-Dt9LQs6iEY++gXUwY03DNFat5C2NbO48jj+j/bSAz6b3HgPs39qcPiYt77fDObIcFwj3/C2ICX9YMwGflUoSHQ==", + "dev": true, + "license": "MIT", + "dependencies": { + 
"@babel/helper-plugin-utils": "^7.24.7" + }, + "engines": { + "node": ">=6.9.0" + }, + "peerDependencies": { + "@babel/core": "^7.0.0-0" + } + }, + "node_modules/@babel/plugin-transform-async-generator-functions": { + "version": "7.24.7", + "resolved": "https://registry.npmjs.org/@babel/plugin-transform-async-generator-functions/-/plugin-transform-async-generator-functions-7.24.7.tgz", + "integrity": "sha512-o+iF77e3u7ZS4AoAuJvapz9Fm001PuD2V3Lp6OSE4FYQke+cSewYtnek+THqGRWyQloRCyvWL1OkyfNEl9vr/g==", + "dev": true, + "license": "MIT", + "dependencies": { + "@babel/helper-environment-visitor": "^7.24.7", + "@babel/helper-plugin-utils": "^7.24.7", + "@babel/helper-remap-async-to-generator": "^7.24.7", + "@babel/plugin-syntax-async-generators": "^7.8.4" + }, + "engines": { + "node": ">=6.9.0" + }, + "peerDependencies": { + "@babel/core": "^7.0.0-0" + } + }, + "node_modules/@babel/plugin-transform-async-to-generator": { + "version": "7.24.7", + "resolved": "https://registry.npmjs.org/@babel/plugin-transform-async-to-generator/-/plugin-transform-async-to-generator-7.24.7.tgz", + "integrity": "sha512-SQY01PcJfmQ+4Ash7NE+rpbLFbmqA2GPIgqzxfFTL4t1FKRq4zTms/7htKpoCUI9OcFYgzqfmCdH53s6/jn5fA==", + "dev": true, + "license": "MIT", + "dependencies": { + "@babel/helper-module-imports": "^7.24.7", + "@babel/helper-plugin-utils": "^7.24.7", + "@babel/helper-remap-async-to-generator": "^7.24.7" + }, + "engines": { + "node": ">=6.9.0" + }, + "peerDependencies": { + "@babel/core": "^7.0.0-0" + } + }, + "node_modules/@babel/plugin-transform-block-scoped-functions": { + "version": "7.24.7", + "resolved": "https://registry.npmjs.org/@babel/plugin-transform-block-scoped-functions/-/plugin-transform-block-scoped-functions-7.24.7.tgz", + "integrity": "sha512-yO7RAz6EsVQDaBH18IDJcMB1HnrUn2FJ/Jslc/WtPPWcjhpUJXU/rjbwmluzp7v/ZzWcEhTMXELnnsz8djWDwQ==", + "dev": true, + "license": "MIT", + "dependencies": { + "@babel/helper-plugin-utils": "^7.24.7" + }, + "engines": { + "node": ">=6.9.0" + }, + "peerDependencies": { + "@babel/core": "^7.0.0-0" + } + }, + "node_modules/@babel/plugin-transform-block-scoping": { + "version": "7.24.7", + "resolved": "https://registry.npmjs.org/@babel/plugin-transform-block-scoping/-/plugin-transform-block-scoping-7.24.7.tgz", + "integrity": "sha512-Nd5CvgMbWc+oWzBsuaMcbwjJWAcp5qzrbg69SZdHSP7AMY0AbWFqFO0WTFCA1jxhMCwodRwvRec8k0QUbZk7RQ==", + "dev": true, + "license": "MIT", + "dependencies": { + "@babel/helper-plugin-utils": "^7.24.7" + }, + "engines": { + "node": ">=6.9.0" + }, + "peerDependencies": { + "@babel/core": "^7.0.0-0" + } + }, + "node_modules/@babel/plugin-transform-class-properties": { + "version": "7.24.7", + "resolved": "https://registry.npmjs.org/@babel/plugin-transform-class-properties/-/plugin-transform-class-properties-7.24.7.tgz", + "integrity": "sha512-vKbfawVYayKcSeSR5YYzzyXvsDFWU2mD8U5TFeXtbCPLFUqe7GyCgvO6XDHzje862ODrOwy6WCPmKeWHbCFJ4w==", + "dev": true, + "license": "MIT", + "dependencies": { + "@babel/helper-create-class-features-plugin": "^7.24.7", + "@babel/helper-plugin-utils": "^7.24.7" + }, + "engines": { + "node": ">=6.9.0" + }, + "peerDependencies": { + "@babel/core": "^7.0.0-0" + } + }, + "node_modules/@babel/plugin-transform-class-static-block": { + "version": "7.24.7", + "resolved": "https://registry.npmjs.org/@babel/plugin-transform-class-static-block/-/plugin-transform-class-static-block-7.24.7.tgz", + "integrity": "sha512-HMXK3WbBPpZQufbMG4B46A90PkuuhN9vBCb5T8+VAHqvAqvcLi+2cKoukcpmUYkszLhScU3l1iudhrks3DggRQ==", + "dev": true, + "license": "MIT", + 
"dependencies": { + "@babel/helper-create-class-features-plugin": "^7.24.7", + "@babel/helper-plugin-utils": "^7.24.7", + "@babel/plugin-syntax-class-static-block": "^7.14.5" + }, + "engines": { + "node": ">=6.9.0" + }, + "peerDependencies": { + "@babel/core": "^7.12.0" + } + }, + "node_modules/@babel/plugin-transform-classes": { + "version": "7.24.7", + "resolved": "https://registry.npmjs.org/@babel/plugin-transform-classes/-/plugin-transform-classes-7.24.7.tgz", + "integrity": "sha512-CFbbBigp8ln4FU6Bpy6g7sE8B/WmCmzvivzUC6xDAdWVsjYTXijpuuGJmYkAaoWAzcItGKT3IOAbxRItZ5HTjw==", + "dev": true, + "license": "MIT", + "dependencies": { + "@babel/helper-annotate-as-pure": "^7.24.7", + "@babel/helper-compilation-targets": "^7.24.7", + "@babel/helper-environment-visitor": "^7.24.7", + "@babel/helper-function-name": "^7.24.7", + "@babel/helper-plugin-utils": "^7.24.7", + "@babel/helper-replace-supers": "^7.24.7", + "@babel/helper-split-export-declaration": "^7.24.7", + "globals": "^11.1.0" + }, + "engines": { + "node": ">=6.9.0" + }, + "peerDependencies": { + "@babel/core": "^7.0.0-0" + } + }, + "node_modules/@babel/plugin-transform-classes/node_modules/globals": { + "version": "11.12.0", + "resolved": "https://registry.npmjs.org/globals/-/globals-11.12.0.tgz", + "integrity": "sha512-WOBp/EEGUiIsJSp7wcv/y6MO+lV9UoncWqxuFfm8eBwzWNgyfBd6Gz+IeKQ9jCmyhoH99g15M3T+QaVHFjizVA==", + "dev": true, + "license": "MIT", + "engines": { + "node": ">=4" + } + }, + "node_modules/@babel/plugin-transform-computed-properties": { + "version": "7.24.7", + "resolved": "https://registry.npmjs.org/@babel/plugin-transform-computed-properties/-/plugin-transform-computed-properties-7.24.7.tgz", + "integrity": "sha512-25cS7v+707Gu6Ds2oY6tCkUwsJ9YIDbggd9+cu9jzzDgiNq7hR/8dkzxWfKWnTic26vsI3EsCXNd4iEB6e8esQ==", + "dev": true, + "license": "MIT", + "dependencies": { + "@babel/helper-plugin-utils": "^7.24.7", + "@babel/template": "^7.24.7" + }, + "engines": { + "node": ">=6.9.0" + }, + "peerDependencies": { + "@babel/core": "^7.0.0-0" + } + }, + "node_modules/@babel/plugin-transform-destructuring": { + "version": "7.24.7", + "resolved": "https://registry.npmjs.org/@babel/plugin-transform-destructuring/-/plugin-transform-destructuring-7.24.7.tgz", + "integrity": "sha512-19eJO/8kdCQ9zISOf+SEUJM/bAUIsvY3YDnXZTupUCQ8LgrWnsG/gFB9dvXqdXnRXMAM8fvt7b0CBKQHNGy1mw==", + "dev": true, + "license": "MIT", + "dependencies": { + "@babel/helper-plugin-utils": "^7.24.7" + }, + "engines": { + "node": ">=6.9.0" + }, + "peerDependencies": { + "@babel/core": "^7.0.0-0" + } + }, + "node_modules/@babel/plugin-transform-dotall-regex": { + "version": "7.24.7", + "resolved": "https://registry.npmjs.org/@babel/plugin-transform-dotall-regex/-/plugin-transform-dotall-regex-7.24.7.tgz", + "integrity": "sha512-ZOA3W+1RRTSWvyqcMJDLqbchh7U4NRGqwRfFSVbOLS/ePIP4vHB5e8T8eXcuqyN1QkgKyj5wuW0lcS85v4CrSw==", + "dev": true, + "license": "MIT", + "dependencies": { + "@babel/helper-create-regexp-features-plugin": "^7.24.7", + "@babel/helper-plugin-utils": "^7.24.7" + }, + "engines": { + "node": ">=6.9.0" + }, + "peerDependencies": { + "@babel/core": "^7.0.0-0" + } + }, + "node_modules/@babel/plugin-transform-duplicate-keys": { + "version": "7.24.7", + "resolved": "https://registry.npmjs.org/@babel/plugin-transform-duplicate-keys/-/plugin-transform-duplicate-keys-7.24.7.tgz", + "integrity": "sha512-JdYfXyCRihAe46jUIliuL2/s0x0wObgwwiGxw/UbgJBr20gQBThrokO4nYKgWkD7uBaqM7+9x5TU7NkExZJyzw==", + "dev": true, + "license": "MIT", + "dependencies": { + "@babel/helper-plugin-utils": 
"^7.24.7" + }, + "engines": { + "node": ">=6.9.0" + }, + "peerDependencies": { + "@babel/core": "^7.0.0-0" + } + }, + "node_modules/@babel/plugin-transform-dynamic-import": { + "version": "7.24.7", + "resolved": "https://registry.npmjs.org/@babel/plugin-transform-dynamic-import/-/plugin-transform-dynamic-import-7.24.7.tgz", + "integrity": "sha512-sc3X26PhZQDb3JhORmakcbvkeInvxz+A8oda99lj7J60QRuPZvNAk9wQlTBS1ZynelDrDmTU4pw1tyc5d5ZMUg==", + "dev": true, + "license": "MIT", + "dependencies": { + "@babel/helper-plugin-utils": "^7.24.7", + "@babel/plugin-syntax-dynamic-import": "^7.8.3" + }, + "engines": { + "node": ">=6.9.0" + }, + "peerDependencies": { + "@babel/core": "^7.0.0-0" + } + }, + "node_modules/@babel/plugin-transform-exponentiation-operator": { + "version": "7.24.7", + "resolved": "https://registry.npmjs.org/@babel/plugin-transform-exponentiation-operator/-/plugin-transform-exponentiation-operator-7.24.7.tgz", + "integrity": "sha512-Rqe/vSc9OYgDajNIK35u7ot+KeCoetqQYFXM4Epf7M7ez3lWlOjrDjrwMei6caCVhfdw+mIKD4cgdGNy5JQotQ==", + "dev": true, + "license": "MIT", + "dependencies": { + "@babel/helper-builder-binary-assignment-operator-visitor": "^7.24.7", + "@babel/helper-plugin-utils": "^7.24.7" + }, + "engines": { + "node": ">=6.9.0" + }, + "peerDependencies": { + "@babel/core": "^7.0.0-0" + } + }, + "node_modules/@babel/plugin-transform-export-namespace-from": { + "version": "7.24.7", + "resolved": "https://registry.npmjs.org/@babel/plugin-transform-export-namespace-from/-/plugin-transform-export-namespace-from-7.24.7.tgz", + "integrity": "sha512-v0K9uNYsPL3oXZ/7F9NNIbAj2jv1whUEtyA6aujhekLs56R++JDQuzRcP2/z4WX5Vg/c5lE9uWZA0/iUoFhLTA==", + "dev": true, + "license": "MIT", + "dependencies": { + "@babel/helper-plugin-utils": "^7.24.7", + "@babel/plugin-syntax-export-namespace-from": "^7.8.3" + }, + "engines": { + "node": ">=6.9.0" + }, + "peerDependencies": { + "@babel/core": "^7.0.0-0" + } + }, + "node_modules/@babel/plugin-transform-for-of": { + "version": "7.24.7", + "resolved": "https://registry.npmjs.org/@babel/plugin-transform-for-of/-/plugin-transform-for-of-7.24.7.tgz", + "integrity": "sha512-wo9ogrDG1ITTTBsy46oGiN1dS9A7MROBTcYsfS8DtsImMkHk9JXJ3EWQM6X2SUw4x80uGPlwj0o00Uoc6nEE3g==", + "dev": true, + "license": "MIT", + "dependencies": { + "@babel/helper-plugin-utils": "^7.24.7", + "@babel/helper-skip-transparent-expression-wrappers": "^7.24.7" + }, + "engines": { + "node": ">=6.9.0" + }, + "peerDependencies": { + "@babel/core": "^7.0.0-0" + } + }, + "node_modules/@babel/plugin-transform-function-name": { + "version": "7.24.7", + "resolved": "https://registry.npmjs.org/@babel/plugin-transform-function-name/-/plugin-transform-function-name-7.24.7.tgz", + "integrity": "sha512-U9FcnA821YoILngSmYkW6FjyQe2TyZD5pHt4EVIhmcTkrJw/3KqcrRSxuOo5tFZJi7TE19iDyI1u+weTI7bn2w==", + "dev": true, + "license": "MIT", + "dependencies": { + "@babel/helper-compilation-targets": "^7.24.7", + "@babel/helper-function-name": "^7.24.7", + "@babel/helper-plugin-utils": "^7.24.7" + }, + "engines": { + "node": ">=6.9.0" + }, + "peerDependencies": { + "@babel/core": "^7.0.0-0" + } + }, + "node_modules/@babel/plugin-transform-json-strings": { + "version": "7.24.7", + "resolved": "https://registry.npmjs.org/@babel/plugin-transform-json-strings/-/plugin-transform-json-strings-7.24.7.tgz", + "integrity": "sha512-2yFnBGDvRuxAaE/f0vfBKvtnvvqU8tGpMHqMNpTN2oWMKIR3NqFkjaAgGwawhqK/pIN2T3XdjGPdaG0vDhOBGw==", + "dev": true, + "license": "MIT", + "dependencies": { + "@babel/helper-plugin-utils": "^7.24.7", + 
"@babel/plugin-syntax-json-strings": "^7.8.3" + }, + "engines": { + "node": ">=6.9.0" + }, + "peerDependencies": { + "@babel/core": "^7.0.0-0" + } + }, + "node_modules/@babel/plugin-transform-literals": { + "version": "7.24.7", + "resolved": "https://registry.npmjs.org/@babel/plugin-transform-literals/-/plugin-transform-literals-7.24.7.tgz", + "integrity": "sha512-vcwCbb4HDH+hWi8Pqenwnjy+UiklO4Kt1vfspcQYFhJdpthSnW8XvWGyDZWKNVrVbVViI/S7K9PDJZiUmP2fYQ==", + "dev": true, + "license": "MIT", + "dependencies": { + "@babel/helper-plugin-utils": "^7.24.7" + }, + "engines": { + "node": ">=6.9.0" + }, + "peerDependencies": { + "@babel/core": "^7.0.0-0" + } + }, + "node_modules/@babel/plugin-transform-logical-assignment-operators": { + "version": "7.24.7", + "resolved": "https://registry.npmjs.org/@babel/plugin-transform-logical-assignment-operators/-/plugin-transform-logical-assignment-operators-7.24.7.tgz", + "integrity": "sha512-4D2tpwlQ1odXmTEIFWy9ELJcZHqrStlzK/dAOWYyxX3zT0iXQB6banjgeOJQXzEc4S0E0a5A+hahxPaEFYftsw==", + "dev": true, + "license": "MIT", + "dependencies": { + "@babel/helper-plugin-utils": "^7.24.7", + "@babel/plugin-syntax-logical-assignment-operators": "^7.10.4" + }, + "engines": { + "node": ">=6.9.0" + }, + "peerDependencies": { + "@babel/core": "^7.0.0-0" + } + }, + "node_modules/@babel/plugin-transform-member-expression-literals": { + "version": "7.24.7", + "resolved": "https://registry.npmjs.org/@babel/plugin-transform-member-expression-literals/-/plugin-transform-member-expression-literals-7.24.7.tgz", + "integrity": "sha512-T/hRC1uqrzXMKLQ6UCwMT85S3EvqaBXDGf0FaMf4446Qx9vKwlghvee0+uuZcDUCZU5RuNi4781UQ7R308zzBw==", + "dev": true, + "license": "MIT", + "dependencies": { + "@babel/helper-plugin-utils": "^7.24.7" + }, + "engines": { + "node": ">=6.9.0" + }, + "peerDependencies": { + "@babel/core": "^7.0.0-0" + } + }, + "node_modules/@babel/plugin-transform-modules-amd": { + "version": "7.24.7", + "resolved": "https://registry.npmjs.org/@babel/plugin-transform-modules-amd/-/plugin-transform-modules-amd-7.24.7.tgz", + "integrity": "sha512-9+pB1qxV3vs/8Hdmz/CulFB8w2tuu6EB94JZFsjdqxQokwGa9Unap7Bo2gGBGIvPmDIVvQrom7r5m/TCDMURhg==", + "dev": true, + "license": "MIT", + "dependencies": { + "@babel/helper-module-transforms": "^7.24.7", + "@babel/helper-plugin-utils": "^7.24.7" + }, + "engines": { + "node": ">=6.9.0" + }, + "peerDependencies": { + "@babel/core": "^7.0.0-0" + } + }, + "node_modules/@babel/plugin-transform-modules-commonjs": { + "version": "7.24.7", + "resolved": "https://registry.npmjs.org/@babel/plugin-transform-modules-commonjs/-/plugin-transform-modules-commonjs-7.24.7.tgz", + "integrity": "sha512-iFI8GDxtevHJ/Z22J5xQpVqFLlMNstcLXh994xifFwxxGslr2ZXXLWgtBeLctOD63UFDArdvN6Tg8RFw+aEmjQ==", + "dev": true, + "license": "MIT", + "dependencies": { + "@babel/helper-module-transforms": "^7.24.7", + "@babel/helper-plugin-utils": "^7.24.7", + "@babel/helper-simple-access": "^7.24.7" + }, + "engines": { + "node": ">=6.9.0" + }, + "peerDependencies": { + "@babel/core": "^7.0.0-0" + } + }, + "node_modules/@babel/plugin-transform-modules-systemjs": { + "version": "7.24.7", + "resolved": "https://registry.npmjs.org/@babel/plugin-transform-modules-systemjs/-/plugin-transform-modules-systemjs-7.24.7.tgz", + "integrity": "sha512-GYQE0tW7YoaN13qFh3O1NCY4MPkUiAH3fiF7UcV/I3ajmDKEdG3l+UOcbAm4zUE3gnvUU+Eni7XrVKo9eO9auw==", + "dev": true, + "license": "MIT", + "dependencies": { + "@babel/helper-hoist-variables": "^7.24.7", + "@babel/helper-module-transforms": "^7.24.7", + 
"@babel/helper-plugin-utils": "^7.24.7", + "@babel/helper-validator-identifier": "^7.24.7" + }, + "engines": { + "node": ">=6.9.0" + }, + "peerDependencies": { + "@babel/core": "^7.0.0-0" + } + }, + "node_modules/@babel/plugin-transform-modules-umd": { + "version": "7.24.7", + "resolved": "https://registry.npmjs.org/@babel/plugin-transform-modules-umd/-/plugin-transform-modules-umd-7.24.7.tgz", + "integrity": "sha512-3aytQvqJ/h9z4g8AsKPLvD4Zqi2qT+L3j7XoFFu1XBlZWEl2/1kWnhmAbxpLgPrHSY0M6UA02jyTiwUVtiKR6A==", + "dev": true, + "license": "MIT", + "dependencies": { + "@babel/helper-module-transforms": "^7.24.7", + "@babel/helper-plugin-utils": "^7.24.7" + }, + "engines": { + "node": ">=6.9.0" + }, + "peerDependencies": { + "@babel/core": "^7.0.0-0" + } + }, + "node_modules/@babel/plugin-transform-named-capturing-groups-regex": { + "version": "7.24.7", + "resolved": "https://registry.npmjs.org/@babel/plugin-transform-named-capturing-groups-regex/-/plugin-transform-named-capturing-groups-regex-7.24.7.tgz", + "integrity": "sha512-/jr7h/EWeJtk1U/uz2jlsCioHkZk1JJZVcc8oQsJ1dUlaJD83f4/6Zeh2aHt9BIFokHIsSeDfhUmju0+1GPd6g==", + "dev": true, + "license": "MIT", + "dependencies": { + "@babel/helper-create-regexp-features-plugin": "^7.24.7", + "@babel/helper-plugin-utils": "^7.24.7" + }, + "engines": { + "node": ">=6.9.0" + }, + "peerDependencies": { + "@babel/core": "^7.0.0" + } + }, + "node_modules/@babel/plugin-transform-new-target": { + "version": "7.24.7", + "resolved": "https://registry.npmjs.org/@babel/plugin-transform-new-target/-/plugin-transform-new-target-7.24.7.tgz", + "integrity": "sha512-RNKwfRIXg4Ls/8mMTza5oPF5RkOW8Wy/WgMAp1/F1yZ8mMbtwXW+HDoJiOsagWrAhI5f57Vncrmr9XeT4CVapA==", + "dev": true, + "license": "MIT", + "dependencies": { + "@babel/helper-plugin-utils": "^7.24.7" + }, + "engines": { + "node": ">=6.9.0" + }, + "peerDependencies": { + "@babel/core": "^7.0.0-0" + } + }, + "node_modules/@babel/plugin-transform-nullish-coalescing-operator": { + "version": "7.24.7", + "resolved": "https://registry.npmjs.org/@babel/plugin-transform-nullish-coalescing-operator/-/plugin-transform-nullish-coalescing-operator-7.24.7.tgz", + "integrity": "sha512-Ts7xQVk1OEocqzm8rHMXHlxvsfZ0cEF2yomUqpKENHWMF4zKk175Y4q8H5knJes6PgYad50uuRmt3UJuhBw8pQ==", + "dev": true, + "license": "MIT", + "dependencies": { + "@babel/helper-plugin-utils": "^7.24.7", + "@babel/plugin-syntax-nullish-coalescing-operator": "^7.8.3" + }, + "engines": { + "node": ">=6.9.0" + }, + "peerDependencies": { + "@babel/core": "^7.0.0-0" + } + }, + "node_modules/@babel/plugin-transform-numeric-separator": { + "version": "7.24.7", + "resolved": "https://registry.npmjs.org/@babel/plugin-transform-numeric-separator/-/plugin-transform-numeric-separator-7.24.7.tgz", + "integrity": "sha512-e6q1TiVUzvH9KRvicuxdBTUj4AdKSRwzIyFFnfnezpCfP2/7Qmbb8qbU2j7GODbl4JMkblitCQjKYUaX/qkkwA==", + "dev": true, + "license": "MIT", + "dependencies": { + "@babel/helper-plugin-utils": "^7.24.7", + "@babel/plugin-syntax-numeric-separator": "^7.10.4" + }, + "engines": { + "node": ">=6.9.0" + }, + "peerDependencies": { + "@babel/core": "^7.0.0-0" + } + }, + "node_modules/@babel/plugin-transform-object-rest-spread": { + "version": "7.24.7", + "resolved": "https://registry.npmjs.org/@babel/plugin-transform-object-rest-spread/-/plugin-transform-object-rest-spread-7.24.7.tgz", + "integrity": "sha512-4QrHAr0aXQCEFni2q4DqKLD31n2DL+RxcwnNjDFkSG0eNQ/xCavnRkfCUjsyqGC2OviNJvZOF/mQqZBw7i2C5Q==", + "dev": true, + "license": "MIT", + "dependencies": { + 
"@babel/helper-compilation-targets": "^7.24.7", + "@babel/helper-plugin-utils": "^7.24.7", + "@babel/plugin-syntax-object-rest-spread": "^7.8.3", + "@babel/plugin-transform-parameters": "^7.24.7" + }, + "engines": { + "node": ">=6.9.0" + }, + "peerDependencies": { + "@babel/core": "^7.0.0-0" + } + }, + "node_modules/@babel/plugin-transform-object-super": { + "version": "7.24.7", + "resolved": "https://registry.npmjs.org/@babel/plugin-transform-object-super/-/plugin-transform-object-super-7.24.7.tgz", + "integrity": "sha512-A/vVLwN6lBrMFmMDmPPz0jnE6ZGx7Jq7d6sT/Ev4H65RER6pZ+kczlf1DthF5N0qaPHBsI7UXiE8Zy66nmAovg==", + "dev": true, + "license": "MIT", + "dependencies": { + "@babel/helper-plugin-utils": "^7.24.7", + "@babel/helper-replace-supers": "^7.24.7" + }, + "engines": { + "node": ">=6.9.0" + }, + "peerDependencies": { + "@babel/core": "^7.0.0-0" + } + }, + "node_modules/@babel/plugin-transform-optional-catch-binding": { + "version": "7.24.7", + "resolved": "https://registry.npmjs.org/@babel/plugin-transform-optional-catch-binding/-/plugin-transform-optional-catch-binding-7.24.7.tgz", + "integrity": "sha512-uLEndKqP5BfBbC/5jTwPxLh9kqPWWgzN/f8w6UwAIirAEqiIVJWWY312X72Eub09g5KF9+Zn7+hT7sDxmhRuKA==", + "dev": true, + "license": "MIT", + "dependencies": { + "@babel/helper-plugin-utils": "^7.24.7", + "@babel/plugin-syntax-optional-catch-binding": "^7.8.3" + }, + "engines": { + "node": ">=6.9.0" + }, + "peerDependencies": { + "@babel/core": "^7.0.0-0" + } + }, + "node_modules/@babel/plugin-transform-optional-chaining": { + "version": "7.24.7", + "resolved": "https://registry.npmjs.org/@babel/plugin-transform-optional-chaining/-/plugin-transform-optional-chaining-7.24.7.tgz", + "integrity": "sha512-tK+0N9yd4j+x/4hxF3F0e0fu/VdcxU18y5SevtyM/PCFlQvXbR0Zmlo2eBrKtVipGNFzpq56o8WsIIKcJFUCRQ==", + "dev": true, + "license": "MIT", + "dependencies": { + "@babel/helper-plugin-utils": "^7.24.7", + "@babel/helper-skip-transparent-expression-wrappers": "^7.24.7", + "@babel/plugin-syntax-optional-chaining": "^7.8.3" + }, + "engines": { + "node": ">=6.9.0" + }, + "peerDependencies": { + "@babel/core": "^7.0.0-0" + } + }, + "node_modules/@babel/plugin-transform-parameters": { + "version": "7.24.7", + "resolved": "https://registry.npmjs.org/@babel/plugin-transform-parameters/-/plugin-transform-parameters-7.24.7.tgz", + "integrity": "sha512-yGWW5Rr+sQOhK0Ot8hjDJuxU3XLRQGflvT4lhlSY0DFvdb3TwKaY26CJzHtYllU0vT9j58hc37ndFPsqT1SrzA==", + "dev": true, + "license": "MIT", + "dependencies": { + "@babel/helper-plugin-utils": "^7.24.7" + }, + "engines": { + "node": ">=6.9.0" + }, + "peerDependencies": { + "@babel/core": "^7.0.0-0" + } + }, + "node_modules/@babel/plugin-transform-private-methods": { + "version": "7.24.7", + "resolved": "https://registry.npmjs.org/@babel/plugin-transform-private-methods/-/plugin-transform-private-methods-7.24.7.tgz", + "integrity": "sha512-COTCOkG2hn4JKGEKBADkA8WNb35TGkkRbI5iT845dB+NyqgO8Hn+ajPbSnIQznneJTa3d30scb6iz/DhH8GsJQ==", + "dev": true, + "license": "MIT", + "dependencies": { + "@babel/helper-create-class-features-plugin": "^7.24.7", + "@babel/helper-plugin-utils": "^7.24.7" + }, + "engines": { + "node": ">=6.9.0" + }, + "peerDependencies": { + "@babel/core": "^7.0.0-0" + } + }, + "node_modules/@babel/plugin-transform-private-property-in-object": { + "version": "7.24.7", + "resolved": "https://registry.npmjs.org/@babel/plugin-transform-private-property-in-object/-/plugin-transform-private-property-in-object-7.24.7.tgz", + "integrity": 
"sha512-9z76mxwnwFxMyxZWEgdgECQglF2Q7cFLm0kMf8pGwt+GSJsY0cONKj/UuO4bOH0w/uAel3ekS4ra5CEAyJRmDA==", + "dev": true, + "license": "MIT", + "dependencies": { + "@babel/helper-annotate-as-pure": "^7.24.7", + "@babel/helper-create-class-features-plugin": "^7.24.7", + "@babel/helper-plugin-utils": "^7.24.7", + "@babel/plugin-syntax-private-property-in-object": "^7.14.5" + }, + "engines": { + "node": ">=6.9.0" + }, + "peerDependencies": { + "@babel/core": "^7.0.0-0" + } + }, + "node_modules/@babel/plugin-transform-property-literals": { + "version": "7.24.7", + "resolved": "https://registry.npmjs.org/@babel/plugin-transform-property-literals/-/plugin-transform-property-literals-7.24.7.tgz", + "integrity": "sha512-EMi4MLQSHfd2nrCqQEWxFdha2gBCqU4ZcCng4WBGZ5CJL4bBRW0ptdqqDdeirGZcpALazVVNJqRmsO8/+oNCBA==", "dev": true, "license": "MIT", "dependencies": { - "@babel/helper-plugin-utils": "^7.8.0" + "@babel/helper-plugin-utils": "^7.24.7" + }, + "engines": { + "node": ">=6.9.0" }, "peerDependencies": { "@babel/core": "^7.0.0-0" } }, - "node_modules/@babel/plugin-syntax-bigint": { - "version": "7.8.3", - "resolved": "https://registry.npmjs.org/@babel/plugin-syntax-bigint/-/plugin-syntax-bigint-7.8.3.tgz", - "integrity": "sha512-wnTnFlG+YxQm3vDxpGE57Pj0srRU4sHE/mDkt1qv2YJJSeUAec2ma4WLUnUPeKjyrfntVwe/N6dCXpU+zL3Npg==", + "node_modules/@babel/plugin-transform-regenerator": { + "version": "7.24.7", + "resolved": "https://registry.npmjs.org/@babel/plugin-transform-regenerator/-/plugin-transform-regenerator-7.24.7.tgz", + "integrity": "sha512-lq3fvXPdimDrlg6LWBoqj+r/DEWgONuwjuOuQCSYgRroXDH/IdM1C0IZf59fL5cHLpjEH/O6opIRBbqv7ELnuA==", "dev": true, "license": "MIT", "dependencies": { - "@babel/helper-plugin-utils": "^7.8.0" + "@babel/helper-plugin-utils": "^7.24.7", + "regenerator-transform": "^0.15.2" + }, + "engines": { + "node": ">=6.9.0" }, "peerDependencies": { "@babel/core": "^7.0.0-0" } }, - "node_modules/@babel/plugin-syntax-class-properties": { - "version": "7.12.13", - "resolved": "https://registry.npmjs.org/@babel/plugin-syntax-class-properties/-/plugin-syntax-class-properties-7.12.13.tgz", - "integrity": "sha512-fm4idjKla0YahUNgFNLCB0qySdsoPiZP3iQE3rky0mBUtMZ23yDJ9SJdg6dXTSDnulOVqiF3Hgr9nbXvXTQZYA==", + "node_modules/@babel/plugin-transform-reserved-words": { + "version": "7.24.7", + "resolved": "https://registry.npmjs.org/@babel/plugin-transform-reserved-words/-/plugin-transform-reserved-words-7.24.7.tgz", + "integrity": "sha512-0DUq0pHcPKbjFZCfTss/pGkYMfy3vFWydkUBd9r0GHpIyfs2eCDENvqadMycRS9wZCXR41wucAfJHJmwA0UmoQ==", "dev": true, "license": "MIT", "dependencies": { - "@babel/helper-plugin-utils": "^7.12.13" + "@babel/helper-plugin-utils": "^7.24.7" + }, + "engines": { + "node": ">=6.9.0" }, "peerDependencies": { "@babel/core": "^7.0.0-0" } }, - "node_modules/@babel/plugin-syntax-import-meta": { - "version": "7.10.4", - "resolved": "https://registry.npmjs.org/@babel/plugin-syntax-import-meta/-/plugin-syntax-import-meta-7.10.4.tgz", - "integrity": "sha512-Yqfm+XDx0+Prh3VSeEQCPU81yC+JWZ2pDPFSS4ZdpfZhp4MkFMaDC1UqseovEKwSUpnIL7+vK+Clp7bfh0iD7g==", + "node_modules/@babel/plugin-transform-shorthand-properties": { + "version": "7.24.7", + "resolved": "https://registry.npmjs.org/@babel/plugin-transform-shorthand-properties/-/plugin-transform-shorthand-properties-7.24.7.tgz", + "integrity": "sha512-KsDsevZMDsigzbA09+vacnLpmPH4aWjcZjXdyFKGzpplxhbeB4wYtury3vglQkg6KM/xEPKt73eCjPPf1PgXBA==", "dev": true, "license": "MIT", "dependencies": { - "@babel/helper-plugin-utils": "^7.10.4" + "@babel/helper-plugin-utils": 
"^7.24.7" + }, + "engines": { + "node": ">=6.9.0" }, "peerDependencies": { "@babel/core": "^7.0.0-0" } }, - "node_modules/@babel/plugin-syntax-json-strings": { - "version": "7.8.3", - "resolved": "https://registry.npmjs.org/@babel/plugin-syntax-json-strings/-/plugin-syntax-json-strings-7.8.3.tgz", - "integrity": "sha512-lY6kdGpWHvjoe2vk4WrAapEuBR69EMxZl+RoGRhrFGNYVK8mOPAW8VfbT/ZgrFbXlDNiiaxQnAtgVCZ6jv30EA==", + "node_modules/@babel/plugin-transform-spread": { + "version": "7.24.7", + "resolved": "https://registry.npmjs.org/@babel/plugin-transform-spread/-/plugin-transform-spread-7.24.7.tgz", + "integrity": "sha512-x96oO0I09dgMDxJaANcRyD4ellXFLLiWhuwDxKZX5g2rWP1bTPkBSwCYv96VDXVT1bD9aPj8tppr5ITIh8hBng==", "dev": true, "license": "MIT", "dependencies": { - "@babel/helper-plugin-utils": "^7.8.0" + "@babel/helper-plugin-utils": "^7.24.7", + "@babel/helper-skip-transparent-expression-wrappers": "^7.24.7" + }, + "engines": { + "node": ">=6.9.0" }, "peerDependencies": { "@babel/core": "^7.0.0-0" } }, - "node_modules/@babel/plugin-syntax-jsx": { + "node_modules/@babel/plugin-transform-sticky-regex": { "version": "7.24.7", - "resolved": "https://registry.npmjs.org/@babel/plugin-syntax-jsx/-/plugin-syntax-jsx-7.24.7.tgz", - "integrity": "sha512-6ddciUPe/mpMnOKv/U+RSd2vvVy+Yw/JfBB0ZHYjEZt9NLHmCUylNYlsbqCCS1Bffjlb0fCwC9Vqz+sBz6PsiQ==", + "resolved": "https://registry.npmjs.org/@babel/plugin-transform-sticky-regex/-/plugin-transform-sticky-regex-7.24.7.tgz", + "integrity": "sha512-kHPSIJc9v24zEml5geKg9Mjx5ULpfncj0wRpYtxbvKyTtHCYDkVE3aHQ03FrpEo4gEe2vrJJS1Y9CJTaThA52g==", "dev": true, "license": "MIT", "dependencies": { @@ -486,92 +1698,193 @@ "@babel/core": "^7.0.0-0" } }, - "node_modules/@babel/plugin-syntax-logical-assignment-operators": { - "version": "7.10.4", - "resolved": "https://registry.npmjs.org/@babel/plugin-syntax-logical-assignment-operators/-/plugin-syntax-logical-assignment-operators-7.10.4.tgz", - "integrity": "sha512-d8waShlpFDinQ5MtvGU9xDAOzKH47+FFoney2baFIoMr952hKOLp1HR7VszoZvOsV/4+RRszNY7D17ba0te0ig==", + "node_modules/@babel/plugin-transform-template-literals": { + "version": "7.24.7", + "resolved": "https://registry.npmjs.org/@babel/plugin-transform-template-literals/-/plugin-transform-template-literals-7.24.7.tgz", + "integrity": "sha512-AfDTQmClklHCOLxtGoP7HkeMw56k1/bTQjwsfhL6pppo/M4TOBSq+jjBUBLmV/4oeFg4GWMavIl44ZeCtmmZTw==", "dev": true, "license": "MIT", "dependencies": { - "@babel/helper-plugin-utils": "^7.10.4" + "@babel/helper-plugin-utils": "^7.24.7" + }, + "engines": { + "node": ">=6.9.0" }, "peerDependencies": { "@babel/core": "^7.0.0-0" } }, - "node_modules/@babel/plugin-syntax-nullish-coalescing-operator": { - "version": "7.8.3", - "resolved": "https://registry.npmjs.org/@babel/plugin-syntax-nullish-coalescing-operator/-/plugin-syntax-nullish-coalescing-operator-7.8.3.tgz", - "integrity": "sha512-aSff4zPII1u2QD7y+F8oDsz19ew4IGEJg9SVW+bqwpwtfFleiQDMdzA/R+UlWDzfnHFCxxleFT0PMIrR36XLNQ==", + "node_modules/@babel/plugin-transform-typeof-symbol": { + "version": "7.24.7", + "resolved": "https://registry.npmjs.org/@babel/plugin-transform-typeof-symbol/-/plugin-transform-typeof-symbol-7.24.7.tgz", + "integrity": "sha512-VtR8hDy7YLB7+Pet9IarXjg/zgCMSF+1mNS/EQEiEaUPoFXCVsHG64SIxcaaI2zJgRiv+YmgaQESUfWAdbjzgg==", "dev": true, "license": "MIT", "dependencies": { - "@babel/helper-plugin-utils": "^7.8.0" + "@babel/helper-plugin-utils": "^7.24.7" + }, + "engines": { + "node": ">=6.9.0" }, "peerDependencies": { "@babel/core": "^7.0.0-0" } }, - 
"node_modules/@babel/plugin-syntax-numeric-separator": { - "version": "7.10.4", - "resolved": "https://registry.npmjs.org/@babel/plugin-syntax-numeric-separator/-/plugin-syntax-numeric-separator-7.10.4.tgz", - "integrity": "sha512-9H6YdfkcK/uOnY/K7/aA2xpzaAgkQn37yzWUMRK7OaPOqOpGS1+n0H5hxT9AUw9EsSjPW8SVyMJwYRtWs3X3ug==", + "node_modules/@babel/plugin-transform-unicode-escapes": { + "version": "7.24.7", + "resolved": "https://registry.npmjs.org/@babel/plugin-transform-unicode-escapes/-/plugin-transform-unicode-escapes-7.24.7.tgz", + "integrity": "sha512-U3ap1gm5+4edc2Q/P+9VrBNhGkfnf+8ZqppY71Bo/pzZmXhhLdqgaUl6cuB07O1+AQJtCLfaOmswiNbSQ9ivhw==", "dev": true, "license": "MIT", "dependencies": { - "@babel/helper-plugin-utils": "^7.10.4" + "@babel/helper-plugin-utils": "^7.24.7" + }, + "engines": { + "node": ">=6.9.0" }, "peerDependencies": { "@babel/core": "^7.0.0-0" } }, - "node_modules/@babel/plugin-syntax-object-rest-spread": { - "version": "7.8.3", - "resolved": "https://registry.npmjs.org/@babel/plugin-syntax-object-rest-spread/-/plugin-syntax-object-rest-spread-7.8.3.tgz", - "integrity": "sha512-XoqMijGZb9y3y2XskN+P1wUGiVwWZ5JmoDRwx5+3GmEplNyVM2s2Dg8ILFQm8rWM48orGy5YpI5Bl8U1y7ydlA==", + "node_modules/@babel/plugin-transform-unicode-property-regex": { + "version": "7.24.7", + "resolved": "https://registry.npmjs.org/@babel/plugin-transform-unicode-property-regex/-/plugin-transform-unicode-property-regex-7.24.7.tgz", + "integrity": "sha512-uH2O4OV5M9FZYQrwc7NdVmMxQJOCCzFeYudlZSzUAHRFeOujQefa92E74TQDVskNHCzOXoigEuoyzHDhaEaK5w==", "dev": true, "license": "MIT", "dependencies": { - "@babel/helper-plugin-utils": "^7.8.0" + "@babel/helper-create-regexp-features-plugin": "^7.24.7", + "@babel/helper-plugin-utils": "^7.24.7" + }, + "engines": { + "node": ">=6.9.0" }, "peerDependencies": { "@babel/core": "^7.0.0-0" } }, - "node_modules/@babel/plugin-syntax-optional-catch-binding": { - "version": "7.8.3", - "resolved": "https://registry.npmjs.org/@babel/plugin-syntax-optional-catch-binding/-/plugin-syntax-optional-catch-binding-7.8.3.tgz", - "integrity": "sha512-6VPD0Pc1lpTqw0aKoeRTMiB+kWhAoT24PA+ksWSBrFtl5SIRVpZlwN3NNPQjehA2E/91FV3RjLWoVTglWcSV3Q==", + "node_modules/@babel/plugin-transform-unicode-regex": { + "version": "7.24.7", + "resolved": "https://registry.npmjs.org/@babel/plugin-transform-unicode-regex/-/plugin-transform-unicode-regex-7.24.7.tgz", + "integrity": "sha512-hlQ96MBZSAXUq7ltkjtu3FJCCSMx/j629ns3hA3pXnBXjanNP0LHi+JpPeA81zaWgVK1VGH95Xuy7u0RyQ8kMg==", "dev": true, "license": "MIT", "dependencies": { - "@babel/helper-plugin-utils": "^7.8.0" + "@babel/helper-create-regexp-features-plugin": "^7.24.7", + "@babel/helper-plugin-utils": "^7.24.7" + }, + "engines": { + "node": ">=6.9.0" }, "peerDependencies": { "@babel/core": "^7.0.0-0" } }, - "node_modules/@babel/plugin-syntax-optional-chaining": { - "version": "7.8.3", - "resolved": "https://registry.npmjs.org/@babel/plugin-syntax-optional-chaining/-/plugin-syntax-optional-chaining-7.8.3.tgz", - "integrity": "sha512-KoK9ErH1MBlCPxV0VANkXW2/dw4vlbGDrFgz8bmUsBGYkFRcbRwMh6cIJubdPrkxRwuGdtCk0v/wPTKbQgBjkg==", + "node_modules/@babel/plugin-transform-unicode-sets-regex": { + "version": "7.24.7", + "resolved": "https://registry.npmjs.org/@babel/plugin-transform-unicode-sets-regex/-/plugin-transform-unicode-sets-regex-7.24.7.tgz", + "integrity": "sha512-2G8aAvF4wy1w/AGZkemprdGMRg5o6zPNhbHVImRz3lss55TYCBd6xStN19rt8XJHq20sqV0JbyWjOWwQRwV/wg==", "dev": true, "license": "MIT", "dependencies": { - "@babel/helper-plugin-utils": "^7.8.0" + 
"@babel/helper-create-regexp-features-plugin": "^7.24.7", + "@babel/helper-plugin-utils": "^7.24.7" + }, + "engines": { + "node": ">=6.9.0" }, "peerDependencies": { - "@babel/core": "^7.0.0-0" + "@babel/core": "^7.0.0" } }, - "node_modules/@babel/plugin-syntax-top-level-await": { - "version": "7.14.5", - "resolved": "https://registry.npmjs.org/@babel/plugin-syntax-top-level-await/-/plugin-syntax-top-level-await-7.14.5.tgz", - "integrity": "sha512-hx++upLv5U1rgYfwe1xBQUhRmU41NEvpUvrp8jkrSCdvGSnM5/qdRMtylJ6PG5OFkBaHkbTAKTnd3/YyESRHFw==", + "node_modules/@babel/preset-env": { + "version": "7.24.7", + "resolved": "https://registry.npmjs.org/@babel/preset-env/-/preset-env-7.24.7.tgz", + "integrity": "sha512-1YZNsc+y6cTvWlDHidMBsQZrZfEFjRIo/BZCT906PMdzOyXtSLTgqGdrpcuTDCXyd11Am5uQULtDIcCfnTc8fQ==", "dev": true, "license": "MIT", "dependencies": { - "@babel/helper-plugin-utils": "^7.14.5" + "@babel/compat-data": "^7.24.7", + "@babel/helper-compilation-targets": "^7.24.7", + "@babel/helper-plugin-utils": "^7.24.7", + "@babel/helper-validator-option": "^7.24.7", + "@babel/plugin-bugfix-firefox-class-in-computed-class-key": "^7.24.7", + "@babel/plugin-bugfix-safari-id-destructuring-collision-in-function-expression": "^7.24.7", + "@babel/plugin-bugfix-v8-spread-parameters-in-optional-chaining": "^7.24.7", + "@babel/plugin-bugfix-v8-static-class-fields-redefine-readonly": "^7.24.7", + "@babel/plugin-proposal-private-property-in-object": "7.21.0-placeholder-for-preset-env.2", + "@babel/plugin-syntax-async-generators": "^7.8.4", + "@babel/plugin-syntax-class-properties": "^7.12.13", + "@babel/plugin-syntax-class-static-block": "^7.14.5", + "@babel/plugin-syntax-dynamic-import": "^7.8.3", + "@babel/plugin-syntax-export-namespace-from": "^7.8.3", + "@babel/plugin-syntax-import-assertions": "^7.24.7", + "@babel/plugin-syntax-import-attributes": "^7.24.7", + "@babel/plugin-syntax-import-meta": "^7.10.4", + "@babel/plugin-syntax-json-strings": "^7.8.3", + "@babel/plugin-syntax-logical-assignment-operators": "^7.10.4", + "@babel/plugin-syntax-nullish-coalescing-operator": "^7.8.3", + "@babel/plugin-syntax-numeric-separator": "^7.10.4", + "@babel/plugin-syntax-object-rest-spread": "^7.8.3", + "@babel/plugin-syntax-optional-catch-binding": "^7.8.3", + "@babel/plugin-syntax-optional-chaining": "^7.8.3", + "@babel/plugin-syntax-private-property-in-object": "^7.14.5", + "@babel/plugin-syntax-top-level-await": "^7.14.5", + "@babel/plugin-syntax-unicode-sets-regex": "^7.18.6", + "@babel/plugin-transform-arrow-functions": "^7.24.7", + "@babel/plugin-transform-async-generator-functions": "^7.24.7", + "@babel/plugin-transform-async-to-generator": "^7.24.7", + "@babel/plugin-transform-block-scoped-functions": "^7.24.7", + "@babel/plugin-transform-block-scoping": "^7.24.7", + "@babel/plugin-transform-class-properties": "^7.24.7", + "@babel/plugin-transform-class-static-block": "^7.24.7", + "@babel/plugin-transform-classes": "^7.24.7", + "@babel/plugin-transform-computed-properties": "^7.24.7", + "@babel/plugin-transform-destructuring": "^7.24.7", + "@babel/plugin-transform-dotall-regex": "^7.24.7", + "@babel/plugin-transform-duplicate-keys": "^7.24.7", + "@babel/plugin-transform-dynamic-import": "^7.24.7", + "@babel/plugin-transform-exponentiation-operator": "^7.24.7", + "@babel/plugin-transform-export-namespace-from": "^7.24.7", + "@babel/plugin-transform-for-of": "^7.24.7", + "@babel/plugin-transform-function-name": "^7.24.7", + "@babel/plugin-transform-json-strings": "^7.24.7", + "@babel/plugin-transform-literals": 
"^7.24.7", + "@babel/plugin-transform-logical-assignment-operators": "^7.24.7", + "@babel/plugin-transform-member-expression-literals": "^7.24.7", + "@babel/plugin-transform-modules-amd": "^7.24.7", + "@babel/plugin-transform-modules-commonjs": "^7.24.7", + "@babel/plugin-transform-modules-systemjs": "^7.24.7", + "@babel/plugin-transform-modules-umd": "^7.24.7", + "@babel/plugin-transform-named-capturing-groups-regex": "^7.24.7", + "@babel/plugin-transform-new-target": "^7.24.7", + "@babel/plugin-transform-nullish-coalescing-operator": "^7.24.7", + "@babel/plugin-transform-numeric-separator": "^7.24.7", + "@babel/plugin-transform-object-rest-spread": "^7.24.7", + "@babel/plugin-transform-object-super": "^7.24.7", + "@babel/plugin-transform-optional-catch-binding": "^7.24.7", + "@babel/plugin-transform-optional-chaining": "^7.24.7", + "@babel/plugin-transform-parameters": "^7.24.7", + "@babel/plugin-transform-private-methods": "^7.24.7", + "@babel/plugin-transform-private-property-in-object": "^7.24.7", + "@babel/plugin-transform-property-literals": "^7.24.7", + "@babel/plugin-transform-regenerator": "^7.24.7", + "@babel/plugin-transform-reserved-words": "^7.24.7", + "@babel/plugin-transform-shorthand-properties": "^7.24.7", + "@babel/plugin-transform-spread": "^7.24.7", + "@babel/plugin-transform-sticky-regex": "^7.24.7", + "@babel/plugin-transform-template-literals": "^7.24.7", + "@babel/plugin-transform-typeof-symbol": "^7.24.7", + "@babel/plugin-transform-unicode-escapes": "^7.24.7", + "@babel/plugin-transform-unicode-property-regex": "^7.24.7", + "@babel/plugin-transform-unicode-regex": "^7.24.7", + "@babel/plugin-transform-unicode-sets-regex": "^7.24.7", + "@babel/preset-modules": "0.1.6-no-external-plugins", + "babel-plugin-polyfill-corejs2": "^0.4.10", + "babel-plugin-polyfill-corejs3": "^0.10.4", + "babel-plugin-polyfill-regenerator": "^0.6.1", + "core-js-compat": "^3.31.0", + "semver": "^6.3.1" }, "engines": { "node": ">=6.9.0" @@ -580,20 +1893,39 @@ "@babel/core": "^7.0.0-0" } }, - "node_modules/@babel/plugin-syntax-typescript": { + "node_modules/@babel/preset-modules": { + "version": "0.1.6-no-external-plugins", + "resolved": "https://registry.npmjs.org/@babel/preset-modules/-/preset-modules-0.1.6-no-external-plugins.tgz", + "integrity": "sha512-HrcgcIESLm9aIR842yhJ5RWan/gebQUJ6E/E5+rf0y9o6oj7w0Br+sWuL6kEQ/o/AdfvR1Je9jG18/gnpwjEyA==", + "dev": true, + "license": "MIT", + "dependencies": { + "@babel/helper-plugin-utils": "^7.0.0", + "@babel/types": "^7.4.4", + "esutils": "^2.0.2" + }, + "peerDependencies": { + "@babel/core": "^7.0.0-0 || ^8.0.0-0 <8.0.0" + } + }, + "node_modules/@babel/regjsgen": { + "version": "0.8.0", + "resolved": "https://registry.npmjs.org/@babel/regjsgen/-/regjsgen-0.8.0.tgz", + "integrity": "sha512-x/rqGMdzj+fWZvCOYForTghzbtqPDZ5gPwaoNGHdgDfF2QA/XZbCBp4Moo5scrkAMPhB7z26XM/AaHuIJdgauA==", + "dev": true, + "license": "MIT" + }, + "node_modules/@babel/runtime": { "version": "7.24.7", - "resolved": "https://registry.npmjs.org/@babel/plugin-syntax-typescript/-/plugin-syntax-typescript-7.24.7.tgz", - "integrity": "sha512-c/+fVeJBB0FeKsFvwytYiUD+LBvhHjGSI0g446PRGdSVGZLRNArBUno2PETbAly3tpiNAQR5XaZ+JslxkotsbA==", + "resolved": "https://registry.npmjs.org/@babel/runtime/-/runtime-7.24.7.tgz", + "integrity": "sha512-UwgBRMjJP+xv857DCngvqXI3Iq6J4v0wXmwc6sapg+zyhbwmQX67LUEFrkK5tbyJ30jGuG3ZvWpBiB9LCy1kWw==", "dev": true, "license": "MIT", "dependencies": { - "@babel/helper-plugin-utils": "^7.24.7" + "regenerator-runtime": "^0.14.0" }, "engines": { "node": ">=6.9.0" - }, 
- "peerDependencies": { - "@babel/core": "^7.0.0-0" } }, "node_modules/@babel/template": { @@ -1613,6 +2945,48 @@ "node": "^14.15.0 || ^16.10.0 || >=18.0.0" } }, + "node_modules/babel-plugin-polyfill-corejs2": { + "version": "0.4.11", + "resolved": "https://registry.npmjs.org/babel-plugin-polyfill-corejs2/-/babel-plugin-polyfill-corejs2-0.4.11.tgz", + "integrity": "sha512-sMEJ27L0gRHShOh5G54uAAPaiCOygY/5ratXuiyb2G46FmlSpc9eFCzYVyDiPxfNbwzA7mYahmjQc5q+CZQ09Q==", + "dev": true, + "license": "MIT", + "dependencies": { + "@babel/compat-data": "^7.22.6", + "@babel/helper-define-polyfill-provider": "^0.6.2", + "semver": "^6.3.1" + }, + "peerDependencies": { + "@babel/core": "^7.4.0 || ^8.0.0-0 <8.0.0" + } + }, + "node_modules/babel-plugin-polyfill-corejs3": { + "version": "0.10.4", + "resolved": "https://registry.npmjs.org/babel-plugin-polyfill-corejs3/-/babel-plugin-polyfill-corejs3-0.10.4.tgz", + "integrity": "sha512-25J6I8NGfa5YkCDogHRID3fVCadIR8/pGl1/spvCkzb6lVn6SR3ojpx9nOn9iEBcUsjY24AmdKm5khcfKdylcg==", + "dev": true, + "license": "MIT", + "dependencies": { + "@babel/helper-define-polyfill-provider": "^0.6.1", + "core-js-compat": "^3.36.1" + }, + "peerDependencies": { + "@babel/core": "^7.4.0 || ^8.0.0-0 <8.0.0" + } + }, + "node_modules/babel-plugin-polyfill-regenerator": { + "version": "0.6.2", + "resolved": "https://registry.npmjs.org/babel-plugin-polyfill-regenerator/-/babel-plugin-polyfill-regenerator-0.6.2.tgz", + "integrity": "sha512-2R25rQZWP63nGwaAswvDazbPXfrM3HwVoBXK6HcqeKrSrL/JqcC/rDcf95l4r7LXLyxDXc8uQDa064GubtCABg==", + "dev": true, + "license": "MIT", + "dependencies": { + "@babel/helper-define-polyfill-provider": "^0.6.2" + }, + "peerDependencies": { + "@babel/core": "^7.4.0 || ^8.0.0-0 <8.0.0" + } + }, "node_modules/babel-preset-current-node-syntax": { "version": "1.0.1", "resolved": "https://registry.npmjs.org/babel-preset-current-node-syntax/-/babel-preset-current-node-syntax-1.0.1.tgz", @@ -1905,6 +3279,20 @@ "dev": true, "license": "MIT" }, + "node_modules/core-js-compat": { + "version": "3.37.1", + "resolved": "https://registry.npmjs.org/core-js-compat/-/core-js-compat-3.37.1.tgz", + "integrity": "sha512-9TNiImhKvQqSUkOvk/mMRZzOANTiEVC7WaBNhHcKM7x+/5E1l5NvsysR19zuDQScE8k+kfQXWRN3AtS/eOSHpg==", + "dev": true, + "license": "MIT", + "dependencies": { + "browserslist": "^4.23.0" + }, + "funding": { + "type": "opencollective", + "url": "https://opencollective.com/core-js" + } + }, "node_modules/create-jest": { "version": "29.7.0", "resolved": "https://registry.npmjs.org/create-jest/-/create-jest-29.7.0.tgz", @@ -3692,6 +5080,15 @@ "node": ">=6" } }, + "node_modules/jsonrepair": { + "version": "3.8.0", + "resolved": "https://registry.npmjs.org/jsonrepair/-/jsonrepair-3.8.0.tgz", + "integrity": "sha512-89lrxpwp+IEcJ6kwglF0HH3Tl17J08JEpYfXnvvjdp4zV4rjSoGu2NdQHxBs7yTOk3ETjTn9du48pBy8iBqj1w==", + "license": "ISC", + "bin": { + "jsonrepair": "bin/cli.js" + } + }, "node_modules/keyv": { "version": "4.5.4", "resolved": "https://registry.npmjs.org/keyv/-/keyv-4.5.4.tgz", @@ -3755,6 +5152,13 @@ "node": ">=8" } }, + "node_modules/lodash.debounce": { + "version": "4.0.8", + "resolved": "https://registry.npmjs.org/lodash.debounce/-/lodash.debounce-4.0.8.tgz", + "integrity": "sha512-FT1yDzDYEoYWhnSGnpE/4Kj1fLZkDFyqRb7fNt6FdYOSxlUWAtp42Eh6Wb0rGIv/m9Bgo7x4GhQbm5Ys4SG5ow==", + "dev": true, + "license": "MIT" + }, "node_modules/lodash.merge": { "version": "4.6.2", "resolved": "https://registry.npmjs.org/lodash.merge/-/lodash.merge-4.6.2.tgz", @@ -4336,6 +5740,83 @@ "dev": true, "license": "MIT" 
}, + "node_modules/regenerate": { + "version": "1.4.2", + "resolved": "https://registry.npmjs.org/regenerate/-/regenerate-1.4.2.tgz", + "integrity": "sha512-zrceR/XhGYU/d/opr2EKO7aRHUeiBI8qjtfHqADTwZd6Szfy16la6kqD0MIUs5z5hx6AaKa+PixpPrR289+I0A==", + "dev": true, + "license": "MIT" + }, + "node_modules/regenerate-unicode-properties": { + "version": "10.1.1", + "resolved": "https://registry.npmjs.org/regenerate-unicode-properties/-/regenerate-unicode-properties-10.1.1.tgz", + "integrity": "sha512-X007RyZLsCJVVrjgEFVpLUTZwyOZk3oiL75ZcuYjlIWd6rNJtOjkBwQc5AsRrpbKVkxN6sklw/k/9m2jJYOf8Q==", + "dev": true, + "license": "MIT", + "dependencies": { + "regenerate": "^1.4.2" + }, + "engines": { + "node": ">=4" + } + }, + "node_modules/regenerator-runtime": { + "version": "0.14.1", + "resolved": "https://registry.npmjs.org/regenerator-runtime/-/regenerator-runtime-0.14.1.tgz", + "integrity": "sha512-dYnhHh0nJoMfnkZs6GmmhFknAGRrLznOu5nc9ML+EJxGvrx6H7teuevqVqCuPcPK//3eDrrjQhehXVx9cnkGdw==", + "dev": true, + "license": "MIT" + }, + "node_modules/regenerator-transform": { + "version": "0.15.2", + "resolved": "https://registry.npmjs.org/regenerator-transform/-/regenerator-transform-0.15.2.tgz", + "integrity": "sha512-hfMp2BoF0qOk3uc5V20ALGDS2ddjQaLrdl7xrGXvAIow7qeWRM2VA2HuCHkUKk9slq3VwEwLNK3DFBqDfPGYtg==", + "dev": true, + "license": "MIT", + "dependencies": { + "@babel/runtime": "^7.8.4" + } + }, + "node_modules/regexpu-core": { + "version": "5.3.2", + "resolved": "https://registry.npmjs.org/regexpu-core/-/regexpu-core-5.3.2.tgz", + "integrity": "sha512-RAM5FlZz+Lhmo7db9L298p2vHP5ZywrVXmVXpmAD9GuL5MPH6t9ROw1iA/wfHkQ76Qe7AaPF0nGuim96/IrQMQ==", + "dev": true, + "license": "MIT", + "dependencies": { + "@babel/regjsgen": "^0.8.0", + "regenerate": "^1.4.2", + "regenerate-unicode-properties": "^10.1.0", + "regjsparser": "^0.9.1", + "unicode-match-property-ecmascript": "^2.0.0", + "unicode-match-property-value-ecmascript": "^2.1.0" + }, + "engines": { + "node": ">=4" + } + }, + "node_modules/regjsparser": { + "version": "0.9.1", + "resolved": "https://registry.npmjs.org/regjsparser/-/regjsparser-0.9.1.tgz", + "integrity": "sha512-dQUtn90WanSNl+7mQKcXAgZxvUe7Z0SqXlgzv0za4LwiUhyzBC58yQO3liFoUgu8GiJVInAhJjkj1N0EtQ5nkQ==", + "dev": true, + "license": "BSD-2-Clause", + "dependencies": { + "jsesc": "~0.5.0" + }, + "bin": { + "regjsparser": "bin/parser" + } + }, + "node_modules/regjsparser/node_modules/jsesc": { + "version": "0.5.0", + "resolved": "https://registry.npmjs.org/jsesc/-/jsesc-0.5.0.tgz", + "integrity": "sha512-uZz5UnB7u4T9LvwmFqXii7pZSouaRPorGs5who1Ip7VO0wxanFvBL7GkM6dTHlgX+jhBApRetaWpnDabOeTcnA==", + "dev": true, + "bin": { + "jsesc": "bin/jsesc" + } + }, "node_modules/require-directory": { "version": "2.1.1", "resolved": "https://registry.npmjs.org/require-directory/-/require-directory-2.1.1.tgz", @@ -4731,6 +6212,50 @@ "integrity": "sha512-JlCMO+ehdEIKqlFxk6IfVoAUVmgz7cU7zD/h9XZ0qzeosSHmUJVOzSQvvYSYWXkFXC+IfLKSIffhv0sVZup6pA==", "license": "MIT" }, + "node_modules/unicode-canonical-property-names-ecmascript": { + "version": "2.0.0", + "resolved": "https://registry.npmjs.org/unicode-canonical-property-names-ecmascript/-/unicode-canonical-property-names-ecmascript-2.0.0.tgz", + "integrity": "sha512-yY5PpDlfVIU5+y/BSCxAJRBIS1Zc2dDG3Ujq+sR0U+JjUevW2JhocOF+soROYDSaAezOzOKuyyixhD6mBknSmQ==", + "dev": true, + "license": "MIT", + "engines": { + "node": ">=4" + } + }, + "node_modules/unicode-match-property-ecmascript": { + "version": "2.0.0", + "resolved": 
"https://registry.npmjs.org/unicode-match-property-ecmascript/-/unicode-match-property-ecmascript-2.0.0.tgz", + "integrity": "sha512-5kaZCrbp5mmbz5ulBkDkbY0SsPOjKqVS35VpL9ulMPfSl0J0Xsm+9Evphv9CoIZFwre7aJoa94AY6seMKGVN5Q==", + "dev": true, + "license": "MIT", + "dependencies": { + "unicode-canonical-property-names-ecmascript": "^2.0.0", + "unicode-property-aliases-ecmascript": "^2.0.0" + }, + "engines": { + "node": ">=4" + } + }, + "node_modules/unicode-match-property-value-ecmascript": { + "version": "2.1.0", + "resolved": "https://registry.npmjs.org/unicode-match-property-value-ecmascript/-/unicode-match-property-value-ecmascript-2.1.0.tgz", + "integrity": "sha512-qxkjQt6qjg/mYscYMC0XKRn3Rh0wFPlfxB0xkt9CfyTvpX1Ra0+rAmdX2QyAobptSEvuy4RtpPRui6XkV+8wjA==", + "dev": true, + "license": "MIT", + "engines": { + "node": ">=4" + } + }, + "node_modules/unicode-property-aliases-ecmascript": { + "version": "2.1.0", + "resolved": "https://registry.npmjs.org/unicode-property-aliases-ecmascript/-/unicode-property-aliases-ecmascript-2.1.0.tgz", + "integrity": "sha512-6t3foTQI9qne+OZoVQB/8x8rk2k1eVy1gRXhV3oFQ5T6R1dqQ1xtin3XqSlx3+ATBkliTaR/hHyJBm+LVPNM8w==", + "dev": true, + "license": "MIT", + "engines": { + "node": ">=4" + } + }, "node_modules/update-browserslist-db": { "version": "1.0.16", "resolved": "https://registry.npmjs.org/update-browserslist-db/-/update-browserslist-db-1.0.16.tgz", diff --git a/package.json b/package.json index ec2731f..d597b7b 100644 --- a/package.json +++ b/package.json @@ -1,8 +1,8 @@ { "name": "llm-interface", - "version": "1.0.1", + "version": "2.0.0", "main": "src/index.js", - "description": "A simple, unified interface for integrating and interacting with multiple Large Language Model (LLM) APIs, including OpenAI, AI21 Studio, Anthropic, Cohere, Google Gemini, Goose AI, Groq, Hugging Face, Mistral AI, Perplexity, Reka AI, and LLaMA.cpp.", + "description": "A simple, unified interface for integrating and interacting with multiple Large Language Model (LLM) APIs, including OpenAI, AI21 Studio, Anthropic, Cloudflare AI, Cohere, Fireworks AI, Google Gemini, Goose AI, Groq, Hugging Face, Mistral AI, Perplexity, Reka AI, and LLaMA.cpp.", "type": "commonjs", "scripts": { "test": "jest" @@ -18,7 +18,10 @@ "llamacpp", "openai", "chatgpt", - "reka" + "reka", + "rekaai", + "cloudflare", + "cloudflareai" ], "author": "Sam Estrin", "license": "MIT", @@ -37,11 +40,16 @@ "dotenv": "^16.4.5", "flat-cache": "^5.0.0", "groq-sdk": "^0.5.0", + "jsonrepair": "^3.8.0", "loglevel": "^1.9.1", "openai": "^4.52.0" }, "devDependencies": { + "@babel/core": "^7.24.7", + "@babel/plugin-syntax-dynamic-import": "^7.8.3", + "@babel/preset-env": "^7.24.7", "@eslint/js": "^9.5.0", + "babel-jest": "^29.7.0", "eslint": "^9.5.0", "globals": "^15.6.0", "jest": "^29.7.0", diff --git a/src/cache/llm-interface-cache b/src/cache/llm-interface-cache deleted file mode 100644 index d291edc..0000000 --- a/src/cache/llm-interface-cache +++ /dev/null @@ -1 +0,0 @@ -[{"96952b48e4387827ab20309155c3d7c6":"1","465cb21dfc1cd1de336c62654be347f1":"2"},"## The Importance of Low Latency LLMs\n\nLow latency Large Language Models (LLMs) are crucial for numerous applications, as they enable **real-time interactions** and **seamless integration** with various systems. Here's why their importance is growing:\n\n**1. Enhanced User Experience:**\n\n* **Interactive Applications:** Imagine a chatbot that responds instantly to your queries, or a real-time translation tool that seamlessly translates your spoken words. 
Low latency LLMs make these applications possible, providing a","## The Importance of Low Latency LLMs\n\nLow latency Large Language Models (LLMs) are crucial for several applications, bringing significant advantages over traditional high-latency models. Here's why:\n\n**1. Real-Time Interactions:**\n\n* **Conversational AI:** Low latency enables smooth, natural conversations with AI chatbots. Users expect immediate responses, and delays can disrupt the flow and feel unnatural.\n* **Real-Time Translation:** Low latency translates languages instantly, making communication seamless in multilingual environments.\n* **Interactive Gaming:** Low latency allows for AI characters to respond and react realistically, enhancing the gaming experience.\n\n**2. Enhanced User Experience:**\n\n* **Faster Responses:** Users appreciate quick responses, whether it's receiving search results, getting answers to questions, or interacting with AI tools.\n* **Increased Productivity:** Reduced latency allows users to accomplish tasks more efficiently, without waiting for AI responses.\n* **Improved Accessibility:** Low latency makes AI accessible to users with limited bandwidth or slow connections.\n\n**3. Scalability and Performance:**\n\n* **High Throughput:** Low latency LLMs can handle a high volume of requests simultaneously, making them suitable for large-scale applications.\n* **Resource Efficiency:** Lower latency often translates to less computational power needed, leading to cost savings and efficient use of resources.\n* **Improved Accuracy:** Lower latency allows for faster iteration and training cycles, leading to more accurate and reliable models.\n\n**4. Emerging Applications:**\n\n* **Autonomous Vehicles:** Real-time decision-making is crucial for safe and effective autonomous driving, requiring low latency for processing environmental information.\n* **Medical Diagnosis:** Low latency AI can assist medical professionals in diagnosing conditions faster and more accurately, saving lives.\n* **Financial Trading:** Low latency is essential for algorithms that analyze market data and execute trades in fractions of a second.\n\n**Overall, low latency LLMs are crucial for the future of AI, enabling faster, more responsive, and efficient interactions across various fields. 
As technology advances, we can expect even lower latency models, unlocking new possibilities and revolutionizing the way we interact with the world around us.**\n"]
\ No newline at end of file
diff --git a/src/config/config.js b/src/config/config.js
index 0dbd9b5..a727f0f 100644
--- a/src/config/config.js
+++ b/src/config/config.js
@@ -11,11 +11,14 @@ module.exports = {
   geminiApiKey: process.env.GEMINI_API_KEY,
   llamaURL: process.env.LLAMACPP_URL,
   anthropicApiKey: process.env.ANTHROPIC_API_KEY,
-  rekaApiKey: process.env.REKA_API_KEY,
-  gooseApiKey: process.env.GOOSE_API_KEY,
+  rekaaiApiKey: process.env.REKAAI_API_KEY,
+  gooseaiApiKey: process.env.GOOSEAI_API_KEY,
   cohereApiKey: process.env.COHERE_API_KEY,
-  mistralApiKey: process.env.MISTRAL_API_KEY,
+  mistralaiApiKey: process.env.MISTRALAI_API_KEY,
   huggingfaceApiKey: process.env.HUGGINGFACE_API_KEY,
   perplexityApiKey: process.env.PERPLEXITY_API_KEY,
   ai21ApiKey: process.env.AI21_API_KEY,
+  cloudflareaiApiKey: process.env.CLOUDFLARE_API_KEY,
+  cloudflareaiAccountId: process.env.CLOUDFLARE_ACCOUNT_ID,
+  fireworksaiApiKey: process.env.FIREWORKSAI_API_KEY,
+  azureOpenAIApiKey: process.env.AZURE_OPENAI_API_KEY, // assumed env var name; imported by src/interfaces/azureai.js
+  friendliApiKey: process.env.FRIENDLI_API_KEY, // assumed env var name; imported by src/interfaces/friendliai.js
 };
diff --git a/src/config/llmProviders.json b/src/config/llmProviders.json
index 0470d99..6e31754 100644
--- a/src/config/llmProviders.json
+++ b/src/config/llmProviders.json
@@ -48,7 +48,7 @@
       "small": { "name": "gemini-small" }
     }
   },
-  "goose": {
+  "gooseai": {
     "url": "https://api.goose.ai/v1/engines",
     "model": {
       "default": { "name": "gpt-neo-20b", "tokens": 2048 },
@@ -83,12 +83,12 @@
   "llamacpp": {
     "url": "http://localhost:8080/completion"
   },
-  "mistral": {
+  "mistralai": {
     "url": "https://api.mistral.ai/v1/chat/completions",
     "model": {
       "default": { "name": "mistral-large-latest", "tokens": 32768 },
       "large": { "name": "mistral-large-latest", "tokens": 32768 },
       "small": { "name": "mistral-small", "tokens": 32768 }
     }
   },
   "perplexity": {
@@ -99,12 +99,49 @@
       "small": { "name": "llama-3-sonar-small-32k-online", "tokens": 28000 }
     }
   },
-  "reka": {
+  "rekaai": {
     "url": "https://api.reka.ai/v1/chat",
     "model": {
       "default": { "name": "reka-core" },
       "large": { "name": "reka-core" },
       "small": { "name": "reka-edge" }
     }
+  },
+  "cloudflareai": {
+    "url": "https://api.cloudflare.com/client/v4/accounts",
+    "model": {
+      "default": { "name": "@cf/meta/llama-3-8b-instruct", "tokens": 4096 },
+      "large": { "name": "@hf/thebloke/llama-2-13b-chat-awq", "tokens": 8192 },
+      "small": {
+        "name": "@cf/tinyllama/tinyllama-1.1b-chat-v1.0",
+        "tokens": 2048
+      }
+    },
+    "note": "url value is partial"
+  },
+  "fireworksai": {
+    "url": "https://api.fireworks.ai/inference/v1/chat/completions",
+    "model": {
+      "default": {
+        "name": "accounts/fireworks/models/llama-v3-8b-instruct",
+        "tokens": 8192
+      },
+      "large": {
+        "name": "accounts/fireworks/models/llama-v3-70b-instruct",
+        "tokens": 8192
+      },
+      "small": {
+        "name": "accounts/fireworks/models/phi-3-mini-128k-instruct",
+        "tokens": 128000
+      }
+    }
+  },
+  "friendli": {
+    "url": "https://inference.friendli.ai/v1/chat/completions",
+    "model": {
+      "default": { "name": "mixtral-8x7b-instruct-v0-1", "tokens": 32768 },
+      "large": { "name": "meta-llama-3-70b-instruct", "tokens": 8192 },
+      "small": { "name": "mistral-7b-instruct-v0-2", "tokens": 4096 }
+    }
+  }
 }
diff --git a/src/index.js b/src/index.js
index 3dd037f..0b873e2 100644
--- a/src/index.js
+++ b/src/index.js
@@ -4,18 +4,25 @@
  */

 const modules = {
-  openai: './interfaces/openai',
+  ai21: './interfaces/ai21',
   anthropic: './interfaces/anthropic',
+  azureai:
'./interfaces/azureai', + cloudflareai: './interfaces/cloudflareai', + cohere: './interfaces/cohere', + fireworksai: './interfaces/fireworksai', + friendliai: './interfaces/friendliai', gemini: './interfaces/gemini', - llamacpp: './interfaces/llamacpp', - reka: './interfaces/reka', + gooseai: './interfaces/gooseai', groq: './interfaces/groq', - goose: './interfaces/goose', - cohere: './interfaces/cohere', - mistral: './interfaces/mistral', huggingface: './interfaces/huggingface', - ai21: './interfaces/ai21', + llamacpp: './interfaces/llamacpp', + mistralai: './interfaces/mistralai', + openai: './interfaces/openai', perplexity: './interfaces/perplexity', + rekaai: './interfaces/rekaai', + taskingai: './interfaces/taskingai', + telnyx: './interfaces/telnyx', + togetherai: './interfaces/togetherai', }; const LLMInterface = {}; @@ -32,8 +39,6 @@ Object.keys(modules).forEach((key) => { }); }); -const handlers = LLMInterface; // alias to keep backward compatibility - const LLMInstances = {}; // Persistent LLM instances /** @@ -41,7 +46,7 @@ const LLMInstances = {}; // Persistent LLM instances * Reuses existing LLM instances for the given module and API key to optimize resource usage. * * @param {string} module - The name of the LLM module (e.g., "openai"). - * @param {string} apiKey - The API key for the LLM. + * @param {string|array} apiKey - The API key for the LLM or an array containing the API key and user ID. * @param {string} message - The message to send to the LLM. * @param {object} [options={}] - Additional options for the message. * @param {object} [interfaceOptions={}] - Options for initializing the interface. @@ -56,32 +61,35 @@ async function LLMInterfaceSendMessage( interfaceOptions = {}, ) { if (!LLMInterface[module]) { - throw new Error(`Module ${module} is not supported.`); + throw new Error(`Unsupported LLM module: ${module}`); } if (!apiKey) { - throw new Error(`API key for ${module} is not provided.`); + throw new Error(`Missing API key for LLM module: ${module}`); } - if (!LLMInstances[module]) { - LLMInstances[module] = {}; + let userId; + if (Array.isArray(apiKey)) { + [apiKey, userId] = apiKey; } + LLMInstances[module] = LLMInstances[module] || {}; + if (!LLMInstances[module][apiKey]) { - LLMInstances[module][apiKey] = new LLMInterface[module](apiKey); + LLMInstances[module][apiKey] = userId + ? 
new LLMInterface[module](apiKey, userId)
+      : new LLMInterface[module](apiKey);
   }

   const llmInstance = LLMInstances[module][apiKey];
+
   try {
-    const response = await llmInstance.sendMessage(
-      message,
-      options,
-      interfaceOptions,
-    );
-    return response;
+    return await llmInstance.sendMessage(message, options, interfaceOptions);
   } catch (error) {
-    throw new Error(`LLMInterfaceSendMessage: ${error}`);
+    throw new Error(
+      `Failed to send message using LLM module ${module}: ${error.message}`,
+    );
   }
 }

-module.exports = { LLMInterface, LLMInterfaceSendMessage, handlers };
+module.exports = { LLMInterface, LLMInterfaceSendMessage };
diff --git a/src/interfaces/ai21.js b/src/interfaces/ai21.js
index 669c37d..2713b1f 100644
--- a/src/interfaces/ai21.js
+++ b/src/interfaces/ai21.js
@@ -6,12 +6,13 @@
  */

 const axios = require('axios');
-const { getFromCache, saveToCache } = require('../utils/cache');
+const { adjustModelAlias } = require('../utils/adjustModelAlias.js');
+const { getFromCache, saveToCache } = require('../utils/cache.js');
 const {
   returnSimpleMessageObject,
   returnModelByAlias,
-} = require('../utils/utils');
-const { ai21ApiKey } = require('../config/config');
+  parseJSON,
+} = require('../utils/utils.js');
+const { ai21ApiKey } = require('../config/config.js');
 const config = require('../config/llmProviders.json');
 const log = require('loglevel');

@@ -105,6 +106,17 @@ class AI21 {
         responseContent = response.data.choices[0].message.content;
       }
+      // Attempt to repair the object if needed
+      if (interfaceOptions.attemptJsonRepair) {
+        responseContent = await parseJSON(
+          responseContent,
+          interfaceOptions.attemptJsonRepair,
+        );
+      }
+
+      // Build response object
+      responseContent = { results: responseContent };
+
       // Cache the response content if cache timeout is set
       if (cacheTimeoutSeconds && responseContent) {
         saveToCache(cacheKey, responseContent, cacheTimeoutSeconds);
@@ -135,4 +147,6 @@
   }
 }

+AI21.prototype.adjustModelAlias = adjustModelAlias;
+
 module.exports = AI21;
diff --git a/src/interfaces/anthropic.js b/src/interfaces/anthropic.js
index 07caa9e..d23823b 100644
--- a/src/interfaces/anthropic.js
+++ b/src/interfaces/anthropic.js
@@ -6,12 +6,13 @@
  */

 const AnthropicSDK = require('@anthropic-ai/sdk');
-const { getFromCache, saveToCache } = require('../utils/cache');
+const { adjustModelAlias } = require('../utils/adjustModelAlias.js');
+const { getFromCache, saveToCache } = require('../utils/cache.js');
 const {
   returnSimpleMessageObject,
   returnModelByAlias,
-} = require('../utils/utils');
-const { anthropicApiKey } = require('../config/config');
+  parseJSON,
+} = require('../utils/utils.js');
+const { anthropicApiKey } = require('../config/config.js');
 const config = require('../config/llmProviders.json');
 const log = require('loglevel');

@@ -104,6 +105,16 @@ class Anthropic {
         responseContent = response.content[0].text;
       }
+      // Attempt to repair the object if needed
+      if (interfaceOptions.attemptJsonRepair) {
+        responseContent = await parseJSON(
+          responseContent,
+          interfaceOptions.attemptJsonRepair,
+        );
+      }
+      // Build response object
+      responseContent = { results: responseContent };
+
       // Cache the response content if cache timeout is set
       if (cacheTimeoutSeconds && responseContent) {
         saveToCache(cacheKey, responseContent, cacheTimeoutSeconds);
@@ -133,5 +144,6 @@
     }
   }
 }
+Anthropic.prototype.adjustModelAlias = adjustModelAlias;

 module.exports = Anthropic;
diff --git a/src/interfaces/azureai.js b/src/interfaces/azureai.js
index 38c90aa..98c28c0 100644
--- a/src/interfaces/azureai.js
+++ b/src/interfaces/azureai.js
@@ -1,39 +1,44 @@
 /**
- * @file src/interfaces/anthropic.js
- * @class Anthropic
- * @description Wrapper class for the Anthropic API.
- * @param {string} apiKey - The API key for the Anthropic API.
+ * @file src/interfaces/azureai.js
+ * @class AzureAI
+ * @description Wrapper class for the AzureAI API.
+ * @param {string} apiKey - The API key for the AzureAI API.
  */
-
-const AnthropicSDK = require('@anthropic-ai/sdk');
-const { getFromCache, saveToCache } = require('../utils/cache');
+const axios = require('axios');
+const { adjustModelAlias } = require('../utils/adjustModelAlias.js');
+const { getFromCache, saveToCache } = require('../utils/cache.js');
 const {
   returnSimpleMessageObject,
   returnModelByAlias,
-} = require('../utils/utils');
-const { anthropicApiKey } = require('../config/config');
+  parseJSON,
+} = require('../utils/utils.js');
+const { azureOpenAIApiKey } = require('../config/config.js');
 const config = require('../config/llmProviders.json');
 const log = require('loglevel');

-// Anthropic class for interacting with the Anthropic API
-class Anthropic {
+// AzureAI class for interacting with the Azure OpenAI API
+class AzureAI {
   /**
-   * Constructor for the Anthropic class.
-   * @param {string} apiKey - The API key for the Anthropic API.
+   * Constructor for the AzureAI class.
+   * @param {string} apiKey - The API key for the Azure OpenAI API.
    */
   constructor(apiKey) {
-    this.interfaceName = 'anthropic';
-    this.anthropic = new AnthropicSDK({
-      apiKey: apiKey || anthropicApiKey,
+    this.interfaceName = 'azureai';
+    this.apiKey = apiKey || azureOpenAIApiKey;
+    this.client = axios.create({
+      baseURL: 'https://api.openai.azure.com', // NOTE: Azure OpenAI endpoints are per-resource, e.g. https://<resource>.openai.azure.com
+      headers: {
+        'Content-Type': 'application/json',
+        Authorization: `Bearer ${this.apiKey}`,
+      },
     });
   }

   /**
-   * Send a message to the Anthropic API.
+   * Send a message to the Azure OpenAI API.
    * @param {string|object} message - The message to send or a message object.
    * @param {object} options - Additional options for the API request.
    * @param {object} interfaceOptions - Options specific to the interface.
-   * @returns {string} The response content from the Anthropic API.
+   * @returns {object} The response content from the Azure OpenAI API.
    */
   async sendMessage(message, options = {}, interfaceOptions = {}) {
     // Convert a string message to a simple message object
     const messageObject =
       typeof message === 'string'
         ? returnSimpleMessageObject(message)
         : message;
+
     // Get the cache timeout value from interfaceOptions
     const cacheTimeoutSeconds =
       typeof interfaceOptions === 'number'
@@ -49,38 +55,37 @@
     // Extract model and messages from the message object
     const { model, messages } = messageObject;
+
     // Get the selected model based on alias or default
     const selectedModel = returnModelByAlias(this.interfaceName, model);

-    // Set default value for max_tokens
-    const { max_tokens = 150 } = options;
-
-    // Convert messages to the format expected by the Anthropic API
-    const convertedMessages = messages.map((msg, index) => {
-      if (index === 0) {
-        return { ...msg, role: 'user' };
-        // If this is the first message, set the role to "user"
-      } else if (msg.role === 'system') {
-        return { ...msg, role: 'assistant' };
-        // If the message role is "system", set it to "assistant"
-      } else {
-        return { ...msg, role: index % 2 === 0 ? 'user' : 'assistant' };
-        // Otherwise, alternate between "user" and "assistant" roles
-      }
-    });
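+    // NOTE: the request routing below is a sketch that assumes the standard
+    // Azure OpenAI REST shape, with the selected model alias standing in for
+    // the deployment name:
+    //   POST {endpoint}/openai/deployments/{deployment}/chat/completions?api-version=YYYY-MM-DD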
-    // Prepare the parameters for the API call
-    const params = {
+    // Set default values for temperature, max_tokens, and stop_sequences
+    const {
+      temperature = 0.7,
+      max_tokens = 150,
+      stop_sequences = ['<|endoftext|>'],
+      response_format = '',
+    } = options;
+
+    // Prepare the request body for the API call
+    const requestBody = {
       model:
         selectedModel ||
         options.model ||
         config[this.interfaceName].model.default.name,
-      messages: convertedMessages,
+      messages,
       max_tokens,
       ...options,
     };

-    // Generate a cache key based on the parameters
-    const cacheKey = JSON.stringify(params);
+    // Add response_format if specified
+    if (response_format) {
+      requestBody.response_format = { type: response_format };
+    }
+
+    // Generate a cache key based on the request body
+    const cacheKey = JSON.stringify(requestBody);
+
     // Check if a cached response exists for the request
     if (cacheTimeoutSeconds) {
       const cachedResponse = getFromCache(cacheKey);
@@ -92,21 +97,35 @@
     // Set up retry mechanism with exponential backoff
     let retryAttempts = interfaceOptions.retryAttempts || 0;
     let currentRetry = 0;
+
     while (retryAttempts >= 0) {
       try {
-        // Send the request to the Anthropic API
-        const response = await this.anthropic.messages.create(params);
+        // Send the request to the Azure OpenAI API
+        const response = await this.client.post(
+          `/openai/deployments/${selectedModel}/chat/completions?api-version=2024-02-01`,
+          requestBody,
+        );
+
         // Extract the response content from the API response
         let responseContent = null;
         if (
           response &&
-          response.content &&
-          response.content[0] &&
-          response.content[0].text
+          response.data &&
+          response.data.choices &&
+          response.data.choices[0] &&
+          response.data.choices[0].message
         ) {
-          responseContent = response.content[0].text;
+          responseContent = response.data.choices[0].message.content;
         }

+        // Attempt to repair the object if needed
+        if (interfaceOptions.attemptJsonRepair) {
+          responseContent = await parseJSON(
+            responseContent,
+            interfaceOptions.attemptJsonRepair,
+          );
+        }
+
+        // Build response object
+        responseContent = { results: responseContent };
+
         // Cache the response content if cache timeout is set
         if (cacheTimeoutSeconds && responseContent) {
           saveToCache(cacheKey, responseContent, cacheTimeoutSeconds);
@@ -138,4 +157,7 @@
   }
 }

-module.exports = Anthropic;
+// Adjust model alias for backwards compatibility
+AzureAI.prototype.adjustModelAlias = adjustModelAlias;
+
+module.exports = AzureAI;
diff --git a/src/interfaces/cloudflareai.js b/src/interfaces/cloudflareai.js
new file mode 100644
index 0000000..12bfda6
--- /dev/null
+++ b/src/interfaces/cloudflareai.js
@@ -0,0 +1,172 @@
+/**
+ * @file src/interfaces/cloudflareai.js
+ * @class CloudflareAI
+ * @description Wrapper class for the CloudflareAI API.
+ * @param {string} apiKey - The API key for the CloudflareAI API.
+ */
+
+const axios = require('axios');
+const { adjustModelAlias } = require('../utils/adjustModelAlias.js');
+const { getFromCache, saveToCache } = require('../utils/cache.js');
+const {
+  returnSimpleMessageObject,
+  returnModelByAlias,
+  parseJSON,
+} = require('../utils/utils.js');
+const {
+  cloudflareaiApiKey,
+  cloudflareaiAccountId,
+} = require('../config/config.js');
+const config = require('../config/llmProviders.json');
+const log = require('loglevel');
+
+// CloudflareAI class for interacting with the CloudflareAI LLM API
+class CloudflareAI {
+  /**
+   * Constructor for the CloudflareAI class.
+   * @param {string} apiKey - The API key for the CloudflareAI LLM API.
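+   * @param {string} [accountId] - The Cloudflare account ID; falls back to the
+   *   CLOUDFLARE_ACCOUNT_ID environment variable (via config) when omitted.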
+   */
+  constructor(apiKey, accountId) {
+    this.interfaceName = 'cloudflareai';
+
+    this.apiKey = apiKey || cloudflareaiApiKey;
+    this.accountId = accountId || cloudflareaiAccountId;
+    this.client = axios.create({
+      baseURL: config[this.interfaceName].url,
+      headers: {
+        'Content-Type': 'application/json',
+        Authorization: `Bearer ${this.apiKey}`,
+      },
+    });
+  }
+
+  /**
+   * Send a message to the CloudflareAI LLM API.
+   * @param {string|object} message - The message to send or a message object.
+   * @param {object} options - Additional options for the API request.
+   * @param {object} interfaceOptions - Options specific to the interface.
+   * @returns {object} The response content from the CloudflareAI LLM API.
+   */
+  async sendMessage(message, options = {}, interfaceOptions = {}) {
+    // Convert a string message to a simple message object
+    const messageObject =
+      typeof message === 'string'
+        ? returnSimpleMessageObject(message)
+        : message;
+
+    // Get the cache timeout value from interfaceOptions
+    const cacheTimeoutSeconds =
+      typeof interfaceOptions === 'number'
+        ? interfaceOptions
+        : interfaceOptions.cacheTimeoutSeconds;
+
+    // Extract model, lora, and messages from the message object
+    const { model, lora, messages } = messageObject;
+
+    // Get the selected model based on alias or default
+    let selectedModel = returnModelByAlias(this.interfaceName, model);
+
+    // Set default values for temperature, max_tokens, stop_sequences, frequency_penalty, and presence_penalty
+    const {
+      temperature = 0.7,
+      max_tokens = 150,
+      stop_sequences = ['<|endoftext|>'],
+      frequency_penalty = 0,
+      presence_penalty = 0,
+    } = options;
+
+    const account_id = interfaceOptions.account_id || this.accountId;
+
+    // Update selected model
+    selectedModel =
+      selectedModel ||
+      options.model ||
+      config[this.interfaceName].model.default.name;
+
+    // Prepare the request body for the API call
+    const requestBody = {
+      messages,
+      max_tokens,
+      ...options,
+    };
+
+    // Append the model name to the cache key
+    let cacheKeyFromRequestBody = requestBody;
+    cacheKeyFromRequestBody.model = selectedModel;
+
+    // Generate a cache key based on cacheKeyFromRequestBody
+    const cacheKey = JSON.stringify(cacheKeyFromRequestBody);
+
+    // Check if a cached response exists for the request
+    if (cacheTimeoutSeconds) {
+      const cachedResponse = getFromCache(cacheKey);
+      if (cachedResponse) {
+        return cachedResponse;
+      }
+    }
+
+    // Set up retry mechanism with increasing delays
+    let retryAttempts = interfaceOptions.retryAttempts || 0;
+    let currentRetry = 0;
+
+    while (retryAttempts >= 0) {
+      try {
+        // Send the request to the CloudflareAI LLM API
+        const response = await this.client.post(
+          `/${account_id}/ai/run/${selectedModel}`,
+          requestBody,
+        );
+
+        // Extract the response content from the API response
+        let responseContent = null;
+        if (
+          response &&
+          response.data &&
+          response.data.result &&
+          response.data.result.response
+        ) {
+          responseContent = response.data.result.response;
+        }
+
+        // Attempt to repair the object if needed
+        if (interfaceOptions.attemptJsonRepair) {
+          responseContent = await parseJSON(
+            responseContent,
+            interfaceOptions.attemptJsonRepair,
+          );
+        }
+
+        // Build response object
+        responseContent = { results: responseContent };
+
+        // Cache the response content if cache timeout is set
+        if (cacheTimeoutSeconds && responseContent) {
+          saveToCache(cacheKey, responseContent, cacheTimeoutSeconds);
+        }
+
+        // Return the response content
+        return responseContent;
+      } catch (error) {
+        // Decrease the number of retry attempts
+        retryAttempts--;
+        if (retryAttempts < 0) {
+          // Log any errors and throw the error
+          log.error(
+            'Response data:',
+            error.response ? error.response.data : null,
+          );
+          throw error;
+        }
+
+        // Calculate the delay for the next retry attempt
+        let retryMultiplier = interfaceOptions.retryMultiplier || 0.3;
+        const delay = (currentRetry + 1) * retryMultiplier * 1000;
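+        // e.g. with the default retryMultiplier of 0.3 the retry delays grow
+        // linearly: 300 ms, 600 ms, 900 ms, ...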
+        // Wait for the specified delay before retrying
+        await new Promise((resolve) => setTimeout(resolve, delay));
+        currentRetry++;
+      }
+    }
+  }
+}
+
+// Adjust model alias for backwards compatibility
+CloudflareAI.prototype.adjustModelAlias = adjustModelAlias;
+
+module.exports = CloudflareAI;
diff --git a/src/interfaces/cohere.js b/src/interfaces/cohere.js
index c30ab73..46cd8ec 100644
--- a/src/interfaces/cohere.js
+++ b/src/interfaces/cohere.js
@@ -6,12 +6,13 @@
  */

 const axios = require('axios');
-const { getFromCache, saveToCache } = require('../utils/cache');
+const { adjustModelAlias } = require('../utils/adjustModelAlias.js');
+const { getFromCache, saveToCache } = require('../utils/cache.js');
 const {
   returnSimpleMessageObject,
   returnModelByAlias,
-} = require('../utils/utils');
-const { cohereApiKey } = require('../config/config');
+  parseJSON,
+} = require('../utils/utils.js');
+const { cohereApiKey } = require('../config/config.js');
 const config = require('../config/llmProviders.json');
 const log = require('loglevel');

@@ -136,6 +137,15 @@ class Cohere {
       if (response && response.data && response.data.text) {
         responseContent = response.data.text;
       }
+      // Attempt to repair the object if needed
+      if (interfaceOptions.attemptJsonRepair) {
+        responseContent = await parseJSON(
+          responseContent,
+          interfaceOptions.attemptJsonRepair,
+        );
+      }
+      // Build response object
+      responseContent = { results: responseContent };

       if (cacheTimeoutSeconds && responseContent) {
         saveToCache(cacheKey, responseContent, cacheTimeoutSeconds);
@@ -163,4 +173,6 @@
   }
 }

+Cohere.prototype.adjustModelAlias = adjustModelAlias;
+
 module.exports = Cohere;
diff --git a/src/interfaces/fireworksai.js b/src/interfaces/fireworksai.js
new file mode 100644
index 0000000..362561c
--- /dev/null
+++ b/src/interfaces/fireworksai.js
@@ -0,0 +1,167 @@
+/**
+ * @file src/interfaces/fireworksai.js
+ * @class FireworksAI
+ * @description Wrapper class for the FireworksAI API.
+ * @param {string} apiKey - The API key for the FireworksAI API.
+ */
+
+const axios = require('axios');
+const { adjustModelAlias } = require('../utils/adjustModelAlias.js');
+const { getFromCache, saveToCache } = require('../utils/cache.js');
+const {
+  returnSimpleMessageObject,
+  returnModelByAlias,
+  parseJSON,
+} = require('../utils/utils.js');
+const { fireworksaiApiKey } = require('../config/config.js');
+const config = require('../config/llmProviders.json');
+const log = require('loglevel');
+
+// FireworksAI class for interacting with the Fireworks AI API
+class FireworksAI {
+  /**
+   * Constructor for the FireworksAI class.
+   * @param {string} apiKey - The API key for the Fireworks AI API.
+   */
+  constructor(apiKey) {
+    this.interfaceName = 'fireworksai';
+    this.apiKey = apiKey || fireworksaiApiKey;
+    this.client = axios.create({
+      baseURL: config[this.interfaceName].url,
+      headers: {
+        'Content-Type': 'application/json',
+        Authorization: `Bearer ${this.apiKey}`,
+      },
+    });
+  }
+
+  /**
+   * Send a message to the Fireworks AI API.
+   * @param {string|object} message - The message to send or a message object.
+   * @param {object} options - Additional options for the API request.
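+   * @param {string} [options.response_format] - Optional; set to 'json_object'
+   *   to request native JSON output (forwarded as { type: response_format }).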
+   * @param {object} interfaceOptions - Options specific to the interface.
+   * @returns {object} The response content from the Fireworks AI API.
+   */
+  async sendMessage(message, options = {}, interfaceOptions = {}) {
+    // Convert a string message to a simple message object
+    const messageObject =
+      typeof message === 'string'
+        ? returnSimpleMessageObject(message)
+        : message;
+
+    // Get the cache timeout value from interfaceOptions
+    const cacheTimeoutSeconds =
+      typeof interfaceOptions === 'number'
+        ? interfaceOptions
+        : interfaceOptions.cacheTimeoutSeconds;
+
+    // Extract model and messages from the message object
+    const { model, messages } = messageObject;
+
+    // Get the selected model based on alias or default
+    const selectedModel = returnModelByAlias(this.interfaceName, model);
+
+    // Set default values for max_tokens and stop_sequences
+    const {
+      max_tokens = 150,
+      stop_sequences = ['<|endoftext|>'],
+      response_format = '',
+    } = options;
+
+    // Prepare the request body for the API call
+    const requestBody = {
+      model:
+        selectedModel ||
+        options.model ||
+        config[this.interfaceName].model.default.name,
+      messages,
+      max_tokens,
+      ...options,
+    };
+
+    // Add response_format if specified
+    if (response_format) {
+      requestBody.response_format = { type: response_format };
+    }
+
+    // Generate a cache key based on the request body
+    const cacheKey = JSON.stringify(requestBody);
+
+    // Check if a cached response exists for the request
+    if (cacheTimeoutSeconds) {
+      const cachedResponse = getFromCache(cacheKey);
+      if (cachedResponse) {
+        return cachedResponse;
+      }
+    }
+
+    // Set up retry mechanism with increasing delays
+    let retryAttempts = interfaceOptions.retryAttempts || 0;
+    let currentRetry = 0;
+
+    while (retryAttempts >= 0) {
+      try {
+        // Send the request to the Fireworks AI API
+        const response = await this.client.post('', requestBody);
+
+        // Extract the response content from the API response
+        let responseContent = null;
+        if (
+          response &&
+          response.data &&
+          response.data.choices &&
+          response.data.choices[0] &&
+          response.data.choices[0].message
+        ) {
+          responseContent = response.data.choices[0].message.content;
+        }
+
+        // Attempt to repair the object if needed
+        if (
+          response_format === 'json_object' &&
+          interfaceOptions.attemptJsonRepair
+        ) {
+          responseContent = await parseJSON(
+            responseContent,
+            interfaceOptions.attemptJsonRepair,
+          );
+        }
+
+        // Build response object
+        responseContent = { results: responseContent };
+
+        // Cache the response content if cache timeout is set
+        if (cacheTimeoutSeconds && responseContent) {
+          saveToCache(cacheKey, responseContent, cacheTimeoutSeconds);
+        }
+
+        // Return the response content
+        return responseContent;
+      } catch (error) {
+        // Decrease the number of retry attempts
+        retryAttempts--;
+        if (retryAttempts < 0) {
+          // Log any errors and throw the error
+          log.error(
+            'Response data:',
+            error.response ? error.response.data : null,
+          );
+          throw error;
+        }
+
+        // Calculate the delay for the next retry attempt
+        let retryMultiplier = interfaceOptions.retryMultiplier || 0.3;
+        const delay = (currentRetry + 1) * retryMultiplier * 1000;
+
+        // Wait for the specified delay before retrying
+        await new Promise((resolve) => setTimeout(resolve, delay));
+        currentRetry++;
+      }
+    }
+  }
+}
+
+// Adjust model alias for backwards compatibility
+FireworksAI.prototype.adjustModelAlias = adjustModelAlias;
+
+module.exports = FireworksAI;
diff --git a/src/interfaces/friendliai.js b/src/interfaces/friendliai.js
new file mode 100644
index 0000000..6f70c1e
--- /dev/null
+++ b/src/interfaces/friendliai.js
@@ -0,0 +1,168 @@
+/**
+ * @file src/interfaces/friendliai.js
+ * @class FriendliAI
+ * @description Wrapper class for the Friendli API.
+ * @param {string} apiKey - The API key for the Friendli API.
+ */
+
+const axios = require('axios');
+const { adjustModelAlias } = require('../utils/adjustModelAlias.js');
+const { getFromCache, saveToCache } = require('../utils/cache.js');
+const {
+  returnSimpleMessageObject,
+  returnModelByAlias,
+  parseJSON,
+} = require('../utils/utils.js');
+const { friendliApiKey } = require('../config/config.js');
+const config = require('../config/llmProviders.json');
+const log = require('loglevel');
+
+// FriendliAI class for interacting with the Friendli AI API
+class FriendliAI {
+  /**
+   * Constructor for the FriendliAI class.
+   * @param {string} apiKey - The API key for the Friendli AI API.
+   */
+  constructor(apiKey) {
+    this.interfaceName = 'friendli';
+    this.apiKey = apiKey || friendliApiKey;
+    this.client = axios.create({
+      baseURL: 'https://inference.friendli.ai/v1/chat/completions', // Friendli AI API base URL
+      headers: {
+        'Content-Type': 'application/json',
+        Authorization: `Bearer ${this.apiKey}`,
+      },
+    });
+  }
+
+  /**
+   * Send a message to the Friendli AI API.
+   * @param {string|object} message - The message to send or a message object.
+   * @param {object} options - Additional options for the API request.
+   * @param {object} interfaceOptions - Options specific to the interface.
+   * @returns {object} The response content from the Friendli AI API.
+   */
+  async sendMessage(message, options = {}, interfaceOptions = {}) {
+    // Convert a string message to a simple message object
+    const messageObject =
+      typeof message === 'string'
+        ? returnSimpleMessageObject(message)
+        : message;
+
+    // Get the cache timeout value from interfaceOptions
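+    // (a bare number for interfaceOptions is shorthand for
+    // { cacheTimeoutSeconds }: sendMessage(msg, {}, 300) caches for 5 minutes)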
+    const cacheTimeoutSeconds =
+      typeof interfaceOptions === 'number'
+        ? interfaceOptions
+        : interfaceOptions.cacheTimeoutSeconds;
+
+    // Extract model and messages from the message object
+    const { model, messages } = messageObject;
+
+    // Get the selected model based on alias or default
+    const selectedModel = returnModelByAlias(this.interfaceName, model);
+
+    // Set default values for temperature, max_tokens, and stop_sequences
+    const {
+      temperature = 0.7,
+      max_tokens = 150,
+      stop_sequences = ['<|endoftext|>'],
+      response_format = '',
+    } = options;
+
+    // Prepare the request body for the API call
+    const requestBody = {
+      model:
+        selectedModel ||
+        options.model ||
+        config[this.interfaceName].model.default.name,
+      messages,
+      max_tokens,
+      ...options,
+    };
+
+    // Add response_format if specified
+    if (response_format) {
+      requestBody.response_format = { type: response_format };
+    }
+
+    // Generate a cache key based on the request body
+    const cacheKey = JSON.stringify(requestBody);
+
+    // Check if a cached response exists for the request
+    if (cacheTimeoutSeconds) {
+      const cachedResponse = getFromCache(cacheKey);
+      if (cachedResponse) {
+        return cachedResponse;
+      }
+    }
+
+    // Set up retry mechanism with increasing delays
+    let retryAttempts = interfaceOptions.retryAttempts || 0;
+    let currentRetry = 0;
+
+    while (retryAttempts >= 0) {
+      try {
+        // Send the request to the Friendli AI API
+        const response = await this.client.post('', requestBody);
+
+        // Extract the response content from the API response
+        // (assumes Friendli serves an OpenAI-compatible chat completions payload)
+        let responseContent = null;
+        if (
+          response &&
+          response.data &&
+          response.data.choices &&
+          response.data.choices[0] &&
+          response.data.choices[0].message
+        ) {
+          responseContent = response.data.choices[0].message.content;
+        }
+
+        // Attempt to repair the object if needed
+        if (
+          response_format === 'json_object' &&
+          interfaceOptions.attemptJsonRepair
+        ) {
+          responseContent = await parseJSON(
+            responseContent,
+            interfaceOptions.attemptJsonRepair,
+          );
+        }
+
+        // Build response object
+        responseContent = { results: responseContent };
+
+        // Cache the response content if cache timeout is set
+        if (cacheTimeoutSeconds && responseContent) {
+          saveToCache(cacheKey, responseContent, cacheTimeoutSeconds);
+        }
+
+        // Return the response content
+        return responseContent;
+      } catch (error) {
+        // Decrease the number of retry attempts
+        retryAttempts--;
+        if (retryAttempts < 0) {
+          // Log any errors and throw the error
+          log.error(
+            'Response data:',
+            error.response ? error.response.data : null,
+          );
+          throw error;
+        }
+
+        // Calculate the delay for the next retry attempt
+        let retryMultiplier = interfaceOptions.retryMultiplier || 0.3;
+        const delay = (currentRetry + 1) * retryMultiplier * 1000;
+
+        // Wait for the specified delay before retrying
+        await new Promise((resolve) => setTimeout(resolve, delay));
+        currentRetry++;
+      }
+    }
+  }
+}
+
+// Adjust model alias for backwards compatibility
+FriendliAI.prototype.adjustModelAlias = adjustModelAlias;
+
+module.exports = FriendliAI;
diff --git a/src/interfaces/gemini.js b/src/interfaces/gemini.js
index 7638022..e9578d7 100644
--- a/src/interfaces/gemini.js
+++ b/src/interfaces/gemini.js
@@ -6,9 +6,14 @@
  */

 const { GoogleGenerativeAI } = require('@google/generative-ai');
-const { getFromCache, saveToCache } = require('../utils/cache');
-const { returnMessageObject, returnModelByAlias } = require('../utils/utils');
-const { geminiApiKey } = require('../config/config');
+const { adjustModelAlias } = require('../utils/adjustModelAlias.js');
+const { getFromCache, saveToCache } = require('../utils/cache.js');
+const {
+  returnMessageObject,
+  returnModelByAlias,
+  parseJSON,
+} = require('../utils/utils.js');
+const { geminiApiKey } = require('../config/config.js');
 const config = require('../config/llmProviders.json');
 const log = require('loglevel');

@@ -121,19 +126,17 @@ class Gemini {
       let text = await response.text();

       if (response_format === 'json_object') {
-        try {
-          // Parse the response as JSON if requested
-          text = JSON.parse(text);
-        } catch (e) {
-          text = null;
-        }
+        text = await parseJSON(text, interfaceOptions.attemptJsonRepair);
       }

-      if (cacheTimeoutSeconds && text) {
-        saveToCache(cacheKey, text, cacheTimeoutSeconds);
+      // Build response object
+      const responseContent = { results: text };
+
+      if (cacheTimeoutSeconds && responseContent) {
+        saveToCache(cacheKey, responseContent, cacheTimeoutSeconds);
       }

-      return text;
+      return responseContent;
     } catch (error) {
       retryAttempts--;
       if (retryAttempts < 0) {
@@ -155,4 +158,6 @@
   }
 }

+Gemini.prototype.adjustModelAlias = adjustModelAlias;
+
 module.exports = Gemini;
diff --git a/src/interfaces/goose.js b/src/interfaces/gooseai.js
similarity index 65%
rename from src/interfaces/goose.js
rename to src/interfaces/gooseai.js
index 4458b04..d15dc9e 100644
--- a/src/interfaces/goose.js
+++ b/src/interfaces/gooseai.js
@@ -1,26 +1,30 @@
 /**
- * @file src/interfaces/goose.js
- * @class Goose
- * @description Wrapper class for the Goose API.
- * @param {string} apiKey - The API key for the Goose API.
+ * @file src/interfaces/gooseai.js
+ * @class GooseAI
+ * @description Wrapper class for the GooseAI API.
+ * @param {string} apiKey - The API key for the GooseAI API.
  */

 const axios = require('axios');
-const { getFromCache, saveToCache } = require('../utils/cache');
-const { returnMessageObject, returnModelByAlias } = require('../utils/utils');
-const { gooseApiKey } = require('../config/config');
+const { adjustModelAlias } = require('../utils/adjustModelAlias.js');
+const { getFromCache, saveToCache } = require('../utils/cache.js');
+const {
+  returnMessageObject,
+  returnModelByAlias,
+  parseJSON,
+} = require('../utils/utils.js');
+const { gooseaiApiKey } = require('../config/config.js');
 const config = require('../config/llmProviders.json');
 const log = require('loglevel');

-// Goose class for interacting with the Goose API
-class Goose {
+// GooseAI class for interacting with the GooseAI API
+class GooseAI {
   /**
-   * Constructor for the Goose class.
-   * @param {string} apiKey - The API key for the Goose API.
+   * Constructor for the GooseAI class.
+   * @param {string} apiKey - The API key for the GooseAI API.
    */
   constructor(apiKey) {
-    this.interfaceName = 'goose';
-    this.apiKey = apiKey || gooseApiKey;
+    this.interfaceName = 'gooseai';
+    this.apiKey = apiKey || gooseaiApiKey;
     this.client = axios.create({
       baseURL: config[this.interfaceName].url,
       headers: {
@@ -31,11 +35,11 @@ class Goose {
   }

   /**
-   * Send a message to the Goose API.
+   * Send a message to the GooseAI API.
    * @param {string|object} message - The message to send or a message object.
    * @param {object} options - Additional options for the API request.
    * @param {object} interfaceOptions - Options specific to the interface.
-   * @returns {string} The response content from the Goose API.
+   * @returns {object} The response content from the GooseAI API.
    */
   async sendMessage(message, options = {}, interfaceOptions = {}) {
     const messageObject =
@@ -83,10 +87,10 @@ class Goose {
     let currentRetry = 0;
     while (retryAttempts >= 0) {
       try {
-        // Send the request to the Goose API
+        // Send the request to the GooseAI API
         const url = `/${model}/completions`;
         const response = await this.client.post(url, payload);
-        let responseText = null;
+        let responseContent = null;
         if (
           response &&
           response.data &&
@@ -94,14 +98,23 @@
           response.data.choices[0] &&
           response.data.choices[0].text
         ) {
-          responseText = response.data.choices[0].text.trim();
+          responseContent = response.data.choices[0].text.trim();
         }
+        // Attempt to repair the object if needed
+        if (interfaceOptions.attemptJsonRepair) {
+          responseContent = await parseJSON(
+            responseContent,
+            interfaceOptions.attemptJsonRepair,
+          );
+        }
+        // Build response object
+        responseContent = { results: responseContent };

-        if (cacheTimeoutSeconds && responseText) {
-          saveToCache(cacheKey, responseText, cacheTimeoutSeconds);
+        if (cacheTimeoutSeconds && responseContent) {
+          saveToCache(cacheKey, responseContent, cacheTimeoutSeconds);
         }
-        return responseText;
+        return responseContent;
       } catch (error) {
         retryAttempts--;
         if (retryAttempts < 0) {
@@ -123,4 +136,6 @@
   }
 }

-module.exports = Goose;
+GooseAI.prototype.adjustModelAlias = adjustModelAlias;
+
+module.exports = GooseAI;
diff --git a/src/interfaces/groq.js b/src/interfaces/groq.js
index 90f1691..84bc896 100644
--- a/src/interfaces/groq.js
+++ b/src/interfaces/groq.js
@@ -6,9 +6,13 @@
  */

 const GroqSDK = require('groq-sdk');
-const { getFromCache, saveToCache } = require('../utils/cache');
-const { returnMessageObject, returnModelByAlias } = require('../utils/utils');
-const { groqApiKey } = require('../config/config');
+const { adjustModelAlias } = require('../utils/adjustModelAlias.js');
+const { getFromCache, saveToCache } = require('../utils/cache.js');
+const {
+  returnMessageObject,
+  returnModelByAlias,
+  parseJSON,
+} = require('../utils/utils.js');
+const { groqApiKey } = require('../config/config.js');
 const config = require('../config/llmProviders.json');
 const log = require('loglevel');

@@ -84,6 +88,15 @@ class Groq {
       ) {
         responseContent = chatCompletion.choices[0].message.content;
       }
+      // Attempt to repair the object if needed
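+      // (parseJSON from src/utils/utils.js attempts strict JSON parsing and,
+      // when attemptJsonRepair is set, falls back to the jsonrepair library)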
+      if (interfaceOptions.attemptJsonRepair) {
+        responseContent = await parseJSON(
+          responseContent,
+          interfaceOptions.attemptJsonRepair,
+        );
+      }
+      // Build response object
+      responseContent = { results: responseContent };

       if (cacheTimeoutSeconds && responseContent) {
         saveToCache(cacheKey, responseContent, cacheTimeoutSeconds);
@@ -112,4 +125,5 @@ class Groq {
   }
 }
+Groq.prototype.adjustModelAlias = adjustModelAlias;
 module.exports = Groq;
diff --git a/src/interfaces/huggingface.js b/src/interfaces/huggingface.js
index d98b292..926b7af 100644
--- a/src/interfaces/huggingface.js
+++ b/src/interfaces/huggingface.js
@@ -6,12 +6,13 @@
  */

 const axios = require('axios');
-const { getFromCache, saveToCache } = require('../utils/cache');
+const { adjustModelAlias } = require('../utils/adjustModelAlias.js');
+const { getFromCache, saveToCache } = require('../utils/cache.js');
 const {
   returnSimpleMessageObject,
   returnModelByAlias,
-} = require('../utils/utils');
-const { huggingfaceApiKey } = require('../config/config');
+  parseJSON,
+} = require('../utils/utils.js');
+const { huggingfaceApiKey } = require('../config/config.js');
 const config = require('../config/llmProviders.json');
 const log = require('loglevel');

@@ -100,6 +101,15 @@ class HuggingFace {
       ) {
         responseContent = response.data[0].generated_text;
       }
+      // Attempt to repair the object if needed
+      if (interfaceOptions.attemptJsonRepair) {
+        responseContent = await parseJSON(
+          responseContent,
+          interfaceOptions.attemptJsonRepair,
+        );
+      }
+      // Build response object
+      responseContent = { results: responseContent };

       if (cacheTimeoutSeconds && responseContent) {
         saveToCache(cacheKey, responseContent, cacheTimeoutSeconds);
@@ -125,5 +135,5 @@
     }
   }
 }
-
+HuggingFace.prototype.adjustModelAlias = adjustModelAlias;
 module.exports = HuggingFace;
diff --git a/src/interfaces/llamacpp.js b/src/interfaces/llamacpp.js
index 3230a08..c3e3c28 100644
--- a/src/interfaces/llamacpp.js
+++ b/src/interfaces/llamacpp.js
@@ -5,12 +5,12 @@
  * @param {string} llamacppURL - The base URL for the LlamaCPP API.
  */

+const axios = require('axios');
+const { adjustModelAlias } = require('../utils/adjustModelAlias.js');
+const { getFromCache, saveToCache } = require('../utils/cache.js');
+const { parseJSON } = require('../utils/utils.js');
 const config = require('../config/llmProviders.json');
 const log = require('loglevel');

-const axios = require('axios');
-const { getFromCache, saveToCache } = require('../utils/cache');
-
 // LlamaCPP class for interacting with the LlamaCPP API
 class LlamaCPP {
   /**
@@ -81,23 +81,32 @@ class LlamaCPP {
         // Send the request to the LlamaCPP API
         const response = await this.client.post('', payload);
         // Extract the response content from the API response
-        let contents = '';
+        let responseContent = '';
         if (response.data.content) {
-          contents = response.data.content;
+          responseContent = response.data.content;
         } else if (response.data.results) {
           // Join the results content if available
-          contents = response.data.results
+          responseContent = response.data.results
             .map((result) => result.content)
             .join();
         }
+        // Attempt to repair the object if needed
+        if (interfaceOptions.attemptJsonRepair) {
+          responseContent = await parseJSON(
+            responseContent,
+            interfaceOptions.attemptJsonRepair,
+          );
+        }
+        // Build response object
+        responseContent = { results: responseContent };

         // Cache the response content if cache timeout is set
-        if (cacheTimeoutSeconds && contents) {
-          saveToCache(cacheKey, contents, cacheTimeoutSeconds);
+        if (cacheTimeoutSeconds && responseContent) {
+          saveToCache(cacheKey, responseContent, cacheTimeoutSeconds);
         }

         // Return the response content
-        return contents;
+        return responseContent;
       } catch (error) {
         // Decrease the number of retry attempts
         retryAttempts--;
@@ -121,5 +130,5 @@
     }
   }
 }
-
+LlamaCPP.prototype.adjustModelAlias = adjustModelAlias;
 module.exports = LlamaCPP;
diff --git a/src/interfaces/mistral.js b/src/interfaces/mistralai.js
b/src/interfaces/mistralai.js similarity index 70% rename from src/interfaces/mistral.js rename to src/interfaces/mistralai.js index f4e1687..9e9632b 100644 --- a/src/interfaces/mistral.js +++ b/src/interfaces/mistralai.js @@ -1,26 +1,31 @@ /** - * @file src/interfaces/mistral.js - * @class Mistral - * @description Wrapper class for the Mistral API. - * @param {string} apiKey - The API key for the Mistral API. + * @file src/interfaces/mistralai.js + * @class MistralAI + * @description Wrapper class for the MistralAI API. + * @param {string} apiKey - The API key for the MistralAI API. */ const axios = require('axios'); -const { getFromCache, saveToCache } = require('../utils/cache'); -const { returnMessageObject, returnModelByAlias } = require('../utils/utils'); -const { mistralApiKey } = require('../config/config'); +const { adjustModelAlias } = require('../utils/adjustModelAlias.js'); +const { getFromCache, saveToCache } = require('../utils/cache.js'); +const { + returnMessageObject, + returnModelByAlias, + parseJSON, +} = require('../utils/utils.js'); +const { mistralaiApiKey } = require('../config/config.js'); const config = require('../config/llmProviders.json'); const log = require('loglevel'); -// Mistral class for interacting with the Mistral API -class Mistral { +// MistralAI class for interacting with the MistralAI API +class MistralAI { /** - * Constructor for the Mistral class. - * @param {string} apiKey - The API key for the Mistral API. + * Constructor for the MistralAI class. + * @param {string} apiKey - The API key for the MistralAI API. */ constructor(apiKey) { - this.interfaceName = 'mistral'; - this.apiKey = apiKey || mistralApiKey; + this.interfaceName = 'mistralai'; + this.apiKey = apiKey || mistralaiApiKey; this.client = axios.create({ baseURL: config[this.interfaceName].url, headers: { @@ -31,11 +36,11 @@ class Mistral { } /** - * Send a message to the Mistral API. + * Send a message to the MistralAI API. * @param {string|object} message - The message to send or a message object. * @param {object} options - Additional options for the API request. * @param {object} interfaceOptions - Options specific to the interface. - * @returns {string} The response content from the Mistral API. + * @returns {string} The response content from the MistralAI API.
*/ async sendMessage(message, options = {}, interfaceOptions = {}) { const messageObject = @@ -80,8 +85,9 @@ class Mistral { let currentRetry = 0; while (retryAttempts >= 0) { try { - // Send the request to the Mistral API + // Send the request to the MistralAI API const response = await this.client.post('', payload); + let responseContent = null; if ( response && @@ -93,6 +99,15 @@ class Mistral { ) { responseContent = response.data.choices[0].message.content; } + // Attempt to repair the object if needed + if (interfaceOptions.attemptJsonRepair) { + responseContent = await parseJSON( + responseContent, + interfaceOptions.attemptJsonRepair, + ); + } + // Build response object + responseContent = { results: responseContent }; if (cacheTimeoutSeconds && responseContent) { saveToCache(cacheKey, responseContent, cacheTimeoutSeconds); @@ -120,5 +135,5 @@ class Mistral { } } } - -module.exports = Mistral; +MistralAI.prototype.adjustModelAlias = adjustModelAlias; +module.exports = MistralAI; diff --git a/src/interfaces/openai.js b/src/interfaces/openai.js index b80a9a7..6bd740d 100644 --- a/src/interfaces/openai.js +++ b/src/interfaces/openai.js @@ -6,9 +6,14 @@ */ const { OpenAI: OpenAIClient } = require('openai'); -const { getFromCache, saveToCache } = require('../utils/cache'); -const { returnMessageObject, returnModelByAlias } = require('../utils/utils'); -const { openaiApiKey } = require('../config/config'); +const { adjustModelAlias } = require('../utils/adjustModelAlias.js'); +const { getFromCache, saveToCache } = require('../utils/cache.js'); +const { + returnMessageObject, + returnModelByAlias, + parseJSON, +} = require('../utils/utils.js'); +const { openaiApiKey } = require('../config/config.js'); const config = require('../config/llmProviders.json'); const log = require('loglevel'); @@ -96,14 +101,15 @@ class OpenAI { } if (response_format === 'json_object') { - try { - // Parse the response as JSON if requested - responseContent = JSON.parse(responseContent); - } catch (e) { - responseContent = null; - } + responseContent = await parseJSON( + responseContent, + interfaceOptions.attemptJsonRepair, + ); } + // Build response object + responseContent = { results: responseContent }; + if (cacheTimeoutSeconds && responseContent) { saveToCache(cacheKey, responseContent, cacheTimeoutSeconds); @@ -130,5 +136,6 @@ class OpenAI { } } } +OpenAI.prototype.adjustModelAlias = adjustModelAlias; module.exports = OpenAI; diff --git a/src/interfaces/perplexity.js b/src/interfaces/perplexity.js index 66eefa4..a966a42 100644 --- a/src/interfaces/perplexity.js +++ b/src/interfaces/perplexity.js @@ -6,9 +6,14 @@ */ const axios = require('axios'); -const { getFromCache, saveToCache } = require('../utils/cache'); -const { returnMessageObject, returnModelByAlias } = require('../utils/utils'); -const { perplexityApiKey } = require('../config/config'); +const { adjustModelAlias } = require('../utils/adjustModelAlias.js'); +const { getFromCache, saveToCache } = require('../utils/cache.js'); +const { + returnMessageObject, + returnModelByAlias, + parseJSON, +} = require('../utils/utils.js'); +const { perplexityApiKey } = require('../config/config.js'); const config = require('../config/llmProviders.json'); const log = require('loglevel'); @@ -94,6 +99,15 @@ class Perplexity { ) { responseContent = response.data.choices[0].message.content; } + // Attempt to repair the object if needed + if (interfaceOptions.attemptJsonRepair) { + responseContent = await parseJSON( + responseContent, + interfaceOptions.attemptJsonRepair, +
); + } + // Build response object + responseContent = { results: responseContent }; if (cacheTimeoutSeconds && responseContent) { saveToCache(cacheKey, responseContent, cacheTimeoutSeconds); @@ -121,5 +135,5 @@ class Perplexity { } } } - +Perplexity.prototype.adjustModelAlias = adjustModelAlias; module.exports = Perplexity; diff --git a/src/interfaces/reka.js b/src/interfaces/rekaai.js similarity index 80% rename from src/interfaces/reka.js rename to src/interfaces/rekaai.js index b0dabb2..51e554f 100644 --- a/src/interfaces/reka.js +++ b/src/interfaces/rekaai.js @@ -1,29 +1,31 @@ /** - * @file src/interfaces/reka.js - * @class Reka + * @file src/interfaces/rekaai.js + * @class RekaAI * @description Wrapper class for the Reka AI API. * @param {string} apiKey - The API key for Reka AI. */ const axios = require('axios'); -const { getFromCache, saveToCache } = require('../utils/cache'); +const { adjustModelAlias } = require('../utils/adjustModelAlias.js'); +const { getFromCache, saveToCache } = require('../utils/cache.js'); const { returnSimpleMessageObject, returnModelByAlias, -} = require('../utils/utils'); -const { rekaApiKey } = require('../config/config'); + parseJSON, +} = require('../utils/utils.js'); +const { rekaaiApiKey } = require('../config/config.js'); const config = require('../config/llmProviders.json'); const log = require('loglevel'); -// Reka class for interacting with the Reka AI API -class Reka { +// RekaAI class for interacting with the Reka AI API +class RekaAI { /** - * Constructor for the Reka class. + * Constructor for the RekaAI class. * @param {string} apiKey - The API key for Reka AI. */ constructor(apiKey) { - this.interfaceName = 'reka'; - this.apiKey = apiKey || rekaApiKey; + this.interfaceName = 'rekaai'; + this.apiKey = apiKey || rekaaiApiKey; this.client = axios.create({ baseURL: config[this.interfaceName].url, headers: { @@ -102,6 +104,15 @@ if (response.data?.responses?.[0]?.message?.content) { responseContent = response.data.responses[0].message.content; } + // Attempt to repair the object if needed + if (interfaceOptions.attemptJsonRepair) { + responseContent = await parseJSON( + responseContent, + interfaceOptions.attemptJsonRepair, + ); + } + // Build response object + responseContent = { results: responseContent }; if (cacheTimeoutSeconds && responseContent) { saveToCache(cacheKey, responseContent, cacheTimeoutSeconds); @@ -129,5 +140,5 @@ class Reka { } } } - -module.exports = Reka; +RekaAI.prototype.adjustModelAlias = adjustModelAlias; +module.exports = RekaAI; diff --git a/src/interfaces/taskingai.js b/src/interfaces/taskingai.js new file mode 100644 index 0000000..67b68c7 --- /dev/null +++ b/src/interfaces/taskingai.js @@ -0,0 +1,167 @@ +/** + * @file src/interfaces/taskingai.js + * @class TaskingAI + * @description Wrapper class for the Tasking AI API. + * @param {string} apiKey - The API key for Tasking AI. + */ + +const axios = require('axios'); +const { adjustModelAlias } = require('../utils/adjustModelAlias.js'); +const { getFromCache, saveToCache } = require('../utils/cache.js'); +const { + returnSimpleMessageObject, + returnModelByAlias, + parseJSON, +} = require('../utils/utils.js'); +const { taskingAIApiKey } = require('../config/config.js'); +const config = require('../config/llmProviders.json'); +const log = require('loglevel'); + +// TaskingAI class for interacting with the Tasking AI API +class TaskingAI { + /** + * Constructor for the TaskingAI class. + * @param {string} apiKey - The API key for the Tasking AI API.
+ */ + constructor(apiKey) { + this.interfaceName = 'taskingai'; + this.apiKey = apiKey || taskingAIApiKey; + this.client = axios.create({ + baseURL: 'https://api.tasking.ai/v1/inference/chat_completion', // Tasking AI API base URL + headers: { + 'Content-Type': 'application/json', + Authorization: `Bearer ${this.apiKey}`, + }, + }); + } + + /** + * Send a message to the Tasking AI API. + * @param {string|object} message - The message to send or a message object. + * @param {object} options - Additional options for the API request. + * @param {object} interfaceOptions - Options specific to the interface. + * @returns {string} The response content from the Tasking AI API. + */ + async sendMessage(message, options = {}, interfaceOptions = {}) { + // Convert a string message to a simple message object + const messageObject = + typeof message === 'string' + ? returnSimpleMessageObject(message) + : message; + + // Get the cache timeout value from interfaceOptions + const cacheTimeoutSeconds = + typeof interfaceOptions === 'number' + ? interfaceOptions + : interfaceOptions.cacheTimeoutSeconds; + + // Extract model and messages from the message object + const { model, messages } = messageObject; + + // Get the selected model based on alias or default + const selectedModel = returnModelByAlias(this.interfaceName, model); + + // Set default values for max_tokens and stop_sequences + const { + max_tokens = 150, + stop_sequences = ['<|endoftext|>'], + response_format = '', + } = options; + + // Prepare the request body for the API call + const requestBody = { + model: + selectedModel || + options.model || + config[this.interfaceName].model.default.name, + messages, + max_tokens, + ...options, + }; + + // Add response_format if specified + if (response_format) { + requestBody.response_format = { type: response_format }; + } + + // Generate a cache key based on the request body + const cacheKey = JSON.stringify(requestBody); + + // Check if a cached response exists for the request + if (cacheTimeoutSeconds) { + const cachedResponse = getFromCache(cacheKey); + if (cachedResponse) { + return cachedResponse; + } + } + + // Set up retry mechanism with exponential backoff + let retryAttempts = interfaceOptions.retryAttempts || 0; + let currentRetry = 0; + + while (retryAttempts >= 0) { + try { + // Send the request to the Tasking AI API + const response = await this.client.post('', requestBody); + + // Extract the response content from the API response + let responseContent = null; + if ( + response && + response.data && + response.data.results && + response.data.results[0] && + response.data.results[0].generatedText + ) { + responseContent = response.data.results[0].generatedText; + } + + // Attempt to repair the object if needed + if ( + response_format === 'json_object' && + interfaceOptions.attemptJsonRepair + ) { + responseContent = await parseJSON( + responseContent, + interfaceOptions.attemptJsonRepair, + ); + } + + // Build response object + responseContent = { results: responseContent }; + + // Cache the response content if cache timeout is set + if (cacheTimeoutSeconds && responseContent) { + saveToCache(cacheKey, responseContent, cacheTimeoutSeconds); + } + + // Return the response content + return responseContent; + } catch (error) { + // Decrease the number of retry attempts + retryAttempts--; + if (retryAttempts < 0) { + // Log any errors and throw the error + log.error( + 'Response data:', + error.response ? 
error.response.data : null, + ); + throw error; + } + + // Calculate the delay for the next retry attempt + let retryMultiplier = interfaceOptions.retryMultiplier || 0.3; + const delay = (currentRetry + 1) * retryMultiplier * 1000; + + // Wait for the specified delay before retrying + await new Promise((resolve) => setTimeout(resolve, delay)); + currentRetry++; + } + } + } +} + +// Adjust model alias for backwards compatibility +TaskingAI.prototype.adjustModelAlias = adjustModelAlias; + +module.exports = TaskingAI; diff --git a/src/interfaces/telnyx.js b/src/interfaces/telnyx.js new file mode 100644 index 0000000..5138615 --- /dev/null +++ b/src/interfaces/telnyx.js @@ -0,0 +1,167 @@ +/** + * @file src/interfaces/telnyx.js + * @class Telnyx + * @description Wrapper class for the Telnyx API. + * @param {string} apiKey - The API key for Telnyx AI. + */ + +const axios = require('axios'); +const { adjustModelAlias } = require('../utils/adjustModelAlias.js'); +const { getFromCache, saveToCache } = require('../utils/cache.js'); +const { + returnSimpleMessageObject, + returnModelByAlias, + parseJSON, +} = require('../utils/utils.js'); +const { telnyxApiKey } = require('../config/config.js'); +const config = require('../config/llmProviders.json'); +const log = require('loglevel'); + +// Telnyx class for interacting with the Telnyx AI API +class Telnyx { + /** + * Constructor for the Telnyx class. + * @param {string} apiKey - The API key for the Telnyx AI API. + */ + constructor(apiKey) { + this.interfaceName = 'telnyx'; + this.apiKey = apiKey || telnyxApiKey; + this.client = axios.create({ + baseURL: 'https://api.telnyx.com/v2/ai/chat/completions', // Telnyx AI API base URL + headers: { + 'Content-Type': 'application/json', + Authorization: `Bearer ${this.apiKey}`, + }, + }); + } + + /** + * Send a message to the Telnyx AI API. + * @param {string|object} message - The message to send or a message object. + * @param {object} options - Additional options for the API request. + * @param {object} interfaceOptions - Options specific to the interface. + * @returns {string} The response content from the Telnyx AI API. + */ + async sendMessage(message, options = {}, interfaceOptions = {}) { + // Convert a string message to a simple message object + const messageObject = + typeof message === 'string' + ? returnSimpleMessageObject(message) + : message; + + // Get the cache timeout value from interfaceOptions + const cacheTimeoutSeconds = + typeof interfaceOptions === 'number' + ? 
interfaceOptions + : interfaceOptions.cacheTimeoutSeconds; + + // Extract model and messages from the message object + const { model, messages } = messageObject; + + // Get the selected model based on alias or default + const selectedModel = returnModelByAlias(this.interfaceName, model); + + // Set default values for max_tokens and stop + const { + max_tokens = 150, + stop = '<|endoftext|>', + response_format = '', + } = options; + + // Prepare the request body for the API call + const requestBody = { + model: + selectedModel || + options.model || + config[this.interfaceName].model.default.name, + messages, + max_tokens, + ...options, + }; + + // Add response_format if specified + if (response_format) { + requestBody.response_format = { type: response_format }; + } + + // Generate a cache key based on the request body + const cacheKey = JSON.stringify(requestBody); + + // Check if a cached response exists for the request + if (cacheTimeoutSeconds) { + const cachedResponse = getFromCache(cacheKey); + if (cachedResponse) { + return cachedResponse; + } + } + + // Set up retry mechanism with exponential backoff + let retryAttempts = interfaceOptions.retryAttempts || 0; + let currentRetry = 0; + + while (retryAttempts >= 0) { + try { + // Send the request to the Telnyx AI API + const response = await this.client.post('', requestBody); + + // Extract the response content from the API response + let responseContent = null; + if ( + response && + response.data && + response.data.results && + response.data.results[0] && + response.data.results[0].generated_text + ) { + responseContent = response.data.results[0].generated_text; + } + + // Attempt to repair the object if needed + if ( + response_format === 'json_object' && + interfaceOptions.attemptJsonRepair + ) { + responseContent = await parseJSON( + responseContent, + interfaceOptions.attemptJsonRepair, + ); + } + + // Build response object + responseContent = { results: responseContent }; + + // Cache the response content if cache timeout is set + if (cacheTimeoutSeconds && responseContent) { + saveToCache(cacheKey, responseContent, cacheTimeoutSeconds); + } + + // Return the response content + return responseContent; + } catch (error) { + // Decrease the number of retry attempts + retryAttempts--; + if (retryAttempts < 0) { + // Log any errors and throw the error + log.error( + 'Response data:', + error.response ? error.response.data : null, + ); + throw error; + } + + // Calculate the delay for the next retry attempt + let retryMultiplier = interfaceOptions.retryMultiplier || 0.3; + const delay = (currentRetry + 1) * retryMultiplier * 1000; + + // Wait for the specified delay before retrying + await new Promise((resolve) => setTimeout(resolve, delay)); + currentRetry++; + } + } + } +} + +// Adjust model alias for backwards compatibility +Telnyx.prototype.adjustModelAlias = adjustModelAlias; + +module.exports = Telnyx; diff --git a/src/interfaces/togetherai.js b/src/interfaces/togetherai.js new file mode 100644 index 0000000..e1e533b --- /dev/null +++ b/src/interfaces/togetherai.js @@ -0,0 +1,159 @@ +/** + * @file src/interfaces/togetherai.js + * @class TogetherAI + * @description Wrapper class for the Together AI API. + * @param {string} apiKey - The API key for Together AI.
+ */ + +const axios = require('axios'); +const { adjustModelAlias } = require('../utils/adjustModelAlias.js'); +const { getFromCache, saveToCache } = require('../utils/cache.js'); +const { + returnSimpleMessageObject, + returnModelByAlias, + parseJSON, +} = require('../utils/utils.js'); +const { togetherAIApiKey } = require('../config/config.js'); +const config = require('../config/llmProviders.json'); +const log = require('loglevel'); + +// TogetherAI class for interacting with the Together AI API +class TogetherAI { + /** + * Constructor for the TogetherAI class. + * @param {string} apiKey - The API key for the Together AI API. + */ + constructor(apiKey) { + this.interfaceName = 'togetherai'; + this.apiKey = apiKey || togetherAIApiKey; + this.client = axios.create({ + baseURL: 'https://api.together.ai', // Together AI API base URL + headers: { + 'Content-Type': 'application/json', + Authorization: `Bearer ${this.apiKey}`, + }, + }); + } + + /** + * Send a message to the Together AI API. + * @param {string|object} message - The message to send or a message object. + * @param {object} options - Additional options for the API request. + * @param {object} interfaceOptions - Options specific to the interface. + * @returns {string} The response content from the Together AI API. + */ + async sendMessage(message, options = {}, interfaceOptions = {}) { + // Convert a string message to a simple message object + const messageObject = + typeof message === 'string' + ? returnSimpleMessageObject(message) + : message; + + // Get the cache timeout value from interfaceOptions + const cacheTimeoutSeconds = + typeof interfaceOptions === 'number' + ? interfaceOptions + : interfaceOptions.cacheTimeoutSeconds; + + // Extract model and messages from the message object + const { model, messages } = messageObject; + + // Get the selected model based on alias or default + const selectedModel = returnModelByAlias(this.interfaceName, model); + + // Set default values for maxTokens and stopSequences + const { maxTokens = 150, stopSequences = ['<|endoftext|>'] } = options; + + // Prepare the request body for the API call + const requestBody = { + model: + selectedModel || + options.model || + config[this.interfaceName].model.default.name, + messages, + maxTokens, + stopSequences, + ...options, + }; + + // Generate a cache key based on the request body + const cacheKey = JSON.stringify(requestBody); + + // Check if a cached response exists for the request + if (cacheTimeoutSeconds) { + const cachedResponse = getFromCache(cacheKey); + if (cachedResponse) { + return cachedResponse; + } + } + + // Set up retry mechanism with exponential backoff + let retryAttempts = interfaceOptions.retryAttempts || 0; + let currentRetry = 0; + + while (retryAttempts >= 0) { + try { + // Send the request to the Together AI API + const response = await this.client.post( + '/v1/models/generate', + requestBody, + ); + + // Extract the response content from the API response + let responseContent = null; + if ( + response && + response.data && + response.data.results && + response.data.results[0] && + response.data.results[0].generatedText + ) { + responseContent = response.data.results[0].generatedText; + } + + // Attempt to repair the object if needed + if (interfaceOptions.attemptJsonRepair) { + responseContent = await parseJSON( + responseContent, + interfaceOptions.attemptJsonRepair, + ); + } + + // Build response object + responseContent = { results: responseContent }; + + // Cache the response content if cache timeout is set + if (cacheTimeoutSeconds && responseContent) { + saveToCache(cacheKey,
responseContent, cacheTimeoutSeconds); + } + + // Return the response content + return responseContent; + } catch (error) { + // Decrease the number of retry attempts + retryAttempts--; + if (retryAttempts < 0) { + // Log any errors and throw the error + log.error( + 'Response data:', + error.response ? error.response.data : null, + ); + throw error; + } + + // Calculate the delay for the next retry attempt + let retryMultiplier = interfaceOptions.retryMultiplier || 0.3; + const delay = (currentRetry + 1) * retryMultiplier * 1000; + + // Wait for the specified delay before retrying + await new Promise((resolve) => setTimeout(resolve, delay)); + currentRetry++; + } + } + } +} + +// Adjust model alias for backwards compatibility +TogetherAI.prototype.adjustModelAlias = adjustModelAlias; + +module.exports = TogetherAI; diff --git a/src/utils/adjustModelAlias.js b/src/utils/adjustModelAlias.js new file mode 100644 index 0000000..977e602 --- /dev/null +++ b/src/utils/adjustModelAlias.js @@ -0,0 +1,28 @@ +/** + * Adjusts model alias values + * + * @param {string} alias - The model alias to update (e.g., "default", "large", "small"). + * @param {string} name - The new model name to set. + * @param {number} [tokens=null] - The optional token limit for the new model. + * @returns {boolean} - Returns true if the update was successful, otherwise false. + */ +function adjustModelAlias(alias, name, tokens = null) { + if ( + !this.interfaceName || + !config[this.interfaceName] || + !config[this.interfaceName].model || + !config[this.interfaceName].model[alias] + ) { + return false; + } + + const model = { name }; + if (tokens !== null) { + model.tokens = tokens; + } + + config[this.interfaceName].model[alias] = model; + return true; +} + +module.exports = { adjustModelAlias }; diff --git a/src/utils/cache.js b/src/utils/cache.js index 79fa124..e5385e9 100644 --- a/src/utils/cache.js +++ b/src/utils/cache.js @@ -1,18 +1,13 @@ /** * @file src/utils/cache.js - * @description Wrapper for flat-cache. + * @description Wrapper for flat-cache; only loads flat-cache when used, stored in a singleton. */ -const flatCache = require('flat-cache'); const path = require('path'); const crypto = require('crypto'); -// Name of the cache file -const cacheId = 'llm-interface-cache'; -const cacheDir = path.resolve(__dirname, '..', 'cache'); - -// Load the cache -const cache = flatCache.load(cacheId, cacheDir); +// Singleton to store the cache instance +let cacheInstance = null; /** * Converts a key to an MD5 hash. @@ -24,6 +19,21 @@ function getCacheFilePath(key) { return crypto.createHash('md5').update(key).digest('hex'); } +/** + * Loads the cache dynamically and stores it in the singleton if not already loaded. + * + * @returns {object} The flat-cache instance. + */ +function getCacheInstance() { + if (!cacheInstance) { + const flatCache = require('flat-cache'); + const cacheId = 'LLMInterface-cache'; + const cacheDir = path.resolve(__dirname, '../..', 'cache'); + cacheInstance = flatCache.load(cacheId, cacheDir); + } + return cacheInstance; +} + /** * Retrieves data from the cache. * @@ -31,6 +41,7 @@ function getCacheFilePath(key) { * @returns {any} The cached data or null if not found. */ function getFromCache(key) { + const cache = getCacheInstance(); const hashedKey = getCacheFilePath(key); return cache.getKey(hashedKey) || null; } @@ -42,6 +53,7 @@ function getFromCache(key) { * @param {any} data - The data to cache. 
*/ function saveToCache(key, data) { + const cache = getCacheInstance(); const hashedKey = getCacheFilePath(key); cache.setKey(hashedKey, data); cache.save(true); // Save to disk diff --git a/test/utils/defaults.js b/src/utils/defaults.js similarity index 100% rename from test/utils/defaults.js rename to src/utils/defaults.js diff --git a/test/utils/jestSerializer.js b/src/utils/jestSerializer.js similarity index 100% rename from test/utils/jestSerializer.js rename to src/utils/jestSerializer.js diff --git a/test/utils/suppressLogs.js b/src/utils/suppressLogs.js similarity index 100% rename from test/utils/suppressLogs.js rename to src/utils/suppressLogs.js diff --git a/src/utils/utils.js b/src/utils/utils.js index b6b9c91..7a665a5 100644 --- a/src/utils/utils.js +++ b/src/utils/utils.js @@ -66,9 +66,52 @@ function returnModelByAlias(provider, model) { return model; } +let jsonrepairInstance = null; + +/** + * Loads the jsonrepair dynamically and stores it in the singleton if not already loaded. + * + * @returns {Promise} A promise that resolves to the jsonrepair instance. + */ +async function getJsonRepairInstance() { + if (!jsonrepairInstance) { + const { jsonrepair } = await import('jsonrepair'); + jsonrepairInstance = jsonrepair; + } + return jsonrepairInstance; +} + +/** + * Attempts to parse a JSON string. If parsing fails and attemptRepair is true, + * it uses jsonrepair to try repairing the JSON string. + * + * @param {string} json - The JSON string to parse. + * @param {boolean} attemptRepair - Whether to attempt repairing the JSON string if parsing fails. + * @returns {Promise} - The parsed or repaired JSON object, or null if parsing and repair both fail. + */ +async function parseJSON(json, attemptRepair) { + try { + const parsed = JSON.parse(json); + return parsed; + } catch (e) { + if (attemptRepair) { + try { + const jsonrepair = await getJsonRepairInstance(); + const repaired = jsonrepair(json); + const reparsed = JSON.parse(repaired); + return reparsed; + } catch (importError) { + return null; + } + } else { + return null; + } + } +} module.exports = { returnMessageObject, returnSimpleMessageObject, returnModelByAlias, + parseJSON, }; diff --git a/test/basic/ai21.test.js b/test/basic/ai21.test.js index 9f5e590..dfffef6 100644 --- a/test/basic/ai21.test.js +++ b/test/basic/ai21.test.js @@ -9,7 +9,7 @@ const { simplePrompt, options, expectedMaxLength, -} = require('../utils/defaults.js'); +} = require('../../src/utils/defaults.js'); describe('AI21 Basic', () => { if (ai21ApiKey) { @@ -36,11 +36,11 @@ describe('AI21 Basic', () => { }; response = await ai21.sendMessage(message, options); - expect(typeof response).toBe('string'); + expect(typeof response).toStrictEqual('object'); }); test(`Response should be less than ${expectedMaxLength} characters`, async () => { - expect(response.length).toBeLessThan(expectedMaxLength); + expect(response.results.length).toBeLessThan(expectedMaxLength); }); } else { test.skip(`API Key is not set`, () => {}); diff --git a/test/basic/anthropic.test.js b/test/basic/anthropic.test.js index 0525585..0d51cc2 100644 --- a/test/basic/anthropic.test.js +++ b/test/basic/anthropic.test.js @@ -9,8 +9,8 @@ const { simplePrompt, options, expectedMaxLength, -} = require('../utils/defaults.js'); -const { safeStringify } = require('../utils/jestSerializer.js'); // Adjust the path if necessary +} = require('../../src/utils/defaults.js'); +const { safeStringify } = require('../../src/utils/jestSerializer.js'); // Adjust the path if necessary describe('Anthropic 
Basic', () => { if (anthropicApiKey) { @@ -43,14 +43,14 @@ describe('Anthropic Basic', () => { try { response = await anthropic.sendMessage(message, options); - expect(typeof response).toBe('string'); + expect(typeof response).toStrictEqual('object'); } catch (error) { throw new Error(`Test failed: ${safeStringify(error)}`); } }, 30000); test(`Response should be less than ${expectedMaxLength} characters`, async () => { - expect(response.length).toBeLessThan(expectedMaxLength); + expect(response.results.length).toBeLessThan(expectedMaxLength); }); } else { test.skip(`API Key is not set`, () => {}); diff --git a/test/basic/cloudflareai.test.js b/test/basic/cloudflareai.test.js new file mode 100644 index 0000000..5b1bf94 --- /dev/null +++ b/test/basic/cloudflareai.test.js @@ -0,0 +1,55 @@ +/** + * @file test/basic/cloudflareai.test.js + * @description Tests for the CloudflareAI API client. + */ + +const CloudflareAI = require('../../src/interfaces/cloudflareai.js'); +const { + cloudflareaiApiKey, + cloudflareaiAccountId, +} = require('../../src/config/config.js'); +const { + simplePrompt, + options, + expectedMaxLength, +} = require('../../src/utils/defaults.js'); + +describe('CloudflareAI Basic', () => { + if (cloudflareaiApiKey) { + let response; + + test('API Key should be set', () => { + expect(typeof cloudflareaiApiKey).toBe('string'); + }); + + test('API Client should send a message and receive a response', async () => { + const cloudflareai = new CloudflareAI( + cloudflareaiApiKey, + cloudflareaiAccountId, + ); + const message = { + model: '@cf/meta/llama-3-8b-instruct', // Replace with the appropriate model name + messages: [ + { + role: 'system', + content: 'You are a helpful assistant.', + }, + { + role: 'user', + content: simplePrompt, + }, + ], + }; + + response = await cloudflareai.sendMessage(message, options); + + expect(typeof response).toStrictEqual('object'); + }, 30000); + + test(`Response should be less than ${expectedMaxLength} characters`, () => { + expect(response.results.length).toBeLessThan(expectedMaxLength); + }); + } else { + test.skip(`API Key is not set`, () => {}); + } +}); diff --git a/test/basic/cohere.test.js b/test/basic/cohere.test.js index e3bc314..8432f05 100644 --- a/test/basic/cohere.test.js +++ b/test/basic/cohere.test.js @@ -9,7 +9,7 @@ const { simplePrompt, options, expectedMaxLength, -} = require('../utils/defaults.js'); +} = require('../../src/utils/defaults.js'); describe('Cohere Basic', () => { if (cohereApiKey) { @@ -40,10 +40,10 @@ describe('Cohere Basic', () => { }; response = await cohere.sendMessage(message, options); - expect(typeof response).toBe('string'); + expect(typeof response).toStrictEqual('object'); }, 30000); test(`Response should be less than ${expectedMaxLength} characters`, async () => { - expect(response.length).toBeLessThan(expectedMaxLength); + expect(response.results.length).toBeLessThan(expectedMaxLength); }); } else { test.skip(`API Key is not set`, () => {}); diff --git a/test/basic/fireworksai.test.js b/test/basic/fireworksai.test.js new file mode 100644 index 0000000..f5fdd1d --- /dev/null +++ b/test/basic/fireworksai.test.js @@ -0,0 +1,48 @@ +/** + * @file test/basic/fireworksai.test.js + * @description Tests for the FireworksAI API client. 
+ */ + +const FireworksAI = require('../../src/interfaces/fireworksai.js'); +const { fireworksaiApiKey } = require('../../src/config/config.js'); +const { + simplePrompt, + options, + expectedMaxLength, +} = require('../../src/utils/defaults.js'); + +describe('FireworksAI Basic', () => { + if (fireworksaiApiKey) { + let response; + + test('API Key should be set', () => { + expect(typeof fireworksaiApiKey).toBe('string'); + }); + + test('API Client should send a message and receive a response', async () => { + const fireworksaiAI = new FireworksAI(fireworksaiApiKey); + const message = { + model: 'accounts/fireworks/models/phi-3-mini-128k-instruct', + messages: [ + { + role: 'system', + content: 'You are a helpful assistant.', + }, + { + role: 'user', + content: simplePrompt, + }, + ], + }; + + response = await fireworksaiAI.sendMessage(message, options); + expect(typeof response).toStrictEqual('object'); + }); + + test(`Response should be less than ${expectedMaxLength} characters`, async () => { + expect(response.results.length).toBeLessThan(expectedMaxLength); + }); + } else { + test.skip(`API Key is not set`, () => {}); + } +}); diff --git a/test/basic/gemini.test.js b/test/basic/gemini.test.js index 9285e3e..3811925 100644 --- a/test/basic/gemini.test.js +++ b/test/basic/gemini.test.js @@ -9,7 +9,7 @@ const { simplePrompt, options, expectedMaxLength, -} = require('../utils/defaults.js'); +} = require('../../src/utils/defaults.js'); describe('Gemini Basic', () => { if (geminiApiKey) { let response; @@ -34,10 +34,10 @@ describe('Gemini Basic', () => { }; response = await gemini.sendMessage(message, options); - expect(typeof response).toBe('string'); + expect(typeof response).toStrictEqual('object'); }); test(`Response should be less than ${expectedMaxLength} characters`, async () => { - expect(response.length).toBeLessThan(expectedMaxLength); + expect(response.results.length).toBeLessThan(expectedMaxLength); }); } else { test.skip(`API Key is not set`, () => {}); diff --git a/test/basic/goose.test.js b/test/basic/gooseai.test.js similarity index 67% rename from test/basic/goose.test.js rename to test/basic/gooseai.test.js index f86176b..617ab38 100644 --- a/test/basic/goose.test.js +++ b/test/basic/gooseai.test.js @@ -3,23 +3,23 @@ * @description Tests for the Goose AI API client. 
*/ -const Goose = require('../../src/interfaces/goose.js'); -const { gooseApiKey } = require('../../src/config/config.js'); +const GooseAI = require('../../src/interfaces/gooseai.js'); +const { gooseaiApiKey } = require('../../src/config/config.js'); const { simplePrompt, options, expectedMaxLength, -} = require('../utils/defaults.js'); +} = require('../../src/utils/defaults.js'); describe('Goose AI Basic', () => { - if (gooseApiKey) { + if (gooseaiApiKey) { let response; test('API Key should be set', async () => { - expect(typeof gooseApiKey).toBe('string'); + expect(typeof gooseaiApiKey).toBe('string'); }); test('API Client should send a message and receive a response', async () => { - const goose = new Goose(gooseApiKey); + const goose = new GooseAI(gooseaiApiKey); const message = { model: 'gpt-neo-20b', messages: [ @@ -35,11 +35,11 @@ describe('Goose AI Basic', () => { }; response = await goose.sendMessage(message, options); - expect(typeof response).toBe('string'); + expect(typeof response).toStrictEqual('object'); }, 30000); test(`Response should be less than ${expectedMaxLength} characters`, async () => { - expect(response.length).toBeLessThan(expectedMaxLength); + expect(response.results.length).toBeLessThan(expectedMaxLength); }); } else { test.skip(`API Key is not set`, () => {}); diff --git a/test/basic/groq.test.js b/test/basic/groq.test.js index 368bf47..5e8a424 100644 --- a/test/basic/groq.test.js +++ b/test/basic/groq.test.js @@ -9,7 +9,7 @@ const { simplePrompt, options, expectedMaxLength, -} = require('../utils/defaults.js'); +} = require('../../src/utils/defaults.js'); describe('Groq Basic', () => { if (groqApiKey) { let response; @@ -35,11 +35,11 @@ describe('Groq Basic', () => { }; response = await groq.sendMessage(message, options); - expect(typeof response).toBe('string'); + expect(typeof response).toStrictEqual('object'); }); test(`Response should be less than ${expectedMaxLength} characters`, async () => { - expect(response.length).toBeLessThan(expectedMaxLength); + expect(response.results.length).toBeLessThan(expectedMaxLength); }); } else { test.skip(`API Key is not set`, () => {}); diff --git a/test/basic/huggingface.test.js b/test/basic/huggingface.test.js index a93dbf3..8e9828d 100644 --- a/test/basic/huggingface.test.js +++ b/test/basic/huggingface.test.js @@ -9,7 +9,7 @@ const { simplePrompt, options, expectedMaxLength, -} = require('../utils/defaults.js'); +} = require('../../src/utils/defaults.js'); describe('HuggingFace Basic', () => { if (huggingfaceApiKey) { @@ -37,7 +37,7 @@ describe('HuggingFace Basic', () => { try { response = await huggingface.sendMessage(message, options); - expect(typeof response).toBe('string'); + expect(typeof response).toStrictEqual('object'); } catch (error) { console.error('Test failed:', error); throw error; @@ -45,7 +45,7 @@ describe('HuggingFace Basic', () => { }, 30000); test(`Response should be less than ${expectedMaxLength} characters`, async () => { - expect(response.length).toBeLessThan(expectedMaxLength); + expect(response.results.length).toBeLessThan(expectedMaxLength); }); } else { test.skip(`API Key is not set`, () => {}); diff --git a/test/basic/llamacpp.test.js b/test/basic/llamacpp.test.js index 99aa08a..5358dea 100644 --- a/test/basic/llamacpp.test.js +++ b/test/basic/llamacpp.test.js @@ -9,7 +9,7 @@ const { simplePrompt, options, expectedMaxLength, -} = require('../utils/defaults.js'); +} = require('../../src/utils/defaults.js'); const axios = require('axios'); describe('LlamaCPP Basic', () => { if (llamaURL) 
{ @@ -54,11 +54,11 @@ describe('LlamaCPP Basic', () => { }; response = await llamacpp.sendMessage(message, options); - expect(typeof response).toBe('string'); + expect(typeof response).toStrictEqual('object'); }, 30000); test(`Response should be less than ${expectedMaxLength} characters`, async () => { - expect(response.length).toBeLessThan(expectedMaxLength); + expect(response.results.length).toBeLessThan(expectedMaxLength); }); } else { test.skip(`API Key is not set`, () => {}); diff --git a/test/basic/mistral.test.js b/test/basic/mistralai.test.js similarity index 54% rename from test/basic/mistral.test.js rename to test/basic/mistralai.test.js index 1867c5b..2651ee8 100644 --- a/test/basic/mistral.test.js +++ b/test/basic/mistralai.test.js @@ -1,25 +1,25 @@ /** - * @file test/basic/mistral.test.js - * @description Tests for the Mistral API client. + * @file test/basic/mistralai.test.js + * @description Tests for the MistralAI API client. */ -const Mistral = require('../../src/interfaces/mistral.js'); -const { mistralApiKey } = require('../../src/config/config.js'); +const MistralAI = require('../../src/interfaces/mistralai.js'); +const { mistralaiApiKey } = require('../../src/config/config.js'); const { simplePrompt, options, expectedMaxLength, -} = require('../utils/defaults.js'); -describe('Mistral Basic', () => { - if (mistralApiKey) { +} = require('../../src/utils/defaults.js'); +describe('MistralAI Basic', () => { + if (mistralaiApiKey) { let response; test('API Key should be set', async () => { - expect(typeof mistralApiKey).toBe('string'); + expect(typeof mistralaiApiKey).toBe('string'); }); test('API Client should send a message and receive a response', async () => { - const mistral = new Mistral(mistralApiKey); + const mistralai = new MistralAI(mistralaiApiKey); const message = { model: 'mistral-large-latest', messages: [ @@ -31,16 +31,16 @@ describe('Mistral Basic', () => { ], }; try { - response = await mistral.sendMessage(message, options); + response = await mistralai.sendMessage(message, options); - expect(typeof response).toBe('string'); + expect(typeof response).toStrictEqual('object'); } catch (error) { throw new Error(`Test failed: ${error}`); } }, 30000); test(`Response should be less than ${expectedMaxLength} characters`, async () => { - expect(response.length).toBeLessThan(expectedMaxLength); + expect(response.results.length).toBeLessThan(expectedMaxLength); }); } else { test.skip(`API Key is not set`, () => {}); diff --git a/test/basic/openai.test.js b/test/basic/openai.test.js index 8a23364..d2f3aa6 100644 --- a/test/basic/openai.test.js +++ b/test/basic/openai.test.js @@ -9,7 +9,7 @@ const { simplePrompt, options, expectedMaxLength, -} = require('../utils/defaults.js'); +} = require('../../src/utils/defaults.js'); describe('OpenAI Basic', () => { if (openaiApiKey) { let response; @@ -35,11 +35,11 @@ describe('OpenAI Basic', () => { }; response = await openai.sendMessage(message, options); - expect(typeof response).toBe('string'); + expect(typeof response).toStrictEqual('object'); }); test(`Response should be less than ${expectedMaxLength} characters`, async () => { - expect(response.length).toBeLessThan(expectedMaxLength); + expect(response.results.length).toBeLessThan(expectedMaxLength); }); } else { test.skip(`API Key is not set`, () => {}); diff --git a/test/basic/perplexity.test.js b/test/basic/perplexity.test.js index ee03911..ff29f0c 100644 --- a/test/basic/perplexity.test.js +++ b/test/basic/perplexity.test.js @@ -9,7 +9,7 @@ const { simplePrompt, 
options, expectedMaxLength, -} = require('../utils/defaults.js'); +} = require('../../src/utils/defaults.js'); describe('Perplexity Basic', () => { if (perplexityApiKey) { let response; @@ -35,11 +35,11 @@ describe('Perplexity Basic', () => { }; response = await perplixity.sendMessage(message, options); - expect(typeof response).toBe('string'); + expect(typeof response).toStrictEqual('object'); }); test(`Response should be less than ${expectedMaxLength} characters`, async () => { - expect(response.length).toBeLessThan(expectedMaxLength); + expect(response.results.length).toBeLessThan(expectedMaxLength); }); } else { test.skip(`API Key is not set`, () => {}); diff --git a/test/basic/reka.test.js b/test/basic/rekaai.test.js similarity index 70% rename from test/basic/reka.test.js rename to test/basic/rekaai.test.js index 35493fd..ac4f7d4 100644 --- a/test/basic/reka.test.js +++ b/test/basic/rekaai.test.js @@ -3,23 +3,23 @@ * @description Tests for the Reka AI API client. */ -const Reka = require('../../src/interfaces/reka.js'); -const { rekaApiKey } = require('../../src/config/config.js'); +const RekaAI = require('../../src/interfaces/rekaai.js'); +const { rekaaiApiKey } = require('../../src/config/config.js'); const { simplePrompt, options, expectedMaxLength, -} = require('../utils/defaults.js'); -describe('Reka Basic', () => { - if (rekaApiKey) { +} = require('../../src/utils/defaults.js'); +describe('RekaAI Basic', () => { + if (rekaaiApiKey) { let response; test('API Key should be set', async () => { - expect(typeof rekaApiKey).toBe('string'); + expect(typeof rekaaiApiKey).toBe('string'); }); test('API Client should send a message and receive a response', async () => { - const reka = new Reka(rekaApiKey); + const reka = new RekaAI(rekaaiApiKey); const message = { model: 'reka-core', messages: [ @@ -40,7 +40,7 @@ describe('Reka Basic', () => { }; try { response = await reka.sendMessage(message, options); - expect(typeof response).toBe('string'); + expect(typeof response).toStrictEqual('object'); } catch (error) { console.error('Test failed:', error); throw error; @@ -48,7 +48,7 @@ describe('Reka Basic', () => { }, 30000); test(`Response should be less than ${expectedMaxLength} characters`, async () => { - expect(response.length).toBeLessThan(expectedMaxLength); + expect(response.results.length).toBeLessThan(expectedMaxLength); }); } else { test.skip(`API Key is not set`, () => {}); diff --git a/test/cache/ai21.test.js b/test/cache/ai21.test.js index 1fd9e80..9fc7b7a 100644 --- a/test/cache/ai21.test.js +++ b/test/cache/ai21.test.js @@ -9,9 +9,9 @@ const { simplePrompt, options, expectedMaxLength, -} = require('../utils/defaults.js'); +} = require('../../src/utils/defaults.js'); const { getFromCache, saveToCache } = require('../../src/utils/cache.js'); -const suppressLogs = require('../utils/suppressLogs.js'); +const suppressLogs = require('../../src/utils/suppressLogs.js'); jest.mock('../../src/utils/cache.js'); describe('AI21 Caching', () => { @@ -53,7 +53,7 @@ describe('AI21 Caching', () => { }); expect(getFromCache).toHaveBeenCalledWith(cacheKey); - expect(response).toBe(cachedResponse); + expect(response).toStrictEqual(cachedResponse); expect(saveToCache).not.toHaveBeenCalled(); }); @@ -70,9 +70,14 @@ describe('AI21 Caching', () => { }); expect(getFromCache).toHaveBeenCalledWith(cacheKey); - expect(response).toBe(apiResponse); - expect(saveToCache).toHaveBeenCalledWith(cacheKey, apiResponse, 60); + expect(response.results).toBe(apiResponse); + expect(saveToCache).toHaveBeenCalledWith( 
+ cacheKey, + { results: apiResponse }, + 60, + ); }); + test( 'Should respond with prompt API error messaging', suppressLogs(async () => { @@ -86,7 +91,7 @@ describe('AI21 Caching', () => { ).rejects.toThrow('API error'); expect(getFromCache).toHaveBeenCalledWith(cacheKey); - expect(saveToCache).not.toHaveBeenCalled(); + expect(saveToCache).not.toHaveBeenCalled(); // Corrected usage }), ); } else { diff --git a/test/cache/anthropic.test.js b/test/cache/anthropic.test.js index 3e07b3b..736b510 100644 --- a/test/cache/anthropic.test.js +++ b/test/cache/anthropic.test.js @@ -9,9 +9,9 @@ const { simplePrompt, options, expectedMaxLength, -} = require('../utils/defaults.js'); +} = require('../../src/utils/defaults.js'); const { getFromCache, saveToCache } = require('../../src/utils/cache.js'); -const suppressLogs = require('../utils/suppressLogs.js'); +const suppressLogs = require('../../src/utils/suppressLogs.js'); jest.mock('../../src/utils/cache.js'); describe('Anthropic Caching', () => { @@ -65,7 +65,7 @@ describe('Anthropic Caching', () => { }); expect(getFromCache).toHaveBeenCalledWith(cacheKey); - expect(response).toBe(cachedResponse); + expect(response).toStrictEqual(cachedResponse); expect(saveToCache).not.toHaveBeenCalled(); }); @@ -82,8 +82,12 @@ describe('Anthropic Caching', () => { }); expect(getFromCache).toHaveBeenCalledWith(cacheKey); - expect(response).toBe(apiResponse); - expect(saveToCache).toHaveBeenCalledWith(cacheKey, apiResponse, 60); + expect(response.results).toBe(apiResponse); + expect(saveToCache).toHaveBeenCalledWith( + cacheKey, + { results: apiResponse }, + 60, + ); }); test( 'Should respond with prompt API error messaging', diff --git a/test/cache/cohere.test.js b/test/cache/cohere.test.js index 3b8e1ac..5de5d2d 100644 --- a/test/cache/cohere.test.js +++ b/test/cache/cohere.test.js @@ -9,9 +9,9 @@ const { simplePrompt, options, expectedMaxLength, -} = require('../utils/defaults.js'); +} = require('../../src/utils/defaults.js'); const { getFromCache, saveToCache } = require('../../src/utils/cache.js'); -const suppressLogs = require('../utils/suppressLogs.js'); +const suppressLogs = require('../../src/utils/suppressLogs.js'); jest.mock('../../src/utils/cache.js'); describe('Cohere Caching', () => { @@ -64,7 +64,7 @@ describe('Cohere Caching', () => { }); expect(getFromCache).toHaveBeenCalledWith(cacheKey); - expect(response).toBe(cachedResponse); + expect(response).toStrictEqual(cachedResponse); expect(saveToCache).not.toHaveBeenCalled(); }); @@ -81,8 +81,12 @@ describe('Cohere Caching', () => { }); expect(getFromCache).toHaveBeenCalledWith(cacheKey); - expect(response).toBe(apiResponse); - expect(saveToCache).toHaveBeenCalledWith(cacheKey, apiResponse, 60); + expect(response.results).toBe(apiResponse); + expect(saveToCache).toHaveBeenCalledWith( + cacheKey, + { results: apiResponse }, + 60, + ); }); test( 'Should respond with prompt API error messaging', diff --git a/test/cache/gemini.test.js b/test/cache/gemini.test.js index fbaffc8..c29c637 100644 --- a/test/cache/gemini.test.js +++ b/test/cache/gemini.test.js @@ -9,9 +9,9 @@ const { simplePrompt, options, expectedMaxLength, -} = require('../utils/defaults.js'); +} = require('../../src/utils/defaults.js'); const { getFromCache, saveToCache } = require('../../src/utils/cache.js'); -const suppressLogs = require('../utils/suppressLogs.js'); +const suppressLogs = require('../../src/utils/suppressLogs.js'); jest.mock('../../src/utils/cache.js'); describe('Gemini Caching', () => { @@ -58,7 +58,7 @@ describe('Gemini 
Caching', () => { ); expect(getFromCache).toHaveBeenCalledWith(createCacheKey(100)); - expect(response).toBe(cachedResponse); + expect(response).toStrictEqual(cachedResponse); expect(saveToCache).not.toHaveBeenCalled(); }); @@ -84,10 +84,10 @@ describe('Gemini Caching', () => { ); expect(getFromCache).toHaveBeenCalledWith(createCacheKey(100)); - expect(response).toBe(apiResponse); + expect(response.results).toBe(apiResponse); expect(saveToCache).toHaveBeenCalledWith( createCacheKey(100), - apiResponse, + { results: apiResponse }, 60, ); }); diff --git a/test/cache/goose.test.js b/test/cache/gooseai.test.js similarity index 77% rename from test/cache/goose.test.js rename to test/cache/gooseai.test.js index d731399..b2380f2 100644 --- a/test/cache/goose.test.js +++ b/test/cache/gooseai.test.js @@ -1,22 +1,22 @@ /** * @file test/cache/goose.test.js - * @description Tests for the caching mechanism in the Goose class. + * @description Tests for the caching mechanism in the GooseAI class. */ -const Goose = require('../../src/interfaces/goose.js'); -const { gooseApiKey } = require('../../src/config/config.js'); +const GooseAI = require('../../src/interfaces/gooseai.js'); +const { gooseaiApiKey } = require('../../src/config/config.js'); const { simplePrompt, options, expectedMaxLength, -} = require('../utils/defaults.js'); +} = require('../../src/utils/defaults.js'); const { getFromCache, saveToCache } = require('../../src/utils/cache.js'); -const suppressLogs = require('../utils/suppressLogs.js'); +const suppressLogs = require('../../src/utils/suppressLogs.js'); jest.mock('../../src/utils/cache.js'); -describe('Goose Caching', () => { - if (gooseApiKey) { - const goose = new Goose(gooseApiKey); +describe('GooseAI Caching', () => { + if (gooseaiApiKey) { + const goose = new GooseAI(gooseaiApiKey); const message = { model: 'gpt-neo-20b', @@ -45,7 +45,7 @@ describe('Goose Caching', () => { }); test('API Key should be set', async () => { - expect(typeof gooseApiKey).toBe('string'); + expect(typeof gooseaiApiKey).toBe('string'); }); test('API should return cached response if available', async () => { @@ -57,7 +57,7 @@ describe('Goose Caching', () => { }); expect(getFromCache).toHaveBeenCalledWith(cacheKey); - expect(response).toBe(cachedResponse); + expect(response).toStrictEqual(cachedResponse); expect(saveToCache).not.toHaveBeenCalled(); }); @@ -74,8 +74,12 @@ describe('Goose Caching', () => { }); expect(getFromCache).toHaveBeenCalledWith(cacheKey); - expect(response).toBe(apiResponse); - expect(saveToCache).toHaveBeenCalledWith(cacheKey, apiResponse, 60); + expect(response.results).toBe(apiResponse); + expect(saveToCache).toHaveBeenCalledWith( + cacheKey, + { results: apiResponse }, + 60, + ); }); test( 'Should respond with prompt API error messaging', diff --git a/test/cache/groq.test.js b/test/cache/groq.test.js index 9e6a544..7720d56 100644 --- a/test/cache/groq.test.js +++ b/test/cache/groq.test.js @@ -9,9 +9,9 @@ const { simplePrompt, options, expectedMaxLength, -} = require('../utils/defaults.js'); +} = require('../../src/utils/defaults.js'); const { getFromCache, saveToCache } = require('../../src/utils/cache.js'); -const suppressLogs = require('../utils/suppressLogs.js'); +const suppressLogs = require('../../src/utils/suppressLogs.js'); jest.mock('../../src/utils/cache.js'); describe('Groq Caching', () => { @@ -53,7 +53,7 @@ describe('Groq Caching', () => { }); expect(getFromCache).toHaveBeenCalledWith(cacheKey); - expect(response).toBe(cachedResponse); + 
expect(response).toStrictEqual(cachedResponse); expect(saveToCache).not.toHaveBeenCalled(); }); @@ -70,8 +70,12 @@ describe('Groq Caching', () => { }); expect(getFromCache).toHaveBeenCalledWith(cacheKey); - expect(response).toBe(apiResponse); - expect(saveToCache).toHaveBeenCalledWith(cacheKey, apiResponse, 60); + expect(response.results).toBe(apiResponse); + expect(saveToCache).toHaveBeenCalledWith( + cacheKey, + { results: apiResponse }, + 60, + ); }); test( 'Should respond with prompt API error messaging', diff --git a/test/cache/huggingface.test.js b/test/cache/huggingface.test.js index 6793666..5e1915b 100644 --- a/test/cache/huggingface.test.js +++ b/test/cache/huggingface.test.js @@ -4,9 +4,9 @@ const { simplePrompt, options, expectedMaxLength, -} = require('../utils/defaults.js'); +} = require('../../src/utils/defaults.js'); const { getFromCache, saveToCache } = require('../../src/utils/cache.js'); -const suppressLogs = require('../utils/suppressLogs.js'); +const suppressLogs = require('../../src/utils/suppressLogs.js'); jest.mock('../../src/utils/cache.js'); describe('HuggingFace Caching', () => { @@ -50,7 +50,7 @@ describe('HuggingFace Caching', () => { }); expect(getFromCache).toHaveBeenCalledWith(cacheKey); - expect(response).toBe(mockResponse[0].generated_text); + expect(response).toStrictEqual(mockResponse[0].generated_text); expect(saveToCache).not.toHaveBeenCalled(); }); @@ -73,12 +73,9 @@ describe('HuggingFace Caching', () => { inputs: inputs, parameters: { max_new_tokens: options.max_tokens }, // Ensure the correct value is expected }); - expect(response).toBe(mockResponse[0].generated_text); - expect(saveToCache).toHaveBeenCalledWith( - cacheKey, - mockResponse[0].generated_text, - 60, - ); + const expectedResult = { results: mockResponse[0].generated_text }; + expect(response).toStrictEqual(expectedResult); + expect(saveToCache).toHaveBeenCalledWith(cacheKey, expectedResult, 60); }); test( diff --git a/test/cache/llamacpp.test.js b/test/cache/llamacpp.test.js index 5d14a47..9d9c2ac 100644 --- a/test/cache/llamacpp.test.js +++ b/test/cache/llamacpp.test.js @@ -9,9 +9,9 @@ const { simplePrompt, options, expectedMaxLength, -} = require('../utils/defaults.js'); +} = require('../../src/utils/defaults.js'); const { getFromCache, saveToCache } = require('../../src/utils/cache.js'); -const suppressLogs = require('../utils/suppressLogs.js'); +const suppressLogs = require('../../src/utils/suppressLogs.js'); jest.mock('../../src/utils/cache.js'); describe('LlamaCPP Caching', () => { @@ -55,7 +55,7 @@ describe('LlamaCPP Caching', () => { }); expect(getFromCache).toHaveBeenCalledWith(cacheKey); - expect(response).toBe(cachedResponse); + expect(response).toStrictEqual(cachedResponse); expect(saveToCache).not.toHaveBeenCalled(); }); @@ -72,8 +72,12 @@ describe('LlamaCPP Caching', () => { }); expect(getFromCache).toHaveBeenCalledWith(cacheKey); - expect(response).toBe(apiResponse); - expect(saveToCache).toHaveBeenCalledWith(cacheKey, apiResponse, 60); + expect(response.results).toBe(apiResponse); + expect(saveToCache).toHaveBeenCalledWith( + cacheKey, + { results: apiResponse }, + 60, + ); }); test( 'Should respond with prompt API error messaging', diff --git a/test/cache/mistral.test.js b/test/cache/mistralai.test.js similarity index 64% rename from test/cache/mistral.test.js rename to test/cache/mistralai.test.js index 45c6fc8..2c9e494 100644 --- a/test/cache/mistral.test.js +++ b/test/cache/mistralai.test.js @@ -1,25 +1,25 @@ /** - * @file test/cache/mistral.test.js - * @description 
Tests for the caching mechanism in the Mistral class. + * @file test/cache/mistralai.test.js + * @description Tests for the caching mechanism in the MistralAI class. */ -const Mistral = require('../../src/interfaces/mistral'); -const { mistralApiKey } = require('../../src/config/config.js'); +const MistralAI = require('../../src/interfaces/mistralai.js'); +const { mistralaiApiKey } = require('../../src/config/config.js'); const { simplePrompt, options, expectedMaxLength, -} = require('../utils/defaults.js'); +} = require('../../src/utils/defaults.js'); const { getFromCache, saveToCache } = require('../../src/utils/cache.js'); -const suppressLogs = require('../utils/suppressLogs.js'); +const suppressLogs = require('../../src/utils/suppressLogs.js'); jest.mock('../../src/utils/cache.js'); -describe('Mistral Caching', () => { - if (mistralApiKey) { - const mistral = new Mistral(mistralApiKey); +describe('MistralAI Caching', () => { + if (mistralaiApiKey) { + const mistralai = new MistralAI(mistralaiApiKey); const message = { model: 'mistral-1.0', messages: [ { role: 'system', content: 'You are a helpful assistant.' }, { @@ -41,19 +41,19 @@ describe('Mistral Caching', () => { }); test('API Key should be set', async () => { - expect(typeof mistralApiKey).toBe('string'); + expect(typeof mistralaiApiKey).toBe('string'); }); test('API should return cached response if available', async () => { const cachedResponse = 'Cached response'; getFromCache.mockReturnValue(cachedResponse); - const response = await mistral.sendMessage(message, options, { + const response = await mistralai.sendMessage(message, options, { cacheTimeoutSeconds: 60, }); expect(getFromCache).toHaveBeenCalledWith(cacheKey); - expect(response).toBe(cachedResponse); + expect(response).toStrictEqual(cachedResponse); expect(saveToCache).not.toHaveBeenCalled(); }); @@ -61,28 +61,32 @@ describe('Mistral Caching', () => { getFromCache.mockReturnValue(null); const apiResponse = 'API response'; - mistral.client.post = jest.fn().mockResolvedValue({ + mistralai.client.post = jest.fn().mockResolvedValue({ data: { choices: [{ message: { content: apiResponse } }] }, }); - const response = await mistral.sendMessage(message, options, { + const response = await mistralai.sendMessage(message, options, { cacheTimeoutSeconds: 60, }); expect(getFromCache).toHaveBeenCalledWith(cacheKey); - expect(response).toBe(apiResponse); - expect(saveToCache).toHaveBeenCalledWith(cacheKey, apiResponse, 60); + expect(response.results).toBe(apiResponse); + expect(saveToCache).toHaveBeenCalledWith( + cacheKey, + { results: apiResponse }, + 60, + ); }); test( 'Should respond with prompt API error messaging', suppressLogs(async () => { getFromCache.mockReturnValue(null); - mistral.client.post = jest + mistralai.client.post = jest .fn() .mockRejectedValue(new Error('API error')); await expect( - mistral.sendMessage(message, options, { + mistralai.sendMessage(message, options, { cacheTimeoutSeconds: 60, }), ).rejects.toThrow('API error'); diff --git a/test/cache/openai.test.js b/test/cache/openai.test.js index c6dd083..4e1eeda 100644 --- a/test/cache/openai.test.js +++ b/test/cache/openai.test.js @@ -9,9 +9,9 @@ const { simplePrompt, options, expectedMaxLength, -} = require('../utils/defaults.js'); +} = require('../../src/utils/defaults.js'); const { getFromCache, saveToCache } = require('../../src/utils/cache.js'); -const suppressLogs = require('../utils/suppressLogs.js'); +const suppressLogs = require('../../src/utils/suppressLogs.js');
 jest.mock('../../src/utils/cache.js');
 describe('OpenAI Caching', () => {
@@ -53,7 +53,7 @@ describe('OpenAI Caching', () => {
       });
       expect(getFromCache).toHaveBeenCalledWith(cacheKey);
-      expect(response).toBe(cachedResponse);
+      expect(response).toStrictEqual(cachedResponse);
       expect(saveToCache).not.toHaveBeenCalled();
     });
@@ -70,8 +70,12 @@ describe('OpenAI Caching', () => {
       });
       expect(getFromCache).toHaveBeenCalledWith(cacheKey);
-      expect(response).toBe(apiResponse);
-      expect(saveToCache).toHaveBeenCalledWith(cacheKey, apiResponse, 60);
+      expect(response.results).toBe(apiResponse);
+      expect(saveToCache).toHaveBeenCalledWith(
+        cacheKey,
+        { results: apiResponse },
+        60,
+      );
     });
     test(
       'Should respond with prompt API error messaging',
diff --git a/test/cache/perplexity.test.js b/test/cache/perplexity.test.js
index e3773bb..a6b599c 100644
--- a/test/cache/perplexity.test.js
+++ b/test/cache/perplexity.test.js
@@ -9,9 +9,9 @@ const {
   simplePrompt,
   options,
   expectedMaxLength,
-} = require('../utils/defaults.js');
+} = require('../../src/utils/defaults.js');
 const { getFromCache, saveToCache } = require('../../src/utils/cache.js');
-const suppressLogs = require('../utils/suppressLogs.js');
+const suppressLogs = require('../../src/utils/suppressLogs.js');
 jest.mock('../../src/utils/cache.js');
 describe('Perplexity API Caching', () => {
@@ -53,7 +53,7 @@ describe('Perplexity API Caching', () => {
       });
       expect(getFromCache).toHaveBeenCalledWith(cacheKey);
-      expect(response).toBe(cachedResponse);
+      expect(response).toStrictEqual(cachedResponse);
       expect(saveToCache).not.toHaveBeenCalled();
     });
@@ -70,8 +70,12 @@ describe('Perplexity API Caching', () => {
       });
       expect(getFromCache).toHaveBeenCalledWith(cacheKey);
-      expect(response).toBe(apiResponse);
-      expect(saveToCache).toHaveBeenCalledWith(cacheKey, apiResponse, 60);
+      expect(response.results).toBe(apiResponse);
+      expect(saveToCache).toHaveBeenCalledWith(
+        cacheKey,
+        { results: apiResponse },
+        60,
+      );
     });
     test(
       'Should respond with prompt API error messaging',
diff --git a/test/cache/reka.test.js b/test/cache/rekaai.test.js
similarity index 77%
rename from test/cache/reka.test.js
rename to test/cache/rekaai.test.js
index f101846..14abe08 100644
--- a/test/cache/reka.test.js
+++ b/test/cache/rekaai.test.js
@@ -1,22 +1,22 @@
 /**
- * @file test/cache/reka.test.js
- * @description Tests for the caching mechanism in the Reka class.
+ * @file test/cache/rekaai.test.js
+ * @description Tests for the caching mechanism in the RekaAI class.
  */
-const Reka = require('../../src/interfaces/reka.js');
-const { rekaApiKey } = require('../../src/config/config.js');
+const RekaAI = require('../../src/interfaces/rekaai.js');
+const { rekaaiApiKey } = require('../../src/config/config.js');
 const {
   simplePrompt,
   options,
   expectedMaxLength,
-} = require('../utils/defaults.js');
+} = require('../../src/utils/defaults.js');
 const { getFromCache, saveToCache } = require('../../src/utils/cache.js');
-const suppressLogs = require('../utils/suppressLogs.js');
+const suppressLogs = require('../../src/utils/suppressLogs.js');
 jest.mock('../../src/utils/cache.js');
-describe('Reka Caching', () => {
-  if (rekaApiKey) {
-    const reka = new Reka(rekaApiKey);
+describe('RekaAI Caching', () => {
+  if (rekaaiApiKey) {
+    const reka = new RekaAI(rekaaiApiKey);
     const message = {
       model: 'reka-core',
@@ -49,11 +49,11 @@ describe('Reka Caching', () => {
     });
     test('API Key should be set', async () => {
-      expect(typeof rekaApiKey).toBe('string');
+      expect(typeof rekaaiApiKey).toBe('string');
     });
     test('API should return cached response if available', async () => {
-      const cachedResponse = 'Cached response';
+      const cachedResponse = { results: 'Cached response' };
       getFromCache.mockReturnValue(cachedResponse);
       const response = await reka.sendMessage(message, options, {
@@ -61,7 +61,7 @@ describe('Reka Caching', () => {
       });
       expect(getFromCache).toHaveBeenCalledWith(cacheKey);
-      expect(response).toBe(cachedResponse);
+      expect(response).toStrictEqual(cachedResponse);
       expect(saveToCache).not.toHaveBeenCalled();
     });
@@ -82,8 +82,12 @@ describe('Reka Caching', () => {
       });
       expect(getFromCache).toHaveBeenCalledWith(cacheKey);
-      expect(response).toBe(apiResponse);
-      expect(saveToCache).toHaveBeenCalledWith(cacheKey, apiResponse, 60);
+      expect(response.results).toBe(apiResponse);
+      expect(saveToCache).toHaveBeenCalledWith(
+        cacheKey,
+        { results: apiResponse },
+        60,
+      );
     });
     test(
       'Should respond with prompt API error messaging',
diff --git a/test/json/gemini.test.js b/test/json/gemini.test.js
index f0ffe63..1e960dd 100644
--- a/test/json/gemini.test.js
+++ b/test/json/gemini.test.js
@@ -9,7 +9,7 @@ const {
   simplePrompt,
   options,
   expectedMaxLength,
-} = require('../utils/defaults.js');
+} = require('../../src/utils/defaults.js');
 describe('Gemini JSON', () => {
   if (geminiApiKey) {
     test('API Key should be set', async () => {
@@ -32,10 +32,10 @@ describe('Gemini JSON', () => {
         ],
       };
       const response = await gemini.sendMessage(message, {
-        max_tokens: options.max_tokens,
+        max_tokens: options.max_tokens * 2,
         response_format: 'json_object',
       });
-      expect(typeof response).toBe('object');
+      expect(typeof response).toStrictEqual('object');
     });
   } else {
     test.skip(`API Key is not set`, () => {});
diff --git a/test/json/openai.jsonrepair.test.js b/test/json/openai.jsonrepair.test.js
new file mode 100644
index 0000000..d1dceef
--- /dev/null
+++ b/test/json/openai.jsonrepair.test.js
@@ -0,0 +1,49 @@
+/**
+ * @file test/json/openai.jsonrepair.test.js
+ * @description Tests for the OpenAI API client JSON output with JSON repair enabled.
+ */
+
+const OpenAI = require('../../src/interfaces/openai.js');
+const { openaiApiKey } = require('../../src/config/config.js');
+const {
+  simplePrompt,
+  options,
+  expectedMaxLength,
+} = require('../../src/utils/defaults.js');
+
+describe('OpenAI JSON', () => {
+  if (openaiApiKey) {
+    test('API Key should be set', async () => {
+      expect(typeof openaiApiKey).toBe('string');
+    });
+
+    test('API Client should send a message and receive a JSON response', async () => {
+      const openai = new OpenAI(openaiApiKey);
+      const message = {
+        model: 'gpt-3.5-turbo',
+        messages: [
+          {
+            role: 'system',
+            content: 'You are a helpful assistant.',
+          },
+          {
+            role: 'user',
+            content: `${simplePrompt} Provide 5 result items. Return the results as a JSON object. Follow this format: [{reason, reasonDescription}]`,
+          },
+        ],
+      };
+      const response = await openai.sendMessage(
+        message,
+        {
+          max_tokens: options.max_tokens,
+          response_format: 'json_object',
+        },
+        { attemptJsonRepair: true },
+      );
+
+      expect(typeof response).toStrictEqual('object');
+    });
+  } else {
+    test.skip(`API Key is not set`, () => {});
+  }
+});
diff --git a/test/json/openai.test.js b/test/json/openai.test.js
index 6b03750..59724fe 100644
--- a/test/json/openai.test.js
+++ b/test/json/openai.test.js
@@ -9,7 +9,7 @@ const {
   simplePrompt,
   options,
   expectedMaxLength,
-} = require('../utils/defaults.js');
+} = require('../../src/utils/defaults.js');
 describe('OpenAI JSON', () => {
   if (openaiApiKey) {
@@ -33,11 +33,11 @@ describe('OpenAI JSON', () => {
         ],
       };
       const response = await openai.sendMessage(message, {
-        max_tokens: options.max_tokens,
+        max_tokens: options.max_tokens * 2,
         response_format: 'json_object',
       });
-      expect(typeof response).toBe('object');
+      expect(typeof response).toStrictEqual('object');
     });
   } else {
     test.skip(`API Key is not set`, () => {});
diff --git a/test/main/llmInterfaceSendMessage.test.js b/test/main/llmInterfaceSendMessage.test.js
index 203011d..b12a94f 100644
--- a/test/main/llmInterfaceSendMessage.test.js
+++ b/test/main/llmInterfaceSendMessage.test.js
@@ -5,43 +5,52 @@
 const { LLMInterfaceSendMessage } = require('../../src/index.js');
 const config = require('../../src/config/config.js');
-const {
-  simplePrompt,
-  options,
-  expectedMaxLength,
-} = require('../utils/defaults.js');
+const { simplePrompt, options } = require('../../src/utils/defaults.js');
 let modules = {
   openai: config.openaiApiKey,
   anthropic: config.anthropicApiKey,
   gemini: config.geminiApiKey,
   llamacpp: config.llamaURL,
-  reka: config.rekaApiKey,
+  rekaai: config.rekaaiApiKey,
   groq: config.groqApiKey,
-  goose: config.gooseApiKey,
+  gooseai: config.gooseaiApiKey,
   cohere: config.cohereApiKey,
-  mistral: config.mistralApiKey,
+  mistralai: config.mistralaiApiKey,
   huggingface: config.huggingfaceApiKey,
   ai21: config.ai21ApiKey,
   perplexity: config.perplexityApiKey,
+  cloudflareai: [config.cloudflareaiApiKey, config.cloudflareaiAccountId],
+  fireworksai: config.fireworksaiApiKey,
 };
-for (const [module, apiKey] of Object.entries(modules)) {
+for (let [module, apiKey] of Object.entries(modules)) {
   if (apiKey) {
+    let accountId = false;
+    if (Array.isArray(apiKey)) {
+      [apiKey, accountId] = apiKey;
+    }
+
     describe(`LLMInterfaceSendMessage("${module}")`, () => {
       test(`API Key should be set`, () => {
        expect(typeof apiKey).toBe('string');
      });
+      if (accountId) {
+        test(`Account ID should be set`, () => {
+          expect(typeof accountId).toBe('string');
+        });
+      }
+
      test(`API Client should send a message and receive a response`, async () => {
        const response = await LLMInterfaceSendMessage(
          module,
-          apiKey,
+          !accountId ? apiKey : [apiKey, accountId],
          simplePrompt,
          options,
        );
-        expect(typeof response).toBe('string');
+        expect(typeof response).toStrictEqual('object');
      }, 30000);
    });
  } else {
diff --git a/test/simple/ai21.test.js b/test/simple/ai21.test.js
index 1c69ed1..ff1ace8 100644
--- a/test/simple/ai21.test.js
+++ b/test/simple/ai21.test.js
@@ -9,7 +9,7 @@ const {
   simplePrompt,
   options,
   expectedMaxLength,
-} = require('../utils/defaults.js');
+} = require('../../src/utils/defaults.js');
 describe('AI21 Simple', () => {
   if (ai21ApiKey) {
@@ -22,10 +22,10 @@ describe('AI21 Simple', () => {
      const ai21 = new AI21(ai21ApiKey);
      response = await ai21.sendMessage(simplePrompt, options);
-      expect(typeof response).toBe('string');
+      expect(typeof response).toStrictEqual('object');
    });
    test(`Response should be less than ${expectedMaxLength} characters`, async () => {
-      expect(response.length).toBeLessThan(expectedMaxLength);
+      expect(response.results.length).toBeLessThan(expectedMaxLength);
    });
  } else {
    test.skip(`API Key is not set`, () => {});
diff --git a/test/simple/anthropic.test.js b/test/simple/anthropic.test.js
index d6b7ee0..c4338da 100644
--- a/test/simple/anthropic.test.js
+++ b/test/simple/anthropic.test.js
@@ -9,7 +9,7 @@ const {
   simplePrompt,
   options,
   expectedMaxLength,
-} = require('../utils/defaults.js');
+} = require('../../src/utils/defaults.js');
 describe('Anthropic Simple', () => {
   if (anthropicApiKey) {
     let response;
@@ -22,13 +22,13 @@ describe('Anthropic Simple', () => {
      try {
        response = await anthropic.sendMessage(simplePrompt, options);
-        expect(typeof response).toBe('string');
+        expect(typeof response).toStrictEqual('object');
      } catch (error) {
        throw new Error(`Test failed: ${error}`);
      }
    }, 30000);
    test(`Response should be less than ${expectedMaxLength} characters`, async () => {
-      expect(response.length).toBeLessThan(expectedMaxLength);
+      expect(response.results.length).toBeLessThan(expectedMaxLength);
    });
  } else {
    test.skip(`API Key is not set`, () => {});
diff --git a/test/simple/cohere.test.js b/test/simple/cohere.test.js
index fa0550d..74bac73 100644
--- a/test/simple/cohere.test.js
+++ b/test/simple/cohere.test.js
@@ -9,7 +9,7 @@ const {
   simplePrompt,
   options,
   expectedMaxLength,
-} = require('../utils/defaults.js');
+} = require('../../src/utils/defaults.js');
 describe('Cohere Simple', () => {
   if (cohereApiKey) {
     let response;
@@ -22,11 +22,11 @@ describe('Cohere Simple', () => {
      response = await cohere.sendMessage(simplePrompt, options);
-      expect(typeof response).toBe('string');
+      expect(typeof response).toStrictEqual('object');
    }, 30000);
    test(`Response should be less than ${expectedMaxLength} characters`, async () => {
-      expect(response.length).toBeLessThan(expectedMaxLength);
+      expect(response.results.length).toBeLessThan(expectedMaxLength);
    });
  } else {
    test.skip(`API Key is not set`, () => {});
diff --git a/test/simple/gemini.test.js b/test/simple/gemini.test.js
index 1cba6ab..c3ee0c2 100644
--- a/test/simple/gemini.test.js
+++ b/test/simple/gemini.test.js
@@ -9,7 +9,7 @@ const {
   simplePrompt,
   options,
   expectedMaxLength,
-} = require('../utils/defaults.js');
+} = require('../../src/utils/defaults.js');
 describe('Gemini Simple', () => {
   if (geminiApiKey) {
     let response;
@@ -22,10 +22,10 @@ describe('Gemini Simple', () => {
      response = await gemini.sendMessage(simplePrompt, options);
-      expect(typeof response).toBe('string');
+      expect(typeof response).toStrictEqual('object');
    });
    test(`Response should be less than ${expectedMaxLength} characters`, async () => {
-      expect(response.length).toBeLessThan(expectedMaxLength);
+      expect(response.results.length).toBeLessThan(expectedMaxLength);
    });
  } else {
    test.skip(`API Key is not set`, () => {});
diff --git a/test/simple/goose.test.js b/test/simple/gooseai.test.js
similarity index 55%
rename from test/simple/goose.test.js
rename to test/simple/gooseai.test.js
index 96dfb29..022983e 100644
--- a/test/simple/goose.test.js
+++ b/test/simple/gooseai.test.js
@@ -3,29 +3,29 @@
  * @description Simplified tests for the Goose AI API client.
  */
-const Goose = require('../../src/interfaces/goose.js');
-const { gooseApiKey } = require('../../src/config/config.js');
+const GooseAI = require('../../src/interfaces/gooseai.js');
+const { gooseaiApiKey } = require('../../src/config/config.js');
 const {
   simplePrompt,
   options,
   expectedMaxLength,
-} = require('../utils/defaults.js');
-describe('Goose Simple', () => {
-  if (gooseApiKey) {
+} = require('../../src/utils/defaults.js');
+describe('GooseAI Simple', () => {
+  if (gooseaiApiKey) {
     let response;
     test('API Key should be set', async () => {
-      expect(typeof gooseApiKey).toBe('string');
+      expect(typeof gooseaiApiKey).toBe('string');
     });
     test('API Client should send a message and receive a response', async () => {
-      const goose = new Goose(gooseApiKey);
+      const goose = new GooseAI(gooseaiApiKey);
       response = await goose.sendMessage(simplePrompt, options);
-      expect(typeof response).toBe('string');
+      expect(typeof response).toStrictEqual('object');
     }, 30000);
     test(`Response should be less than ${expectedMaxLength} characters`, async () => {
-      expect(response.length).toBeLessThan(expectedMaxLength);
+      expect(response.results.length).toBeLessThan(expectedMaxLength);
     });
   } else {
     test.skip(`API Key is not set`, () => {});
diff --git a/test/simple/groq.test.js b/test/simple/groq.test.js
index 77b920e..3e63148 100644
--- a/test/simple/groq.test.js
+++ b/test/simple/groq.test.js
@@ -9,7 +9,7 @@ const {
   simplePrompt,
   options,
   expectedMaxLength,
-} = require('../utils/defaults.js');
+} = require('../../src/utils/defaults.js');
 describe('Groq Simple', () => {
   if (groqApiKey) {
     let response;
@@ -22,10 +22,10 @@ describe('Groq Simple', () => {
       response = await groq.sendMessage(simplePrompt, options);
-      expect(typeof response).toBe('string');
+      expect(typeof response).toStrictEqual('object');
     });
     test(`Response should be less than ${expectedMaxLength} characters`, async () => {
-      expect(response.length).toBeLessThan(expectedMaxLength);
+      expect(response.results.length).toBeLessThan(expectedMaxLength);
     });
   } else {
     test.skip(`API Key is not set`, () => {});
diff --git a/test/simple/huggingface.test.js b/test/simple/huggingface.test.js
index 67fa7aa..0ec4be9 100644
--- a/test/simple/huggingface.test.js
+++ b/test/simple/huggingface.test.js
@@ -9,7 +9,7 @@ const {
   simplePrompt,
   options,
   expectedMaxLength,
-} = require('../utils/defaults.js');
+} = require('../../src/utils/defaults.js');
 describe('HuggingFace Simple', () => {
   if (huggingfaceApiKey) {
     let response;
@@ -23,14 +23,14 @@ describe('HuggingFace Simple', () => {
       try {
         response = await huggingface.sendMessage(simplePrompt, options);
-        expect(typeof response).toBe('string');
+        expect(typeof response).toStrictEqual('object');
       } catch (error) {
         console.error('Test failed:', error);
         throw error;
       }
     }, 30000);
     test(`Response should be less than ${expectedMaxLength} characters`, async () => {
-      expect(response.length).toBeLessThan(expectedMaxLength);
+      expect(response.results.length).toBeLessThan(expectedMaxLength);
     });
   } else {
     test.skip(`API Key is not set`, () => {});
diff --git a/test/simple/llamacpp.test.js b/test/simple/llamacpp.test.js
index f8cc2b6..44474eb 100644
--- a/test/simple/llamacpp.test.js
+++ b/test/simple/llamacpp.test.js
@@ -9,7 +9,7 @@ const {
   simplePrompt,
   options,
   expectedMaxLength,
-} = require('../utils/defaults.js');
+} = require('../../src/utils/defaults.js');
 const axios = require('axios');
 describe('LlamaCPP Simple', () => {
   if (llamaURL) {
@@ -41,11 +41,11 @@ describe('LlamaCPP Simple', () => {
       response = await llamacpp.sendMessage(simplePrompt, options);
-      expect(typeof response).toBe('string');
+      expect(typeof response).toStrictEqual('object');
     }, 30000);
     test(`Response should be less than ${expectedMaxLength} characters`, async () => {
-      expect(response.length).toBeLessThan(expectedMaxLength);
+      expect(response.results.length).toBeLessThan(expectedMaxLength);
     });
   } else {
     test.skip(`API Key is not set`, () => {});
diff --git a/test/simple/mistral.test.js b/test/simple/mistral.test.js
deleted file mode 100644
index 91a5538..0000000
--- a/test/simple/mistral.test.js
+++ /dev/null
@@ -1,37 +0,0 @@
-/**
- * @file mistral.test.js
- * @description Simplified tests for the Mistral API client.
- */
-
-const Mistral = require('../../src/interfaces/mistral.js');
-const { mistralApiKey } = require('../../src/config/config.js');
-const {
-  simplePrompt,
-  options,
-  expectedMaxLength,
-} = require('../utils/defaults.js');
-describe('Mistral Simple', () => {
-  if (mistralApiKey) {
-    let response;
-    test('API Key should be set', async () => {
-      expect(typeof mistralApiKey).toBe('string');
-    });
-
-    test('API Client should send a message and receive a response', async () => {
-      const mistral = new Mistral(mistralApiKey);
-
-      try {
-        response = await mistral.sendMessage(simplePrompt, options);
-
-        expect(typeof response).toBe('string');
-      } catch (error) {
-        throw new Error(`Test failed: ${error}`);
-      }
-    }, 30000);
-    test(`Response should be less than ${expectedMaxLength} characters`, async () => {
-      expect(response.length).toBeLessThan(expectedMaxLength);
-    });
-  } else {
-    test.skip(`API Key is not set`, () => {});
-  }
-});
diff --git a/test/simple/mistralai.test.js b/test/simple/mistralai.test.js
new file mode 100644
index 0000000..de71ffa
--- /dev/null
+++ b/test/simple/mistralai.test.js
@@ -0,0 +1,37 @@
+/**
+ * @file mistralai.test.js
+ * @description Simplified tests for the MistralAI API client.
+ */
+
+const MistralAI = require('../../src/interfaces/mistralai.js');
+const { mistralaiApiKey } = require('../../src/config/config.js');
+const {
+  simplePrompt,
+  options,
+  expectedMaxLength,
+} = require('../../src/utils/defaults.js');
+describe('MistralAI Simple', () => {
+  if (mistralaiApiKey) {
+    let response;
+    test('API Key should be set', async () => {
+      expect(typeof mistralaiApiKey).toBe('string');
+    });
+
+    test('API Client should send a message and receive a response', async () => {
+      const mistralai = new MistralAI(mistralaiApiKey);
+
+      try {
+        response = await mistralai.sendMessage(simplePrompt, options);
+
+        expect(typeof response).toStrictEqual('object');
+      } catch (error) {
+        throw new Error(`Test failed: ${error}`);
+      }
+    }, 30000);
+    test(`Response should be less than ${expectedMaxLength} characters`, async () => {
+      expect(response.results.length).toBeLessThan(expectedMaxLength);
+    });
+  } else {
+    test.skip(`API Key is not set`, () => {});
+  }
+});
diff --git a/test/simple/openai.test.js b/test/simple/openai.test.js
index cd1c8fd..d6a124c 100644
--- a/test/simple/openai.test.js
+++ b/test/simple/openai.test.js
@@ -9,7 +9,7 @@ const {
   simplePrompt,
   options,
   expectedMaxLength,
-} = require('../utils/defaults.js');
+} = require('../../src/utils/defaults.js');
 describe('OpenAI Simple', () => {
   if (openaiApiKey) {
     let response;
@@ -21,10 +21,10 @@ describe('OpenAI Simple', () => {
       const openai = new OpenAI(openaiApiKey);
       response = await openai.sendMessage(simplePrompt, options);
-      expect(typeof response).toBe('string');
+      expect(typeof response).toStrictEqual('object');
     });
     test(`Response should be less than ${expectedMaxLength} characters`, async () => {
-      expect(response.length).toBeLessThan(expectedMaxLength);
+      expect(response.results.length).toBeLessThan(expectedMaxLength);
     });
   } else {
     test.skip(`API Key is not set`, () => {});
diff --git a/test/simple/perplexity.test.js b/test/simple/perplexity.test.js
index 250dc87..a8434fa 100644
--- a/test/simple/perplexity.test.js
+++ b/test/simple/perplexity.test.js
@@ -9,7 +9,7 @@ const {
   simplePrompt,
   options,
   expectedMaxLength,
-} = require('../utils/defaults.js');
+} = require('../../src/utils/defaults.js');
 describe('Perplexity Simple', () => {
   if (perplexityApiKey) {
     let response;
@@ -22,10 +22,10 @@ describe('Perplexity Simple', () => {
       response = await perplixity.sendMessage(simplePrompt, options);
-      expect(typeof response).toBe('string');
+      expect(typeof response).toStrictEqual('object');
     });
     test(`Response should be less than ${expectedMaxLength} characters`, async () => {
-      expect(response.length).toBeLessThan(expectedMaxLength);
+      expect(response.results.length).toBeLessThan(expectedMaxLength);
     });
   } else {
     test.skip(`API Key is not set`, () => {});
diff --git a/test/simple/reka.test.js b/test/simple/rekaai.test.js
similarity index 59%
rename from test/simple/reka.test.js
rename to test/simple/rekaai.test.js
index 64eb7e6..371db16 100644
--- a/test/simple/reka.test.js
+++ b/test/simple/rekaai.test.js
@@ -3,35 +3,35 @@
  * @description Simplified test for the Reka AI API client.
  */
-const Reka = require('../../src/interfaces/reka.js');
-const { rekaApiKey } = require('../../src/config/config.js');
+const RekaAI = require('../../src/interfaces/rekaai.js');
+const { rekaaiApiKey } = require('../../src/config/config.js');
 const {
   simplePrompt,
   options,
   expectedMaxLength,
-} = require('../utils/defaults.js');
-describe('Reka Simple', () => {
-  if (rekaApiKey) {
+} = require('../../src/utils/defaults.js');
+describe('RekaAI Simple', () => {
+  if (rekaaiApiKey) {
     let response;
     test('API Key should be set', async () => {
-      expect(typeof rekaApiKey).toBe('string');
+      expect(typeof rekaaiApiKey).toBe('string');
     });
     test('Client should send a message and receive a response', async () => {
-      const reka = new Reka(rekaApiKey);
+      const reka = new RekaAI(rekaaiApiKey);
       try {
         response = await reka.sendMessage(simplePrompt, options);
-        expect(typeof response).toBe('string');
+        expect(typeof response).toStrictEqual('object');
       } catch (error) {
         console.error('Test failed:', error);
         throw error;
       }
     }, 30000);
     test(`Response should be less than ${expectedMaxLength} characters`, async () => {
-      expect(response.length).toBeLessThan(expectedMaxLength);
+      expect(response.results.length).toBeLessThan(expectedMaxLength);
     });
   } else {
     test.skip(`API Key is not set`, () => {});
diff --git a/test/utils/utils.test.js b/test/utils/utils.test.js
new file mode 100644
index 0000000..1698c28
--- /dev/null
+++ b/test/utils/utils.test.js
@@ -0,0 +1,90 @@
+const {
+  returnMessageObject,
+  returnSimpleMessageObject,
+  returnModelByAlias,
+  parseJSON,
+} = require('../../src/utils/utils');
+const config = require('../../src/config/llmProviders.json');
+
+describe('Utils', () => {
+  describe('returnMessageObject', () => {
+    test('should return a message object with user and system messages', () => {
+      const message = 'Hello!';
+      const systemMessage = 'This is a system message.';
+      const expected = {
+        messages: [
+          { role: 'system', content: systemMessage },
+          { role: 'user', content: message },
+        ],
+      };
+      expect(returnMessageObject(message, systemMessage)).toEqual(expected);
+    });
+
+    test('should return a message object with a default system message', () => {
+      const message = 'Hello!';
+      const expected = {
+        messages: [
+          { role: 'system', content: 'You are a helpful assistant.' },
+          { role: 'user', content: message },
+        ],
+      };
+      expect(returnMessageObject(message)).toEqual(expected);
+    });
+  });
+
+  describe('returnSimpleMessageObject', () => {
+    test('should return a simple message object with the user message', () => {
+      const message = 'Hello!';
+      const expected = {
+        messages: [{ role: 'user', content: message }],
+      };
+      expect(returnSimpleMessageObject(message)).toEqual(expected);
+    });
+  });
+
+  describe('returnModelByAlias', () => {
+    test('should return the model name based on the provided alias', () => {
+      const provider = 'openai';
+      const modelAlias = 'default';
+      const expectedModelName = config[provider].model[modelAlias].name;
+      expect(returnModelByAlias(provider, modelAlias)).toEqual(
+        expectedModelName,
+      );
+    });
+
+    test('should return the model alias if the model name is not found', () => {
+      const provider = 'openai';
+      const modelAlias = 'nonexistent-model';
+      expect(returnModelByAlias(provider, modelAlias)).toEqual(modelAlias);
+    });
+
+    test('should return the model alias if the provider is not found', () => {
+      const provider = 'nonexistent-provider';
+      const modelAlias = 'gpt-3';
+      expect(returnModelByAlias(provider, modelAlias)).toEqual(modelAlias);
+    });
+  });
+
+  describe('parseJSON', () => {
+    test('should parse JSON string correctly', async () => {
+      const jsonString = '{"name": "John"}';
+      const expected = { name: 'John' };
+      await expect(parseJSON(jsonString, false)).resolves.toStrictEqual(
+        expected,
+      );
+    });
+
+    test('should repair and parse invalid JSON string if attemptRepair is true', async () => {
+      const jsonString = "{name: 'John'}";
+      const expected = { name: 'John' };
+      await expect(parseJSON(jsonString, true)).resolves.toStrictEqual(
+        expected,
+      );
+    });
+
+    test('should return null for invalid JSON string if attemptRepair is false', async () => {
+      const jsonString = '{name';
+      await expect(parseJSON(jsonString, false)).resolves.toBeNull();
+    });
+  });
+});
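Taken together, the test changes above pin down the v2.0.0 calling conventions: every interface resolves to a `{ results: ... }` object (and the cache now stores that same object), Cloudflare AI passes its credential as an `[apiKey, accountId]` pair, and malformed JSON can be routed through `jsonrepair`. A minimal sketch of the Cloudflare AI call shape implied by `llmInterfaceSendMessage.test.js` follows; the environment variable names are illustrative assumptions, not names the library prescribes:

```javascript
const { LLMInterfaceSendMessage } = require('llm-interface');

// Cloudflare AI is the one provider keyed by two credentials, so the second
// argument becomes an [apiKey, accountId] pair instead of a bare key string.
// The env var names here are assumptions for the example.
LLMInterfaceSendMessage(
  'cloudflareai',
  [process.env.CLOUDFLAREAI_API_KEY, process.env.CLOUDFLAREAI_ACCOUNT_ID],
  'Explain the importance of low latency LLMs.',
  { max_tokens: 150 },
)
  .then((response) => {
    // v2.0.0 breaking change: the payload is an object, not a string; the
    // text lives under `results`, mirroring what the cache tests expect
    // saveToCache to store.
    console.log(response.results);
  })
  .catch((error) => {
    console.error(error);
  });
```

`openai.jsonrepair.test.js` exercises the repair path by passing a third `interfaceOptions` argument to the interface-level `sendMessage`. A sketch of that usage, assuming the same in-repo require path the tests use (the interface classes are not documented as a public entry point):

```javascript
const OpenAI = require('./src/interfaces/openai.js'); // in-repo path, as in the tests

const openai = new OpenAI(process.env.OPENAI_API_KEY);

openai
  .sendMessage(
    {
      model: 'gpt-3.5-turbo',
      messages: [
        { role: 'system', content: 'You are a helpful assistant.' },
        {
          role: 'user',
          content:
            'List three LLM providers. Return the results as a JSON object.',
        },
      ],
    },
    { max_tokens: 150, response_format: 'json_object' },
    // interfaceOptions: run near-JSON model output through jsonrepair
    // before it is parsed, instead of failing on invalid JSON.
    { attemptJsonRepair: true },
  )
  .then((response) => console.log(response.results));
```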