diff --git a/README.md b/README.md
index 2b1036e1..5fbf649c 100644
--- a/README.md
+++ b/README.md
@@ -10,6 +10,8 @@ Slash Your LLM API Costs by 10x 💰, Boost Speed by 100x ⚡
 
 This project is undergoing swift development, and as such, the API may be subject to change at any time.
 
+If you're looking for the **most recent** updates to GPTCache, please refer to the [release note](https://github.com/zilliztech/GPTCache/blob/main/docs/release_note.md).
+
 ## Quick Install
 
 `pip install gptcache`
diff --git a/docs/release_note.md b/docs/release_note.md
index 75e2bd2c..e3b628e7 100644
--- a/docs/release_note.md
+++ b/docs/release_note.md
@@ -5,6 +5,37 @@ To read the following content, you need to understand the basic use of GPTCache,
 - [Readme doc](https://github.com/zilliztech/GPTCache)
 - [Usage doc](https://github.com/zilliztech/GPTCache/blob/main/docs/usage.md)
 
+## v0.1.13 (2023.4.16)
+
+1. Add openai audio adapter (**experimental**)
+
+```python
+cache.init(pre_embedding_func=get_file_bytes)
+
+openai.Audio.transcribe(
+    model="whisper-1",
+    file=audio_file
+)
+```
+
+2. Improve data eviction implementation
+
+The eviction logic has been reworked so that users can eventually plug in custom eviction backends such as Redis or Memcached. For now, the default (and only supported) backend is cachetools, which provides an in-memory cache; other backends may be added later.
+
+## v0.1.12 (2023.4.15)
+
+1. The LLM request can customize the top_k search parameter
+
+```python
+openai.ChatCompletion.create(
+    model="gpt-3.5-turbo",
+    messages=[
+        {"role": "user", "content": question},
+    ],
+    top_k=10,
+)
+```
+
 ## v0.1.11 (2023.4.14)
 
 1. Add openai complete adapter
diff --git a/setup.py b/setup.py
index f133c6da..7095843c 100644
--- a/setup.py
+++ b/setup.py
@@ -17,7 +17,7 @@ def parse_requirements(file_name: str) -> List[str]:
 setuptools.setup(
     name="gptcache",
     packages=find_packages(),
-    version="0.1.12",
+    version="0.1.13",
     author="SimFG",
     author_email="bang.fu@zilliz.com",
     description="GPTCache, a powerful caching library that can be used to speed up and lower the cost of chat "
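
For context on the v0.1.13 audio snippet above, here is a minimal end-to-end sketch. It assumes `get_file_bytes` is importable from `gptcache.processor.pre`, that `cache.set_openai_key()` reads `OPENAI_API_KEY` from the environment, and that `hello.mp3` is a placeholder file; treat it as an illustration rather than the canonical usage.

```python
# Minimal sketch of the experimental audio adapter (v0.1.13).
# Assumptions: get_file_bytes is exported from gptcache.processor.pre,
# OPENAI_API_KEY is set in the environment, and "hello.mp3" is a placeholder file.
from gptcache import cache
from gptcache.adapter import openai                 # GPTCache's drop-in replacement for the openai client
from gptcache.processor.pre import get_file_bytes   # builds the cache key from the raw file bytes

cache.init(pre_embedding_func=get_file_bytes)
cache.set_openai_key()

with open("hello.mp3", "rb") as audio_file:
    transcript = openai.Audio.transcribe(
        model="whisper-1",
        file=audio_file,
    )
print(transcript)
```

Repeating the call with the same file should then be answered from the cache instead of hitting the OpenAI API.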
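On the eviction change: the sketch below is not GPTCache's internal API, just a plain `cachetools` example showing what the default in-memory backend does conceptually: a bounded store that evicts the least recently used entry once `maxsize` is exceeded.

```python
# Plain cachetools illustration (not GPTCache internals): a bounded in-memory
# store that evicts the least recently used entry when maxsize is exceeded.
from cachetools import LRUCache

store = LRUCache(maxsize=2)
store["q1"] = "answer 1"
store["q2"] = "answer 2"
store["q3"] = "answer 3"     # "q1" is the least recently used entry, so it is evicted
print(list(store))           # ['q2', 'q3']
```

A Redis- or Memcached-backed eviction layer would expose the same bounded-store behavior, just outside the process's own memory.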