diff --git a/README.md b/README.md
index 06e12c6..692f5ed 100644
--- a/README.md
+++ b/README.md
@@ -19,7 +19,9 @@ ephemeral conversation storage.
 ## Getting Started
 
 ### Prerequisites for Google Gemini API
-- Get a [Google Gemini AI API key here](https://makersuite.google.com/app/apikey) and add it to Secrets in your Repl with the key GOOGLE_API_KEY.
+
+- Get a [Google Gemini AI API key here](https://makersuite.google.com/app/apikey) and add it to Secrets in your Repl
+  with the key GOOGLE_API_KEY.
 
 ### Prerequisites for langchain-gemini-api
 
@@ -68,7 +70,9 @@ ephemeral conversation storage.
    cd langchain-gemini-api
    pip install -r requirements.txt
    ```
+
 ## Configuration
+
 1. Create a `.env` file in the project directory and add the following:
    ```bash
    GEMINI_API_KEY= # e.g. "1234567890"
@@ -89,28 +93,28 @@ The API will be available at http://localhost:8000.
 
 ## API Endpoints
 
-- **/conversations/**: Endpoint for text-based conversations.
-- **/vision-conversations/**: Endpoint for image-based conversations.
+- **/conversations/**: Endpoint for text- and image-based conversations.
 - **/delete/**: Endpoint to delete conversation data.
 
 ## API Documentation
+
 - **/docs**: [Swagger UI for API documentation](http://127.0.0.1:8000/docs).
 - **/redoc**: [ReDoc UI for API documentation](http://127.0.0.1:8000/redoc).
 
 ## Acknowledgments
+
 - Google Gemini API team
 - [Langchain contributors](https://python.langchain.com/docs/integrations/chat/google_generative_ai)
 - FastAPI community
 
 ## Demo
 
-### Image-Based Conversation
-
-- Coming soon
-
-### Text-Based Conversation
+### Conversation
 
+#### Try it with the Telegram bot: https://t.me/rim_nova_bot
+#### Telegram bot repository: [Gemini-Telegram-Bot](https://github.com/shamspias/gemini-telegram-bot)
 
-- Coming soon
+![Conversation](/media/images/conversation_1.png)
+![Conversation](/media/images/conversation_2.png)
 
 ### Video
diff --git a/app/utils/message_handler.py b/app/utils/message_handler.py
index f264069..99c73d4 100644
--- a/app/utils/message_handler.py
+++ b/app/utils/message_handler.py
@@ -30,6 +30,10 @@ async def send_message_async(
             raise
 
     async def flush_conversation_cache(self, project_id: str):
+        # Clear the in-memory LLM conversation history before flushing the cached data
+        llm_manager = GeminiLLMManager()
+        history = llm_manager.create_or_get_memory(project_id)
+        history.clear()
         await self.cache_manager.flush_conversation_cache(project_id)
 
     async def save_conversation_config(self, project_id: str, config: Dict):
diff --git a/media/images/conversation_1.png b/media/images/conversation_1.png
new file mode 100644
index 0000000..4c1000d
Binary files /dev/null and b/media/images/conversation_1.png differ
diff --git a/media/images/conversation_2.png b/media/images/conversation_2.png
new file mode 100644
index 0000000..4302b54
Binary files /dev/null and b/media/images/conversation_2.png differ
diff --git a/test/endpoints/delete.http b/test/endpoints/delete.http
index a2d81a9..6c12c28 100644
--- a/test/endpoints/delete.http
+++ b/test/endpoints/delete.http
@@ -1,11 +1,4 @@
 # Test your FastAPI endpoints
 
-GET http://127.0.0.1:8000/
-Accept: application/json
-
-###
-
-GET http://127.0.0.1:8000/hello/User
-Accept: application/json
-
-###
+GET http://127.0.0.1:8000/api/v1/delete/910739946
+Accept: application/json
\ No newline at end of file
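For context on the `message_handler.py` hunk above, here is a minimal, self-contained Python sketch of the same flush pattern: clear the in-process LLM conversation memory first, then flush the persisted conversation cache, keyed by `project_id`. `GeminiLLMManager`, `create_or_get_memory`, and the cache manager belong to this repo; the stand-in classes below are hypothetical placeholders added only so the sketch runs on its own.

```python
# Sketch of the flush flow, assuming create_or_get_memory() returns a
# Langchain-style memory object with a .clear() method and the cache manager
# stores conversation data per project_id. Stand-in classes are illustrative.

import asyncio
from typing import Dict, List


class InMemoryHistory:
    """Stand-in for the conversation memory returned by create_or_get_memory()."""

    def __init__(self) -> None:
        self.messages: List[str] = []

    def clear(self) -> None:
        self.messages.clear()


class FakeCacheManager:
    """Stand-in for the cache manager that persists conversations by project_id."""

    def __init__(self) -> None:
        self._store: Dict[str, List[str]] = {}

    async def flush_conversation_cache(self, project_id: str) -> None:
        self._store.pop(project_id, None)


async def flush_conversation(project_id: str,
                             memory: InMemoryHistory,
                             cache: FakeCacheManager) -> None:
    # Same two-step order as the patch: drop the in-process history first,
    # then flush the persisted cache so no stale context survives a delete.
    memory.clear()
    await cache.flush_conversation_cache(project_id)


if __name__ == "__main__":
    asyncio.run(flush_conversation("910739946", InMemoryHistory(), FakeCacheManager()))
```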