Inference Llama 2 in one file of pure C
A fine-tuned LLama-2 7B that provides specialized assistance in immigration law using a large language model.
This project focuses on fine-tuning the powerful Llama2 language model and deploying it on AWS SageMaker
A repository for groups C (Relation Extraction) and D (Concept Linking)
🤖 DataSciencePilot 🚀 is an innovative chat-based interface designed to interact with custom PDF files. It leverages the power of Pinecone for efficient vector database management and LLaMA-2 for advanced query response capabilities.
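The retrieval step such a PDF chat interface relies on can be sketched in plain Python. This is a simplified stand-in for Pinecone's similarity search, not the project's actual code; the toy 3-dimensional "embeddings" and document ids are invented for illustration:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def top_k(query_vec, index, k=2):
    """Return the ids of the k documents most similar to the query.
    `index` maps doc id -> embedding vector (a stand-in for a vector DB)."""
    scored = sorted(index.items(),
                    key=lambda item: cosine_similarity(query_vec, item[1]),
                    reverse=True)
    return [doc_id for doc_id, _ in scored[:k]]

# Hypothetical per-page embeddings of an uploaded PDF.
index = {
    "pdf_page_1": [0.9, 0.1, 0.0],
    "pdf_page_2": [0.0, 1.0, 0.2],
    "pdf_page_3": [0.8, 0.2, 0.1],
}
print(top_k([1.0, 0.0, 0.0], index, k=2))
```

In a real deployment the retrieved pages would then be concatenated into the LLaMA-2 prompt as context for the query response.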
Article generator made using Python, Llama2, LangChain, Streamlit, and Pexels API.
C++ Implementation of Meta's LLaMA v2 Engine. Credited to ggerganov/llama.cpp
The Llama-2-GGML-CSV-Chatbot is a conversational tool leveraging the powerful Llama-2 7B language model. It facilitates multi-turn interactions based on uploaded CSV data, allowing users to engage in seamless conversations.
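The multi-turn behaviour described above can be sketched with a rolling history buffer. This is a minimal illustration under stated assumptions, not the chatbot's implementation; `llm` is a hypothetical callable standing in for the Llama-2 7B model:

```python
class ChatSession:
    """Keeps a rolling transcript so each prompt carries prior turns."""

    def __init__(self, max_turns=10):
        self.history = []          # list of (role, text) pairs
        self.max_turns = max_turns

    def build_prompt(self, user_msg):
        # Keep only the most recent turns to stay within the context window.
        turns = self.history[-self.max_turns:]
        lines = [f"{role}: {text}" for role, text in turns]
        lines.append(f"user: {user_msg}")
        return "\n".join(lines)

    def ask(self, user_msg, llm):
        prompt = self.build_prompt(user_msg)
        reply = llm(prompt)        # hypothetical call into the model
        self.history.append(("user", user_msg))
        self.history.append(("assistant", reply))
        return reply

# Toy "model" that echoes the last prompt line, for illustration only.
echo = lambda prompt: "echo:" + prompt.splitlines()[-1]
session = ChatSession()
session.ask("What is in row 3 of the CSV?", echo)
session.ask("And row 4?", echo)
```

Truncating to `max_turns` is the design choice that keeps long conversations from overflowing the model's context window.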
AI-powered chatbot using Llama2 and Chainlit, offering comprehensive knowledge on Buddhism from an extensive encyclopedia.
Llama.cpp is a C++ library for the efficient implementation of large language models such as Meta's LLaMA. Optimized to run on a wide range of platforms, including resource-constrained devices, it delivers the performance, inference speed, and memory efficiency essential for running large models.
Research done at Prof. Hung's lab at MBZUAI.
A LLaMA2-7B chatbot with memory running on CPU, optimized using smooth quantization, 4-bit quantization, or Intel® Extension for PyTorch with bfloat16.
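The quantization idea behind these optimizations can be illustrated with a toy symmetric int8 quantizer in pure Python. This is a conceptual sketch, not the repo's actual code (real implementations quantize tensors per-channel and run the matmuls in integer arithmetic):

```python
def quantize_int8(values):
    """Symmetric int8 quantization: map floats into [-127, 127] with one scale."""
    scale = max(abs(v) for v in values) / 127.0
    q = [round(v / scale) for v in values]   # integers stored in 8 bits
    return q, scale

def dequantize(q, scale):
    """Recover approximate float values from the int8 codes."""
    return [x * scale for x in q]

# Toy weight vector for illustration only.
weights = [0.5, -1.27, 0.0, 0.9]
q, scale = quantize_int8(weights)
approx = dequantize(q, scale)
```

Storing `q` instead of `weights` cuts memory 4x versus float32, at the cost of the small rounding error visible in `approx`; bfloat16 execution is a separate trade, halving memory while keeping float dynamic range.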