Commit 2c150a0 — modify readme
Xu Liu authored and Xu Liu committed Mar 31, 2024
1 parent d1d35e0 commit 2c150a0
Showing 2 changed files with 2 additions and 2 deletions.
2 changes: 1 addition & 1 deletion README.md
@@ -19,7 +19,7 @@
## Usage
- Install Chatglm.cpp following [Chatglm.cpp](chatglm.cpp.md)
- Download the ChatGLM3 6B-4bit [model](https://huggingface.co/Xorbits/chatglm3-6B-GGML)
- - Install whisper.cpp(https://github.com/ggerganov/whisper.cpp); prefer building with BLAS to speed up inference
+ - Install whisper.cpp (https://github.com/ggerganov/whisper.cpp); prefer building with BLAS to speed up inference
- Install the dependencies
`pip install -r requirements.txt`
- Modify the model storage path and run the script
2 changes: 1 addition & 1 deletion README_en.md
@@ -20,7 +20,7 @@ Tested on M2 16G MacBook Air / (4070Ti) + 13600KF Ubuntu / (4070Ti) + 13600KF Wi
## Usage
1. Follow the README of [Chatglm.cpp](chatglm.cpp.md) to install chatglm.cpp
2. Download ChatGLM3 6B-4bit [model](https://huggingface.co/Xorbits/chatglm3-6B-GGML)
- 3. Install whisper.cpp(https://github.com/ggerganov/whisper.cpp)compile with BLAS / CUBLAS can speed up the inference process
+ 3. Install [whisper.cpp](https://github.com/ggerganov/whisper.cpp); compiling with BLAS / CUBLAS can speed up inference
4. Install requirements
`pip install -r requirements.txt`
5. Modify the model path of the script and run it:
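The installation steps above can be sketched as a shell session. This is a minimal sketch, not from the repo itself: the exact whisper.cpp build flag varies by version (older releases use `WHISPER_OPENBLAS=1`, newer ones `GGML_BLAS=1`), and the entry-point script name here is hypothetical.

```shell
# Build whisper.cpp with BLAS acceleration
# (flag name depends on the whisper.cpp version; check its README)
git clone https://github.com/ggerganov/whisper.cpp
cd whisper.cpp
make GGML_BLAS=1
cd ..

# Install this project's Python dependencies
pip install -r requirements.txt

# Edit the model path inside the script, then run it
# (main.py is a hypothetical name; use the repo's actual entry point)
python main.py
```

Building with plain `make` also works; BLAS/cuBLAS only accelerates inference and is optional.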