
# Build your own Copilot

This tutorial helps you build your own local Copilot with the CodeLlama model.

## Prepare the environment

### Install CUDA

First, make sure you have installed the NVIDIA driver and CUDA Toolkit according to the Prepare the CUDA environment in AWS G5 instances under Ubuntu 24.04 article.
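
Before moving on, it is worth verifying the setup. The following is a minimal sanity check, assuming the driver and toolkit were installed as described in that article:

```bash
# Confirm the NVIDIA driver can see the GPU
nvidia-smi

# Confirm the CUDA Toolkit compiler is installed
nvcc --version
```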

### Install Ollama

See how to Setup Ollama.
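
By default, Ollama serves an HTTP API on port 11434, which Twinny will talk to later. A quick check that the server is up:

```bash
# The Ollama server listens on port 11434 by default
curl http://localhost:11434
# It should reply with: Ollama is running
```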

### Install models

```bash
ollama run codellama:7b-instruct
ollama run codellama:7b-code
```

You may have noticed that we installed two models: codellama:7b-code is for auto-complete, and codellama:7b-instruct is for chat. We will show you how to use them later.
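
To confirm that both models were downloaded, list what Ollama has locally:

```bash
# Both codellama variants should appear in the output
ollama list
```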

### Install VSCode

```bash
sudo apt install code
```

Or you can download and install it manually.
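
Note that the `code` package is served from Microsoft's apt repository, so the apt command above assumes that repository has already been added. Either way, you can verify the installation afterwards:

```bash
# Print the version to confirm VSCode is installed and on your PATH
code --version
```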

### Install Twinny

See this picture for how to install Twinny:

*(Screenshot: installing the Twinny extension)*

## Configure Twinny

See the picture below for how to configure Twinny:

*(Screenshot: Twinny configuration)*

**Auto-complete**

- Hostname: `localhost`
- Port: `11434`
- Path: `/api/generate`
- Model Name: `codellama:7b-code`
- FIM Template: `codellama`

**Chat**

- Hostname: `localhost`
- Port: `11434`
- Path: `/v1/chat/completions`
- Model Name: `codellama:7b-instruct`
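
Before testing inside VSCode, you can hit both endpoints directly to confirm Ollama answers on the exact paths Twinny is configured to use. This is a minimal sketch; the prompt and message contents are just placeholders:

```bash
# Auto-complete endpoint (Ollama's native generate API)
curl http://localhost:11434/api/generate -d '{
  "model": "codellama:7b-code",
  "prompt": "def fibonacci(n):",
  "stream": false
}'

# Chat endpoint (Ollama's OpenAI-compatible API)
curl http://localhost:11434/v1/chat/completions -d '{
  "model": "codellama:7b-instruct",
  "messages": [{"role": "user", "content": "Write hello world in Python."}]
}'
```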

## Test Twinny

Code completion:

*(Screenshot: Twinny code completion)*

Chat:

*(Screenshot: Twinny chat)*