ollama: unvendor llama.cpp #323672

Open
SomeoneSerge opened this issue Jun 30, 2024 · 0 comments

Issue description

Ollama vendors a copy of llama.cpp as a git submodule: https://github.com/ollama/ollama/tree/main/llm. llama.cpp is built inside the `buildGoModule` derivation, apparently during `go generate`. This significantly increases build times when iterating on the ollama derivation, because the submodule has to be rebuilt every time the surrounding ollama package changes. It also makes it much harder to configure (patch, tweak) the llama.cpp that ollama uses. Overall, I think we want less vendoring in Nixpkgs and fewer "build inside a build" situations.

It is desirable to refactor the ollama expression to reuse the prebuilt llama-cpp package from Nixpkgs.
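As a rough illustration of what "unvendoring" could look like, the ollama derivation might take `llama-cpp` as an ordinary function argument instead of compiling the submodule during `go generate`. This is only a sketch under assumptions: the argument name, the use of `buildInputs`, and the idea that ollama can link against an external llama.cpp at all are hypothetical, not the current state of either package.

```nix
# Hypothetical sketch of an unvendored ollama expression.
# The `llama-cpp` argument and `buildInputs` wiring are assumptions;
# the real refactor would depend on how ollama's build locates llama.cpp.
{ lib, buildGoModule, fetchFromGitHub, llama-cpp }:

buildGoModule {
  pname = "ollama";
  version = "...";  # elided

  src = fetchFromGitHub {
    owner = "ollama";
    repo = "ollama";
    # rev and hash elided; note: no fetchSubmodules needed anymore
    rev = "...";
    hash = "...";
  };

  vendorHash = "...";  # elided

  # Link against the prebuilt llama-cpp from Nixpkgs instead of
  # rebuilding the vendored submodule on every ollama change.
  buildInputs = [ llama-cpp ];
}
```

With this shape, tweaking llama.cpp would become an ordinary override, e.g. `ollama.override { llama-cpp = llama-cpp.override { cudaSupport = true; }; }` (again assuming such an override flag exists), rather than patching a submodule inside another package's build.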

EDIT: A detailed overview of the current situation is provided by @abysssol at #323249 (comment)

CC @abysssol @dit7ya @elohmeier @RoyDubnium

Historical context

This is a follow-up to #323249 and https://github.com/NixOS/nixpkgs/pull/323056/files#diff-99616cb74ed708e718f689a9fc8f62999fdff6164590233bbb1a81a0f8f8c08b, aiming to make Nixpkgs' ollama nicer and easier to maintain.
