Support Ollama LLM model pulling #988
Comments
This would be a sick feature.
I will try implementing it soon.
Noticed that there is a tag along with the model name in the $ ollama list output:

NAME             ID           SIZE   MODIFIED
moondream:latest 55fc3abd3867 1.7 GB 5 minutes ago
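Since the NAME column already carries the tag, a pull over that column fetches exactly the installed variant. As a small aside, the two parts can be separated with plain POSIX parameter expansion; this is just an illustrative sketch, not anything Topgrade itself does:

```shell
# Sketch: the NAME column from `ollama list` is "model:tag";
# POSIX parameter expansion splits it without external tools.
name='moondream:latest'
model=${name%%:*}   # everything before the first ':'
tag=${name#*:}      # everything after the first ':'
echo "model=$model tag=$tag"
# prints: model=moondream tag=latest
```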
@SteveLauC If you run the
Thanks for the explanation. I will pull the model with the tag included so that Topgrade won't do anything users do not want.
Implemented in #1001. Does anyone want to give it a test? I can provide Linux/x86_64 and macOS/arm64 binaries. For the macOS build, I am not sure whether it can be executed on other Macs; it seems the binary needs to be signed.
Here is what it looks like on my machine:

$ ./target/debug/topgrade --only ollama
── 18:51:48 - Sudo ─────────────────────────────────────────────────────────────
── 18:51:48 - Ollama ───────────────────────────────────────────────────────────
Pulling model 'gemma2:2b'
pulling manifest
pulling 7462734796d6... 100% ▕██████████████████████████████████████████████████████████████████████████████████████▏ 1.6 GB
pulling e0a42594d802... 100% ▕██████████████████████████████████████████████████████████████████████████████████████▏ 358 B
pulling 097a36493f71... 100% ▕██████████████████████████████████████████████████████████████████████████████████████▏ 8.4 KB
pulling 2490e7468436... 100% ▕██████████████████████████████████████████████████████████████████████████████████████▏ 65 B
pulling e18ad7af7efb... 100% ▕██████████████████████████████████████████████████████████████████████████████████████▏ 487 B
verifying sha256 digest
writing manifest
success
Pulling model 'moondream:latest'
pulling manifest
pulling e554c6b9de01... 100% ▕██████████████████████████████████████████████████████████████████████████████████████▏ 828 MB
pulling 4cc1cb3660d8... 100% ▕██████████████████████████████████████████████████████████████████████████████████████▏ 909 MB
pulling c71d239df917... 100% ▕██████████████████████████████████████████████████████████████████████████████████████▏ 11 KB
pulling 4b021a3b4b4a... 100% ▕██████████████████████████████████████████████████████████████████████████████████████▏ 77 B
pulling 9468773bdc1f... 100% ▕██████████████████████████████████████████████████████████████████████████████████████▏ 65 B
pulling ba5fbb481ada... 100% ▕██████████████████████████████████████████████████████████████████████████████████████▏ 562 B
verifying sha256 digest
writing manifest
success
── 18:51:51 - Summary ──────────────────────────────────────────────────────────
Ollama: OK
Looks great :) What's the quickest way for me to test? I can test the Mac ARM64 side. 😄 Should I just check out your branch and build it myself?
If you have a Rust toolchain set up:

$ git clone https://github.com/SteveLauC/topgrade.git
$ cd topgrade
$ cargo build
$ ./target/debug/topgrade --only ollama

If not, I can build it for you and upload it here: https://github.com/SteveLauC/topgrade/releases
My local environment is currently not set up for Rust development, so I would be happy to get a binary from your side. ;) We will see if it works out with the signing problem. ;)
Thanks for sharing. Everything works like a charm with the binary you provided if only using pulled models. 😄 I get the same output as you shared above.

However, we have an issue when models are built locally. Just like with Docker and

The problem now is that the command

Unfortunately

However, I am not 100% sure the effort is justified. Most users most likely only use downloaded models, so this might be the kind of edge case I described here.

We still need to make sure the remaining models are pulled after a "pull error" occurred. I don't know whether topgrade stops execution immediately after an error or not. In my case the failing model pull was executed last, so I couldn't reproduce this edge case.
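One way to handle the "keep going after a pull error" concern is to record each failure and continue the loop instead of aborting. The following is only a sketch of that pattern; fake_pull is a hypothetical stand-in for ollama pull (it fails for the made-up locally built model local:custom), so the snippet runs without Ollama installed:

```shell
# Sketch: keep pulling the remaining models after one fails.
# fake_pull is a hypothetical stand-in for `ollama pull`; it fails
# for 'local:custom' to simulate a locally built model.
fake_pull() {
    if [ "$1" = "local:custom" ]; then
        return 1
    fi
    echo "pulled $1"
}

failures=0
for model in gemma2:2b local:custom moondream:latest; do
    if ! fake_pull "$model"; then
        echo "warning: failed to pull $model" >&2
        failures=$((failures + 1))
    fi
done
echo "done, $failures failure(s)"
# prints "done, 1 failure(s)" after pulling the other two models
```

In real use you would swap fake_pull for ollama pull; the loop structure is what guarantees one failing model does not stop the rest.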
I want to suggest a new step
Which tool is this about? Where is its repository?
I want topgrade to support ollama, a tool for managing LLMs locally, see https://github.com/ollama/ollama. Topgrade should invoke model pulling to update all locally installed models.
Which operating systems are supported by this tool?
Windows, macOS, Linux
What should Topgrade do to figure out if the tool needs to be invoked?
If ollama is found in PATH, it should be invoked; this also has to work when ollama was installed through package managers like Homebrew.

Which exact commands should Topgrade run?
I am currently running this through a custom command:

ollama list | awk 'NR>1 && !/reviewer/ {system("ollama pull "$1)}'

This pulls all new model files for the currently installed models. I am not sure if this command works OS-independently.
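The awk system() call is where portability and quoting get fragile; piping the names into a plain POSIX shell loop does the same job with less escaping. A sketch, with echo_pull as a hypothetical stand-in for ollama pull and a hard-coded sample of ollama list output so it runs anywhere:

```shell
# Sketch: same idea as the awk one-liner, but a POSIX `while read` loop
# replaces awk's system(), avoiding nested-quote escaping.
# echo_pull is a stand-in for `ollama pull`; the printf lines mimic
# `ollama list` output (header row plus one model per line).
echo_pull() { echo "ollama pull $1"; }

printf '%s\n' \
    'NAME ID SIZE MODIFIED' \
    'gemma2:2b 7462734796d6 1.6 GB 2 days ago' \
    'moondream:latest 55fc3abd3867 1.7 GB 5 minutes ago' |
awk 'NR>1 {print $1}' |
while IFS= read -r model; do
    echo_pull "$model"
done
# prints:
#   ollama pull gemma2:2b
#   ollama pull moondream:latest
```

Against a live install this would be ollama list | awk 'NR>1 {print $1}' | while read -r m; do ollama pull "$m"; done, which only assumes awk and a POSIX shell.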
Does it have a --dry-run option? I.e., print what should be done and exit.

No dry run option available.
Does it need the user to confirm the execution? And does it provide a --yes option to skip this step?
No user confirmation needed
More information
To test this, a model should be downloaded on the local machine first, e.g. via $ ollama run qwen2.5-coder:0.5b.

When topgrade invokes ollama, it should show an output similar to