Can gptel use the model loaded by GPT4All desktop by default? #123
-
In playing with the GPT4All capability, I am simply running the GPT4All desktop client with the API server enabled. When GPT4All starts, it loads the default model configured in its settings.
When I run gptel against that server, I often get no usable response. I guess my issue is mainly that gptel fails to load the model and thus can't get responses. So, can gptel use the model already loaded by GPT4All desktop, given that they are the same, or should I be looking for a different solution?
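For context, I'm pointing gptel at the local server with a backend definition along these lines (sketch only; the port and model file name are placeholders and have to match what GPT4All's server settings actually report):

```elisp
;; Sketch of a GPT4All backend for gptel.  GPT4All's API server listens
;; on port 4891 by default; the model file name is a placeholder and
;; must match the model the desktop client has loaded.  (Depending on
;; the gptel version, model names are given as strings or as symbols.)
(setq gptel-backend
      (gptel-make-gpt4all "GPT4All"
        :protocol "http"
        :host "localhost:4891"
        :models '("mistral-7b-openorca.Q4_0.gguf"))
      gptel-model "mistral-7b-openorca.Q4_0.gguf")
```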
-
Sorry, I don't understand your description. Do you want gptel to dynamically load the model that is currently active in GPT4All?
-
Rather than having gptel load the model, I wonder if it can use the already loaded one.
When I launch GPT4All desktop, it loads whatever its default model is. I don't have enough VRAM for gptel to load the same or a different model concurrently, and I don't see how to make GPT4All desktop avoid loading a model by default.
Ah, I see the confusion. gptel does not "load" anything -- it simply makes HTTP requests to GPT4All's API. gptel is basically an HTTP client/frontend for curl. There is only one model loaded into memory, and it is handled by GPT4All.
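To make that concrete, here is a rough illustration (plain url.el, not gptel's actual code) of the kind of request that gets sent; the port, endpoint, and model name are assumptions based on GPT4All's default OpenAI-compatible server and should be adjusted to your setup:

```elisp
;; Illustration only: an HTTP POST like the one gptel makes, written
;; with plain url.el.  Nothing here loads a model -- the already-running
;; GPT4All process does all the work and answers over HTTP.
(require 'url)
(require 'url-http)
(require 'json)

(let ((url-request-method "POST")
      (url-request-extra-headers '(("Content-Type" . "application/json")))
      (url-request-data
       (json-encode
        '((model . "mistral-7b-openorca.Q4_0.gguf")
          (messages . [((role . "user") (content . "Hello!"))])))))
  (with-current-buffer
      (url-retrieve-synchronously "http://localhost:4891/v1/chat/completions")
    (goto-char url-http-end-of-headers)
    (json-read)))
```

If a request like this returns a completion while gptel doesn't, the problem is in the gptel backend configuration rather than in GPT4All.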
-
Using GPT4All's REST API takes more memory (according to the description shown when you hover over the button).
Can you check whether this also happens with a smaller model?