gptel is a simple Large Language Model chat client for Emacs, with support for multiple models and backends. It works in the spirit of Emacs, available at any time and uniformly in any buffer.
LLM Backend | Supports | Requires |
---|---|---|
ChatGPT | ✓ | API key |
Anthropic (Claude) | ✓ | API key |
Gemini | ✓ | API key |
Ollama | ✓ | Ollama running locally |
Llama.cpp | ✓ | Llama.cpp running locally |
Llamafile | ✓ | Local Llamafile server |
GPT4All | ✓ | GPT4All running locally |
Kagi FastGPT | ✓ | API key |
Kagi Summarizer | ✓ | API key |
Azure | ✓ | Deployment and API key |
Groq | ✓ | API key |
Perplexity | ✓ | API key |
OpenRouter | ✓ | API key |
together.ai | ✓ | API key |
Anyscale | ✓ | API key |
PrivateGPT | ✓ | PrivateGPT running locally |
DeepSeek | ✓ | API key |
Cerebras | ✓ | API key |
Github Models | ✓ | Token |
Novita AI | ✓ | Token |
xAI | ✓ | API key |
General usage (YouTube Demo):
intro-demo.mp4
intro-demo-2.mp4
In-place usage:
gptel-rewrite-demo-1.mp4
Media support:
gptel-image-demo-1.mp4
Multi-LLM support demo:
gptel-multi.mp4
- It’s async and fast, streams responses.
- Interact with LLMs from anywhere in Emacs (any buffer, shell, minibuffer, wherever)
- LLM responses are in Markdown or Org markup.
- Supports multiple independent conversations and one-off ad hoc interactions.
- Supports multi-modal models (send images and documents with your requests)
- Save chats as regular Markdown/Org/Text files and resume them later.
- You can go back and edit your previous prompts or LLM responses when continuing a conversation. These will be fed back to the model.
- Don’t like gptel’s workflow? Use it to create your own for any supported model/backend with a simple API.
gptel uses Curl if available, but falls back to url-retrieve to work without external dependencies.
- Breaking changes!
- Installation
- Setup
- Usage
- FAQ
- I want to use gptel in a way that’s not supported by gptel-send or the options menu
- I want the window to scroll automatically as the response is inserted
- I want the cursor to move to the next prompt after the response is inserted
- I want to change the formatting of the prompt and LLM response
- I want the transient menu options to be saved so I only need to set them once
- Can I change the transient menu key bindings?
- How does gptel distinguish between user prompts and LLM responses?
- (Doom Emacs) Sending a query from the gptel menu fails because of a key conflict with Org mode
- (ChatGPT) I get the error “(HTTP/2 429) You exceeded your current quota”
- Why another LLM client?
- Additional Configuration
- Alternatives
- Acknowledgments
gptel-model is now expected to be a symbol, not a string. Please update your configuration.
gptel can be installed in Emacs out of the box with M-x package-install ⏎ gptel. This installs the latest commit.
If you want the stable version instead, add NonGNU-devel ELPA or MELPA-stable to your list of package sources (package-archives), then install gptel with M-x package-install ⏎ gptel from these sources.
(Optional: Install markdown-mode.)
Clone or download this repository and run M-x package-install-file ⏎ on the repository directory.
Installing the markdown-mode package is optional.
In packages.el
(package! gptel)
In config.el
(use-package! gptel
:config
(setq! gptel-api-key "your key"))
“your key” can be the API key itself, or (safer) a function that returns the key. Setting gptel-api-key is optional; you will be asked for a key if it’s not found.
In your .spacemacs file, add llm-client to dotspacemacs-configuration-layers:
(llm-client :variables
llm-client-enable-gptel t)
Procure an OpenAI API key.
Optional: Set gptel-api-key to the key. Alternatively, you may choose a more secure method such as:
- Storing it in ~/.authinfo. By default, “api.openai.com” is used as HOST and “apikey” as USER:
machine api.openai.com login apikey password TOKEN
- Setting it to a function that returns the key (see the sketch below).
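A minimal sketch of the function approach, using Emacs’ built-in auth-source library (the host and user strings here match the ~/.authinfo defaults mentioned above):
;; Look up the key in ~/.authinfo via auth-source (a sketch)
(setq gptel-api-key
      (lambda ()
        (auth-source-pick-first-password
         :host "api.openai.com" :user "apikey")))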
Register a backend with
(gptel-make-azure "Azure-1" ;Name, whatever you'd like
:protocol "https" ;Optional -- https is the default
:host "YOUR_RESOURCE_NAME.openai.azure.com"
:endpoint "/openai/deployments/YOUR_DEPLOYMENT_NAME/chat/completions?api-version=2023-05-15" ;or equivalent
:stream t ;Enable streaming responses
:key #'gptel-api-key
:models '(gpt-3.5-turbo gpt-4))
Refer to the documentation of gptel-make-azure to set more parameters.
You can pick this backend from the menu when using gptel (see Usage).
The above code makes the backend available to select. If you want it to be the default backend for gptel, you can set this as the value of gptel-backend. Use this instead of the above.
;; OPTIONAL configuration
(setq
gptel-model 'gpt-3.5-turbo
gptel-backend (gptel-make-azure "Azure-1"
:protocol "https"
:host "YOUR_RESOURCE_NAME.openai.azure.com"
:endpoint "/openai/deployments/YOUR_DEPLOYMENT_NAME/chat/completions?api-version=2023-05-15"
:stream t
:key #'gptel-api-key
:models '(gpt-3.5-turbo gpt-4)))
Register a backend with
(gptel-make-gpt4all "GPT4All" ;Name of your choosing
:protocol "http"
:host "localhost:4891" ;Where it's running
:models '(mistral-7b-openorca.Q4_0.gguf)) ;Available models
These are the required parameters; refer to the documentation of gptel-make-gpt4all for more.
You can pick this backend from the menu when using gptel (see Usage).
The above code makes the backend available to select. If you want it to be the default backend for gptel, you can set this as the value of gptel-backend. Use this instead of the above. Additionally, you may want to increase the response token size since GPT4All uses very short (often truncated) responses by default.
;; OPTIONAL configuration
(setq
gptel-max-tokens 500
gptel-model 'mistral-7b-openorca.Q4_0.gguf
gptel-backend (gptel-make-gpt4all "GPT4All"
:protocol "http"
:host "localhost:4891"
:models '(mistral-7b-openorca.Q4_0.gguf)))
Register a backend with
(gptel-make-ollama "Ollama" ;Any name of your choosing
:host "localhost:11434" ;Where it's running
:stream t ;Stream responses
:models '(mistral:latest)) ;List of models
These are the required parameters; refer to the documentation of gptel-make-ollama for more.
You can pick this backend from the menu when using gptel (see Usage).
The above code makes the backend available to select. If you want it to be the default backend for gptel, you can set this as the value of gptel-backend. Use this instead of the above.
;; OPTIONAL configuration
(setq
gptel-model 'mistral:latest
gptel-backend (gptel-make-ollama "Ollama"
:host "localhost:11434"
:stream t
:models '(mistral:latest)))
Register a backend with
;; :key can be a function that returns the API key.
(gptel-make-gemini "Gemini" :key "YOUR_GEMINI_API_KEY" :stream t)
These are the required parameters; refer to the documentation of gptel-make-gemini for more.
You can pick this backend from the menu when using gptel (see Usage).
The above code makes the backend available to select. If you want it to be the default backend for gptel, you can set this as the value of gptel-backend. Use this instead of the above.
;; OPTIONAL configuration
(setq
gptel-model 'gemini-pro
gptel-backend (gptel-make-gemini "Gemini"
:key "YOUR_GEMINI_API_KEY"
:stream t))
(If using a llamafile, run a server llamafile instead of a “command-line llamafile”, and use a model that supports text generation.)
Register a backend with
;; Llama.cpp offers an OpenAI compatible API
(gptel-make-openai "llama-cpp" ;Any name
:stream t ;Stream responses
:protocol "http"
:host "localhost:8000" ;Llama.cpp server location
:models '(test)) ;Any names, doesn't matter for Llama
These are the required parameters; refer to the documentation of gptel-make-openai for more.
You can pick this backend from the menu when using gptel (see Usage).
The above code makes the backend available to select. If you want it to be the default backend for gptel, you can set this as the value of gptel-backend. Use this instead of the above.
;; OPTIONAL configuration
(setq
gptel-model 'test
gptel-backend (gptel-make-openai "llama-cpp"
:stream t
:protocol "http"
:host "localhost:8000"
:models '(test)))
Kagi’s FastGPT model and the Universal Summarizer are both supported. A couple of notes:
- Universal Summarizer: If there is a URL at point, the summarizer will summarize the contents of the URL. Otherwise the context sent to the model is the same as always: the buffer text up to point, or the contents of the region if the region is active.
- Kagi models do not support multi-turn conversations, interactions are “one-shot”. They also do not support streaming responses.
Register a backend with
(gptel-make-kagi "Kagi" ;any name
:key "YOUR_KAGI_API_KEY") ;can be a function that returns the key
These are the required parameters; refer to the documentation of gptel-make-kagi for more.
You can pick this backend and the model (fastgpt/summarizer) from the transient menu when using gptel.
The above code makes the backend available to select. If you want it to be the default backend for gptel, you can set this as the value of gptel-backend. Use this instead of the above.
;; OPTIONAL configuration
(setq
gptel-model 'fastgpt
gptel-backend (gptel-make-kagi "Kagi"
:key "YOUR_KAGI_API_KEY"))
The alternatives to fastgpt include summarize:cecil, summarize:agnes, summarize:daphne and summarize:muriel. The difference between the summarizer engines is documented here.
Register a backend with
;; Together.ai offers an OpenAI compatible API
(gptel-make-openai "TogetherAI" ;Any name you want
:host "api.together.xyz"
:key "your-api-key" ;can be a function that returns the key
:stream t
:models '(;; has many more, check together.ai
mistralai/Mixtral-8x7B-Instruct-v0.1
codellama/CodeLlama-13b-Instruct-hf
codellama/CodeLlama-34b-Instruct-hf))
You can pick this backend from the menu when using gptel (see Usage).
The above code makes the backend available to select. If you want it to be the default backend for gptel, you can set this as the value of gptel-backend. Use this instead of the above.
;; OPTIONAL configuration
(setq
gptel-model 'mistralai/Mixtral-8x7B-Instruct-v0.1
gptel-backend
(gptel-make-openai "TogetherAI"
:host "api.together.xyz"
:key "your-api-key"
:stream t
:models '(;; has many more, check together.ai
mistralai/Mixtral-8x7B-Instruct-v0.1
codellama/CodeLlama-13b-Instruct-hf
codellama/CodeLlama-34b-Instruct-hf)))
Register a backend with
;; Anyscale offers an OpenAI compatible API
(gptel-make-openai "Anyscale" ;Any name you want
:host "api.endpoints.anyscale.com"
:key "your-api-key" ;can be a function that returns the key
:models '(;; has many more, check anyscale
mistralai/Mixtral-8x7B-Instruct-v0.1))
You can pick this backend from the menu when using gptel (see Usage).
The above code makes the backend available to select. If you want it to be the default backend for gptel, you can set this as the value of gptel-backend. Use this instead of the above.
;; OPTIONAL configuration
(setq
gptel-model 'mistralai/Mixtral-8x7B-Instruct-v0.1
gptel-backend
(gptel-make-openai "Anyscale"
:host "api.endpoints.anyscale.com"
:key "your-api-key"
:models '(;; has many more, check anyscale
mistralai/Mixtral-8x7B-Instruct-v0.1)))
Register a backend with
;; Perplexity offers an OpenAI compatible API
(gptel-make-openai "Perplexity" ;Any name you want
:host "api.perplexity.ai"
:key "your-api-key" ;can be a function that returns the key
:endpoint "/chat/completions"
:stream t
:models '(;; has many more, check perplexity.ai
pplx-7b-chat
pplx-70b-chat
pplx-7b-online
pplx-70b-online))
You can pick this backend from the menu when using gptel (see Usage).
The above code makes the backend available to select. If you want it to be the default backend for gptel, you can set this as the value of gptel-backend. Use this instead of the above.
;; OPTIONAL configuration
(setq
gptel-model 'pplx-7b-chat
gptel-backend
(gptel-make-openai "Perplexity"
:host "api.perplexity.ai"
:key "your-api-key"
:endpoint "/chat/completions"
:stream t
:models '(;; has many more, check perplexity.ai
pplx-7b-chat
pplx-70b-chat
pplx-7b-online
pplx-70b-online)))
Register a backend with
(gptel-make-anthropic "Claude" ;Any name you want
:stream t ;Streaming responses
:key "your-api-key")
The :key can be a function that returns the key (more secure).
You can pick this backend from the menu when using gptel (see Usage).
The above code makes the backend available to select. If you want it to be the default backend for gptel, you can set this as the value of gptel-backend. Use this instead of the above.
;; OPTIONAL configuration
(setq
gptel-model 'claude-3-sonnet-20240229 ; "claude-3-opus-20240229" also available
gptel-backend (gptel-make-anthropic "Claude"
:stream t :key "your-api-key"))
Register a backend with
;; Groq offers an OpenAI compatible API
(gptel-make-openai "Groq" ;Any name you want
:host "api.groq.com"
:endpoint "/openai/v1/chat/completions"
:stream t
:key "your-api-key" ;can be a function that returns the key
:models '(llama-3.1-70b-versatile
llama-3.1-8b-instant
llama3-70b-8192
llama3-8b-8192
mixtral-8x7b-32768
gemma-7b-it))
You can pick this backend from the menu when using gptel (see Usage). Note that Groq is fast enough that you could easily set :stream nil and still get near-instant responses.
The above code makes the backend available to select. If you want it to be the default backend for gptel, you can set this as the value of gptel-backend. Use this instead of the above.
;; OPTIONAL configuration
(setq gptel-model 'mixtral-8x7b-32768
gptel-backend
(gptel-make-openai "Groq"
:host "api.groq.com"
:endpoint "/openai/v1/chat/completions"
:stream t
:key "your-api-key"
:models '(llama-3.1-70b-versatile
llama-3.1-8b-instant
llama3-70b-8192
llama3-8b-8192
mixtral-8x7b-32768
gemma-7b-it)))
Register a backend with
;; OpenRouter offers an OpenAI compatible API
(gptel-make-openai "OpenRouter" ;Any name you want
:host "openrouter.ai"
:endpoint "/api/v1/chat/completions"
:stream t
:key "your-api-key" ;can be a function that returns the key
:models '(openai/gpt-3.5-turbo
mistralai/mixtral-8x7b-instruct
meta-llama/codellama-34b-instruct
codellama/codellama-70b-instruct
google/palm-2-codechat-bison-32k
google/gemini-pro))
You can pick this backend from the menu when using gptel (see Usage).
The above code makes the backend available to select. If you want it to be the default backend for gptel, you can set this as the value of gptel-backend. Use this instead of the above.
;; OPTIONAL configuration
(setq gptel-model 'mistralai/mixtral-8x7b-instruct
gptel-backend
(gptel-make-openai "OpenRouter" ;Any name you want
:host "openrouter.ai"
:endpoint "/api/v1/chat/completions"
:stream t
:key "your-api-key" ;can be a function that returns the key
:models '(openai/gpt-3.5-turbo
mistralai/mixtral-8x7b-instruct
meta-llama/codellama-34b-instruct
codellama/codellama-70b-instruct
google/palm-2-codechat-bison-32k
google/gemini-pro)))
Register a backend with
(gptel-make-privategpt "privateGPT" ;Any name you want
:protocol "http"
:host "localhost:8001"
:stream t
:context t ;Use context provided by embeddings
:sources t ;Return information about source documents
:models '(private-gpt))
You can pick this backend from the menu when using gptel (see Usage).
The above code makes the backend available to select. If you want it to be the default backend for gptel, you can set this as the value of gptel-backend. Use this instead of the above.
;; OPTIONAL configuration
(setq gptel-model 'private-gpt
gptel-backend
(gptel-make-privategpt "privateGPT" ;Any name you want
:protocol "http"
:host "localhost:8001"
:stream t
:context t ;Use context provided by embeddings
:sources t ;Return information about source documents
:models '(private-gpt)))
Register a backend with
;; DeepSeek offers an OpenAI compatible API
(gptel-make-openai "DeepSeek" ;Any name you want
:host "api.deepseek.com"
:endpoint "/chat/completions"
:stream t
:key "your-api-key" ;can be a function that returns the key
:models '(deepseek-chat deepseek-coder))
You can pick this backend from the menu when using gptel (see Usage).
The above code makes the backend available to select. If you want it to be the default backend for gptel, you can set this as the value of gptel-backend. Use this instead of the above.
;; OPTIONAL configuration
(setq gptel-model 'deepseek-chat
gptel-backend
(gptel-make-openai "DeepSeek" ;Any name you want
:host "api.deepseek.com"
:endpoint "/chat/completions"
:stream t
:key "your-api-key" ;can be a function that returns the key
:models '(deepseek-chat deepseek-coder)))
Register a backend with
;; Cerebras offers an instant OpenAI compatible API
(gptel-make-openai "Cerebras"
:host "api.cerebras.ai"
:endpoint "/v1/chat/completions"
:stream t ;optionally nil as Cerebras is instant AI
:key "your-api-key" ;can be a function that returns the key
:models '(llama3.1-70b
llama3.1-8b))
You can pick this backend from the menu when using gptel (see Usage).
The above code makes the backend available to select. If you want it to be the default backend for gptel, you can set this as the value of gptel-backend. Use this instead of the above.
;; OPTIONAL configuration
(setq gptel-model 'llama3.1-8b
gptel-backend
(gptel-make-openai "Cerebras"
:host "api.cerebras.ai"
:endpoint "/v1/chat/completions"
:stream nil
:key "your-api-key"
:models '(llama3.1-70b
llama3.1-8b)))
Register a backend with
;; Github Models offers an OpenAI compatible API
(gptel-make-openai "Github Models" ;Any name you want
:host "models.inference.ai.azure.com"
:endpoint "/chat/completions"
:stream t
:key "your-github-token"
:models '(gpt-4o))
For all the available models, check the marketplace.
You can pick this backend from the menu when using gptel (see Usage).
The above code makes the backend available to select. If you want it to be the default backend for gptel, you can set this as the value of gptel-backend. Use this instead of the above.
;; OPTIONAL configuration
(setq gptel-model 'gpt-4o
gptel-backend
(gptel-make-openai "Github Models" ;Any name you want
:host "models.inference.ai.azure.com"
:endpoint "/chat/completions"
:stream t
:key "your-github-token"
:models '(gpt-4o)))
Register a backend with
;; Novita AI offers an OpenAI compatible API
(gptel-make-openai "NovitaAI" ;Any name you want
:host "api.novita.ai"
:endpoint "/v3/openai"
:key "your-api-key" ;can be a function that returns the key
:stream t
:models '(;; has many more, check https://novita.ai/llm-api
gryphe/mythomax-l2-13b
meta-llama/llama-3-70b-instruct
meta-llama/llama-3.1-70b-instruct))
You can pick this backend from the menu when using gptel (see Usage).
The above code makes the backend available to select. If you want it to be the default backend for gptel, you can set this as the value of gptel-backend. Use this instead of the above.
;; OPTIONAL configuration
(setq
gptel-model 'gryphe/mythomax-l2-13b
gptel-backend
(gptel-make-openai "NovitaAI"
:host "api.novita.ai"
:endpoint "/v3/openai"
:key "your-api-key"
:stream t
:models '(;; has many more, check https://novita.ai/llm-api
gryphe/mythomax-l2-13b
meta-llama/llama-3-70b-instruct
meta-llama/llama-3.1-70b-instruct)))
Register a backend with
;; xAI offers an OpenAI compatible API
(gptel-make-openai "xAI" ;Any name you want
:host "api.x.ai"
:key "your-api-key" ;can be a function that returns the key
:endpoint "/v1/chat/completions"
:stream t
:models '(;; xAI now only offers `grok-beta` as of the time of this writing
grok-beta))
You can pick this backend from the menu when using gptel (see Usage).
The above code makes the backend available to select. If you want it to be the default backend for gptel, you can set this as the value of gptel-backend. Use this instead of the above.
;; OPTIONAL configuration
(setq
gptel-model 'grok-beta
gptel-backend
(gptel-make-openai "xAI" ;Any name you want
:host "api.x.ai"
:key "your-api-key" ;can be a function that returns the key
:endpoint "/v1/chat/completions"
:stream t
:models '(;; xAI now only offers `grok-beta` as of the time of this writing
grok-beta)))
gptel provides a few powerful, general-purpose and flexible commands. You can dynamically tweak their behavior to the needs of your task with directives, redirection options and more. There is a video demo showing various uses of gptel – but gptel-send might be all you need.
To send queries | Description |
---|---|
gptel-send | Send all text up to (point), or the selection if region is active. Works anywhere in Emacs. |
gptel | Create a new dedicated chat buffer. Not required to use gptel. |
gptel-rewrite | Rewrite, refactor or change the selected region. Can diff/ediff changes before merging/applying. |
To tweak behavior | |
C-u gptel-send | Transient menu for preferences, input/output redirection etc. |
gptel-menu | (Same) |
To add context | |
gptel-add | Add/remove a region or buffer to gptel’s context. In Dired, add/remove marked files. |
gptel-add-file | Add a file (text or supported media type) to gptel’s context. Also available from the transient menu. |
Org mode bonuses | |
gptel-org-set-topic | Limit conversation context to an Org heading. (For branching conversations see below.) |
gptel-org-set-properties | Write gptel configuration as Org properties, for per-heading chat configuration. |
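gptel does not bind any keys globally; if you use gptel-send often, consider giving it one (an illustrative binding, not a gptel default):
;; Illustrative global binding for gptel-send
(global-set-key (kbd "C-c g") #'gptel-send)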
- Call M-x gptel-send to send the text up to the cursor. The response will be inserted below. Continue the conversation by typing below the response.
- If a region is selected, the conversation will be limited to its contents.
- Call M-x gptel-send with a prefix argument (C-u):
  - to set chat parameters (GPT model, backend, system message etc) for this buffer,
  - to include quick instructions for the next request only,
  - to add additional context – regions, buffers or files – to gptel,
  - to read the prompt from or redirect the response elsewhere,
  - or to replace the prompt with the response.
Note: gptel works anywhere in Emacs. The dedicated chat buffer only adds some conveniences.
- Run M-x gptel to start or switch to the chat buffer. It will ask you for the key if you skipped the previous step. Run it with a prefix-arg (C-u M-x gptel) to start a new session.
- In the gptel buffer, send your prompt with M-x gptel-send, bound to C-c RET.
- Set chat parameters (LLM provider, model, directives etc) for the session by calling gptel-send with a prefix argument (C-u C-c RET).
That’s it. You can go back and edit previous prompts and responses if you want.
The default mode is markdown-mode if available, else text-mode. You can set gptel-default-mode to org-mode if desired.
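For example:
(setq gptel-default-mode 'org-mode)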
gptel supports sending media in Markdown and Org chat buffers, but this feature is disabled by default.
- You can enable it globally, for all models that support it, by setting gptel-track-media (see the one-liner below).
- Or you can set it locally, just for the chat buffer, via the header line.
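Enabling it globally is a one-line setting:
;; Send standalone media links with requests, for models that support them
(setq gptel-track-media t)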
There are two ways to include media with requests:
- Adding media files to the context with gptel-add-file, described further below.
- Including links to media in chat buffers, described here:
To send media – images or other supported file types – with requests in chat buffers, you can include links to them in the chat buffer. Such a link must be “standalone”, i.e. on a line by itself surrounded by whitespace.
In Org mode, for example, the following are all valid ways of including an image with the request:
- “Standalone” file link:
Describe this picture
[[file:/path/to/screenshot.png]]
Focus specifically on the text content.
- “Standalone” file link with description:
Describe this picture
[[file:/path/to/screenshot.png][some picture]]
Focus specifically on the text content.
- “Standalone”, angle file link:
Describe this picture
<file:/path/to/screenshot.png>
Focus specifically on the text content.
The following links are not valid, and the text of the link will be sent instead of the file contents:
- Inline link:
Describe this [[file:/path/to/screenshot.png][picture]]. Focus specifically on the text content.
- Link not “standalone”:
Describe this picture: [[file:/path/to/screenshot.png]]
Focus specifically on the text content.
- Not a valid Org link:
Describe the picture file:/path/to/screenshot.png
Similar criteria apply to Markdown chat buffers.
Saving the file will save the state of the conversation as well. To resume the chat, open the file and turn on gptel-mode before editing the buffer.
Most gptel options can be set from gptel’s transient menu, available by calling gptel-send with a prefix-argument, or via gptel-menu. To change their default values in your configuration, see Additional Configuration. Chat buffer-specific options are also available via the header-line in chat buffers.
Selecting a model and backend can be done interactively via the -m command of gptel-menu. Available registered models are prefixed by the name of their backend, as in ChatGPT:gpt-4o-mini, where ChatGPT is the backend name you used to register it and gpt-4o-mini is the name of the model.
By default, gptel will query the LLM with the active region or the buffer contents up to the cursor. Often it can be helpful to provide the LLM with additional context from outside the current buffer. For example, when you’re in a chat buffer but want to ask questions about a (possibly changing) code buffer and auxiliary project files.
You can include additional text regions, buffers or files with gptel’s queries. This additional context is “live” and not a snapshot. Once added, the regions, buffers or files are scanned and included at the time of each query. When using multi-modal models, added files can be of any supported type – typically images.
You can add a selected region, buffer or file to gptel’s context from the menu, or call gptel-add. (To add a file, use gptel-add in Dired, or use the dedicated gptel-add-file command.)
You can examine the active context from the menu, and then browse through or remove context from the context buffer.
In any buffer: with a region selected, you can modify text, rewrite prose or refactor code with gptel-rewrite. Example with prose:
gptel-rewrite-prose-demo-1.mp4
The result is previewed over the original text. By default, the buffer is not modified. Pressing RET or clicking in the rewritten region gives you a list of options: you can diff, ediff, merge or accept the replacement. Example with code:
gptel-rewrite-code-demo-1.mp4
Acting on the LLM response: if you would like one of these actions (diff, ediff, merge, accept) to happen automatically, you can customize gptel-rewrite-default-action. These options are also available from gptel-rewrite, and you can call them directly when the cursor is in the rewritten region.
gptel offers a few extra conveniences in Org mode.
You can limit the conversation context to an Org heading with the command gptel-org-set-topic. (This sets an Org property (GPTEL_TOPIC) under the heading. You can also add this property manually instead.)
You can have branching conversations in Org mode, where each hierarchical outline path through the document is a separate conversation branch. This is also useful for limiting the context size of each query. See the variable gptel-org-branching-context. If this variable is non-nil, you should probably edit gptel-prompt-prefix-alist and gptel-response-prefix-alist so that the prefix strings for org-mode are not Org headings, e.g.
(setf (alist-get 'org-mode gptel-prompt-prefix-alist) "@user\n")
(setf (alist-get 'org-mode gptel-response-prefix-alist) "@assistant\n")
Otherwise, the default prompt prefix will make successive prompts sibling headings, and therefore on different conversation branches, which probably isn’t what you want.
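Enabling branching itself is a single setting (shown globally here; you can also set the variable buffer-locally):
(setq gptel-org-branching-context t)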
Note: using this option requires Org 9.6.7 or higher to be available. The ai-org-chat package uses gptel to provide this branching conversation behavior for older versions of Org.
You can declare the gptel model, backend, temperature, system message and other parameters as Org properties with the command gptel-org-set-properties. gptel queries under the corresponding heading will always use these settings, allowing you to create mostly reproducible LLM chat notebooks, and to have simultaneous chats with different models, model settings and directives under different Org headings.
gptel’s default usage pattern is simple, and will stay this way: read input in any buffer and insert the response below it. Some custom behavior is possible with the transient menu (C-u M-x gptel-send).
For more programmable usage, gptel provides a general gptel-request function that accepts a custom prompt and a callback to act on the response. You can use this to build custom workflows not supported by gptel-send. See the documentation of gptel-request, and the wiki for examples.
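As a minimal sketch of gptel-request usage, here is a one-off query that echoes the response in the minibuffer. The callback receives the response string (or nil on failure) and an info plist:
;; A minimal gptel-request sketch
(gptel-request
 "Write a haiku about Emacs."
 :callback
 (lambda (response info)
   (if response
       (message "%s" response)
     (message "gptel-request failed: %s" (plist-get info :status)))))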
To be minimally annoying, gptel does not move the cursor by default. Add the following to your configuration to enable auto-scrolling.
(add-hook 'gptel-post-stream-hook 'gptel-auto-scroll)
To be minimally annoying, gptel does not move the cursor by default. Add the following to your configuration to move the cursor:
(add-hook 'gptel-post-response-functions 'gptel-end-of-response)
You can also call gptel-end-of-response as a command at any time.
For dedicated chat buffers: customize gptel-prompt-prefix-alist and gptel-response-prefix-alist. You can set a different pair for each major-mode (see the example below).
Anywhere in Emacs: use gptel-pre-response-hook and gptel-post-response-functions, which see.
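For example, to label turns in Markdown chat buffers (the prefix strings here are illustrative, not gptel defaults):
(setf (alist-get 'markdown-mode gptel-prompt-prefix-alist) "#### Prompt\n")
(setf (alist-get 'markdown-mode gptel-response-prefix-alist) "#### Response\n")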
Any model options you set are saved for the current buffer. But the redirection options in the menu are set for the next query only.
You can make them persistent across this Emacs session by pressing C-x C-s.
(You can also cycle through presets you’ve saved with C-x p and C-x n.)
Now these will be enabled whenever you send a query from the transient menu. If you want to use these saved options without invoking the transient menu, you can use a keyboard macro:
;; Replace with your key to invoke the transient menu:
(keymap-global-set "<f6>" "C-u C-c <return> <return>")
Or see this wiki entry.
Yes, see transient-suffix-put. This changes the key to select a backend/model from “-m” to “M” in gptel’s menu:
(transient-suffix-put 'gptel-menu (kbd "-m") :key "M")
gptel uses text-properties to watermark LLM responses. Thus this text is interpreted as a response even if you copy it into another buffer. In regular buffers (buffers without gptel-mode enabled), you can turn off this tracking by unsetting gptel-track-response.
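For example, to turn it off in a particular buffer (a sketch):
;; Treat all text in this buffer as user input
(setq-local gptel-track-response nil)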
When restoring a chat state from a file on disk, gptel will apply these properties from saved metadata in the file when you turn on gptel-mode.
gptel does not use any prefix or semantic/syntax element in the buffer (such as headings) to separate prompts and responses. The reason for this is that gptel aims to integrate as seamlessly as possible into your regular Emacs usage: LLM interaction is not the objective, it’s just another tool at your disposal. So requiring a bunch of “user” and “assistant” tags in the buffer is noisy and restrictive. If you want these demarcations, you can customize gptel-prompt-prefix-alist and gptel-response-prefix-alist. Note that these prefixes are for your readability only and are purely cosmetic.
Doom binds RET in Org mode to +org/dwim-at-point, which appears to conflict with gptel’s transient menu bindings for some reason.
Two solutions:
- Press C-m instead of the return key.
- Change the send key from return to a key of your choice:
(transient-suffix-put 'gptel-menu (kbd "RET") :key "<f8>")
(HTTP/2 429) You exceeded your current quota, please check your plan and billing details.
Using the ChatGPT (or any OpenAI) API requires adding credit to your account.
Other Emacs clients for LLMs prescribe the format of the interaction (a comint shell, org-babel blocks, etc). I wanted:
- Something that is as free-form as possible: query the model using any text in any buffer, and redirect the response as required. Using a dedicated gptel buffer just adds some visual flair to the interaction.
- Integration with org-mode, not using a walled-off org-babel block, but as regular text. This way the model can generate code blocks that I can run.
Connection options | |
---|---|
gptel-use-curl | Use Curl (default), falling back to Emacs’ built-in url. |
gptel-proxy | Proxy server for requests, passed to curl via --proxy . |
gptel-api-key | Variable/function that returns the API key for the active backend. |
LLM request options | (Note: not supported uniformly across LLMs) |
gptel-backend | Default LLM Backend. |
gptel-model | Default model to use, depends on the backend. |
gptel-stream | Enable streaming responses, if the backend supports it. |
gptel-directives | Alist of system directives, can switch on the fly. |
gptel-max-tokens | Maximum token count (in query + response). |
gptel-temperature | Randomness in response text, 0 to 2. |
gptel-use-context | How/whether to include additional context. |
Chat UI options | |
gptel-default-mode | Major mode for dedicated chat buffers. |
gptel-track-response | Distinguish between user messages and LLM responses? |
gptel-track-media | Send images or other media from links? |
gptel-prompt-prefix-alist | Text inserted before queries. |
gptel-response-prefix-alist | Text inserted before responses. |
gptel-use-header-line | Display status messages in header-line (default) or minibuffer. |
gptel-display-buffer-action | Placement of the gptel chat buffer. |
Org mode UI options | |
gptel-org-branching-context | Make each outline path a separate conversation branch. |
Hooks for customization | |
gptel-save-state-hook | Runs before saving the chat state to a file on disk. |
gptel-pre-response-hook | Runs before inserting the LLM response into the buffer. |
gptel-post-response-functions | Runs after inserting the full LLM response into the buffer. |
gptel-post-stream-hook | Runs after each streaming insertion. |
gptel-context-wrap-function | To include additional context formatted your way. |
gptel-rewrite-default-action | Automatically diff, ediff, merge or replace refactored text. |
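Most of these are ordinary variables you can set in your init file. A short sketch with illustrative values:
;; Illustrative values only -- adjust to taste
(setq gptel-use-curl nil      ;use the built-in url-retrieve instead of Curl
      gptel-temperature 0.7   ;response randomness, 0 to 2
      gptel-max-tokens 1000   ;token limit, see the table above
      gptel-stream t)         ;stream responses when the backend supports it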
Other Emacs clients for LLMs include:
- llm: llm provides a uniform API across language model providers for building LLM clients in Emacs, and is intended as a library for use by package authors. For similar scripting purposes, gptel provides the command gptel-request, which see.
- Ellama: A full-fledged LLM client built on llm, that supports many LLM providers (Ollama, Open AI, Vertex, GPT4All and more). Its usage differs from gptel in that it provides separate commands for dozens of common tasks, like general chat, summarizing code/text, refactoring code, improving grammar, translation and so on.
- chatgpt-shell: comint-shell based interaction with ChatGPT. Also supports DALL-E, executable code blocks in the responses, and more.
- org-ai: Interaction through special #+begin_ai ... #+end_ai Org-mode blocks. Also supports DALL-E, querying ChatGPT with the contents of project files, and more.
There are several more: leafy-mode, chat.el, gpt.el, le-gpt, robby.
gptel is a general-purpose package for chat and ad-hoc LLM interaction. The following packages use gptel to provide additional or specialized functionality:
- gptel-quick: Quickly look up the region or text at point.
- Evedel: Instructed LLM Programmer/Assistant
- Elysium: Automatically apply AI-generated changes as you code
- gptel-extensions: Extra utility functions for gptel
- ai-blog.el: Streamline generation of blog posts in Hugo
- magit-gptcommit: Generate Commit Messages within magit-status Buffer using gptel
- consult-omni: Versatile multi-source search package. It includes gptel as one of its many sources.
- ai-org-chat: Provides branching conversations in Org buffers using gptel. (Note that gptel includes this feature as well (see gptel-org-branching-context), but requires a recent version of Org mode (9.6.7 or later) to be installed.)
- Corsair: Helps gather text to populate LLM prompts for gptel.
- Abin Simon for extensive feedback on improving gptel’s directives and UI.
- Alexis Gallagher and Diego Alvarez for fixing a nasty multi-byte bug with url-retrieve.
- daedsidog for adding context support to gptel.
- Aquan1412 for adding PrivateGPT support to gptel.
- r0man for improving gptel’s Curl integration.