- fix prompting in `org-ai-refactor-code`
- fix progress reporter for non-streamed responses (fixes completion for o1 models)
- Undo evil integrations due to #133.
- Support for OpenAI o1-preview and o1-mini models. Set them as `:model o1-preview` parameter or via `(setq org-ai-default-chat-model "o1-mini")`.
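
  An org-ai block using one of these models might look like this (a sketch; the `[ME]:` chat marker follows the README's block syntax):

  ```
  #+begin_ai :model o1-preview
  [ME]: Summarize the halting problem in one paragraph.
  #+end_ai
  ```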
- Fix for processing perplexity.ai responses. Thanks @lneely!
- evil integrations for Ex range commands. Thanks @tynesjo
- improve org-ai-on-project file selection:
  - when a region is active on the file list, toggle the checkboxes in that region ("multi-select")
  - keep the selection when the file filter is changed; this makes it easier to search for and choose specific files in large projects
  - show a summary and byte size of the selected files (it would probably be better to estimate tokens, but for now it's byte size)
- fix input in text fields in org-ai-on-project for keys that are bound to commands in a non-text context
- Internal changes around API request handling that normalize OpenAI and Anthropic requests so that the same downstream logic can be used for both. In case code depended on the old behavior this might be a breaking change, so I bumped the minor version.
- Fixes for org-ai-on-project:
  - Should now work with Anthropic models
  - Fixed prompt/completion insertion behavior
- The changes to `org-ai--openai-get-token` were not working as intended. This is now fixed.
- `org-ai--openai-get-token` now checks for an empty string, fixes #113, thanks @andreas-roehler!
- Use the value of `org-ai-service` when computing messages. This fixes an issue when e.g. Anthropic was used as the service and no `:service` header arg was specified, e.g. in the non-org-mode functions. Thanks @dangirsh!
- Add new OpenAI models to `org-ai-chat-models` and check if a model name is mistyped. Requested in #110.
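
  If you use a model name that is not in that list, you can register it yourself (a sketch; it assumes `org-ai-chat-models` is a plain list of model-name strings):

  ```elisp
  ;; Hypothetical: register an additional model so it is not flagged as mistyped.
  (add-to-list 'org-ai-chat-models "my-custom-model")
  ```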
- Option to auto-fill paragraphs on insertion from the AI: with `(setq org-ai-auto-fill t)`, insertion will invoke `fill-paragraph` when not inside a code block.
- fix(org-ai-openai-image): make generating images work with Org 9.7. Thank you @JakDar!
- org-ai blocks now accept a service parameter to switch between different endpoints and service-specific behavior:

  ```
  #+begin_ai :service anthropic
  ...
  #+end_ai
  ```

  The same can be achieved globally by setting `org-ai-service`. Possible values are listed in the readme.
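
  The global setting might look like this (a sketch; whether the value is a symbol or a string, and which values are accepted, follows the readme):

  ```elisp
  ;; Use Anthropic for all org-ai requests (value assumed from the readme).
  (setq org-ai-service 'anthropic)
  ```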
- Support for the Anthropic Claude API. See the readme.
- Support for the perplexity.ai API. See the readme.
- org-ai block attributes can now be set with org drawer properties (see #99, https://github.com/rksm/org-ai#for-chatgpt). Thank you @doctorguile!
- Support for DALL·E-3! Should now work out of the box. org-ai image blocks now also support attributes like `:model`, `:quality`, and `:style`. See the README for details.
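
  An image block using these attributes might look like this (a sketch; the `:image` flag and the attribute values are assumptions based on the README, not verified here):

  ```
  #+begin_ai :image :model dall-e-3 :quality hd :style vivid
  A watercolor painting of a lighthouse at dusk.
  #+end_ai
  ```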
- Fix for selecting text in `org-ai-prompt-in-new-buffer`
- Added `org-ai-prompt-in-new-buffer` command to query for a prompt and then run that in a new buffer.
- Mark `org-ai-default-inject-sys-prompt-for-all-messages` as deprecated. No need for this any longer.
- Fix inserting system prompt, thx @doctorguile!
- Added notes about how to use Azure and Azure specific auth, thx @tillydray
- Introduced `org-ai-oobabooga-create-prompt-function` that can be used to customize the prompt creation for local LLMs. It defaults to `org-ai-oobabooga-create-prompt-default`, which uses the values of the variables `org-ai-oobabooga-system-prefix`, `org-ai-oobabooga-user-prefix` and `org-ai-oobabooga-assistant-prefix` to assemble the prompt text. Example:

  ```elisp
  (funcall org-ai-oobabooga-create-prompt-function
           [(:role system :content "system")
            (:role user :content "hello")
            (:role assistant :content "world")])
  ;; => "PROMPT: system\n\nYou: hello\n\nAssistant: world\n\n"
  ```
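
  A custom prompt function could look like this (a minimal sketch; it assumes the messages arrive as a sequence of plists with `:role` and `:content`, as in the example above):

  ```elisp
  ;; Hypothetical replacement that renders each message as "ROLE: content".
  (setq org-ai-oobabooga-create-prompt-function
        (lambda (messages)
          (concat (mapconcat (lambda (msg)
                               (format "%s: %s"
                                       (upcase (symbol-name (plist-get msg :role)))
                                       (plist-get msg :content)))
                             messages
                             "\n\n")
                  "\n\n")))
  ```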
- Add support for local LLMs via oobabooga/text-generation-webui. See issue #78.
- add macro `org-ai--org-element-with-disabled-cache`. `org-element-with-disabled-cache` is not available pre org-mode 9.6.6. (resolves #77)
- Fix `org-ai-global-mode` definition (#76)
- Add `(require 'org-macs)` for `org-element-with-disabled-cache` that was added in 0.3.10
- Allow dynamic `org-ai-on-region-file` (#71), i.e. you can set a function to handle it (see the sketch below)
- Emacs 27 as minimal required version
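
A dynamic `org-ai-on-region-file` might look like this (a minimal sketch; it assumes the function is called with no arguments and returns the file name to use):

```elisp
;; Hypothetical: start a fresh conversation file every day.
(setq org-ai-on-region-file
      (lambda ()
        (expand-file-name (format-time-string "org-ai-on-region-%Y-%m-%d.org")
                          org-directory)))
```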
- call `org-element-cache-reset` after insertion to avoid org warnings (see #16 and #63)
- Double check special block properties when extracting org-ai block.
- Add `gpt-3.5-turbo-16k` to `org-ai-chat-models`
- fix detecting role vs content in the API response that has changed with the last OpenAI update (#58)
- Support for stable diffusion based image generation through stable-diffusion-webui (Huge Thanks @yaak0!).

  ```
  #+begin_ai :sd-image
  <PROMPT>
  #+end_ai
  ```
- Better noweb support. (Big Thank You @togakangaroo!)
- make token retrieval via `auth-source` on demand
- noweb support (#46)
- Bind canceling `org-ai-on-project--confirm-selection` to `C-c C-k` instead of `C-c k`.
- Ensure that request errors are shown (fix #43)
- Use markers instead of positions when inserting streamed content
- For the global mode, use a prefix-map, `org-ai-global-prefix-map`. It is bound to `C-c M-a` by default but can be easily bound to another key.
- don't show `org-ai-global-mode` in the mode-line
- no `show-trailing-whitespace` for org-ai-on-project
- load `org-ai-talk` by default
- `org-ai-on-project`: Load by default.
- with consecutive `org-ai-kill-region-at-point` invocations, append kill in the right order (#32)
- `org-ai-on-project`: Offers a new method for running prompts and modifying multiple files at once.
- `org-ai-on-region` (and related functions such as `org-ai-refactor`) do not quote user input with "> " anymore as this could be repeated in the output (see #30)
- `org-ai-on-region` now outputs both prompt and answer into an org-mode buffer. This allows you to easily edit the answer and re-run the prompt with `C-c C-c` (see #29). You can customize the variable `org-ai-on-region-file` to specify a file to store the conversations in, e.g. `(setq org-ai-on-region-file (expand-file-name "org-ai-on-region.org" org-directory))`. New prompts will be appended.
- Deactivating read-only mode when reusing the `*org-ai-on-region*` buffer. This blocked repeated on-region requests if the buffer was still open.
- Another attempt to fix unicode / multi-byte inputs. The string encoding we had before 0.2.3 was correct, but I missed that the entirety of the request body needs to be encoded as utf-8, including the headers. In my case my OpenAI auth token was not utf-8, so I got a `Multibyte text in HTTP request` error and wrongly assumed this was due to the content. This should be fixed now.
- Make `org-ai-summarize` treat the region as text, not as code (#22)
- fix "Error on checkbox C-c C-c after org-ai loaded" (#23)
- correctly encode UTF-8 strings when sending them to the API. This should fix non-ascii characters and multi-byte characters such as emojis in the input.
- start markdown codeblock AI responses on their own line (see #17 (comment))
- add `"ai"` to `org-protecting-blocks` to fix the block syntax highlighting.
- `org-ai-switch-chat-model`
- No max-tokens in ai snippet
- modified global shortcuts:
  - `C-c M-a r`: `org-ai-on-region`
  - `C-c M-a c`: `org-ai-refactor-code`
  - `C-c M-a s`: `org-ai-summarize`
  - `C-c M-a m`: `org-ai-switch-chat-model`
  - `C-c M-a !`: `org-ai-open-request-buffer`
  - `C-c M-a $`: `org-ai-open-account-usage-page`
  - `C-c M-a t`: `org-ai-talk-input-toggle`
  - `C-c M-a T`: `org-ai-talk-output-toggle`
- Make greader dependency optional
- In org-mode / `#+begin_ai..#+end_ai` blocks: `C-c r` to record and transcribe speech via whisper.el.
- Everywhere else:
  - Enable speech input with `org-ai-talk-input-toggle` for other commands (see below).
  - Enable speech output with `org-ai-talk-output-enable`. Speech output uses the OS-internal speech synth (macOS) or `espeak` otherwise.
- `org-ai-prompt`: prompt the user for a text and then print the AI's response in the current buffer.
- `org-ai-on-region`: Ask a question about the selected text or tell the AI to do something with it.
- `org-ai-summarize`: Summarize the selected text.
- `org-ai-explain-code`: Explain the selected code.
- `org-ai-refactor-code`: Tell the AI how to change the selected code; a diff buffer will appear with the changes.
- In org-mode / `#+begin_ai..#+end_ai` blocks:
  - Press `C-c <backspace>` (`org-ai-kill-region-at-point`) to remove the chat part under point.
  - `org-ai-mark-region-at-point` will mark the region at point.
  - `org-ai-mark-last-region` will mark the last chat part.
- `org-ai-open-account-usage-page` shows how much money you burned.
- `org-ai-install-yasnippets` installs snippets for `#+begin_ai..#+end_ai` blocks.
- `org-ai-open-request-buffer` for debugging, opens the request buffer.
- Support for retrieving `org-ai-openai-api-token` from the `authinfo` file.
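
  The `~/.authinfo` entry could look something like this (a sketch; the exact `machine`/`login` fields to use are documented in the README):

  ```
  machine api.openai.com login org-ai password <your-api-token>
  ```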
- org-ai yasnippets
- Correctly utf decode/encode input/output.
- Set mark before inserting output. This allows you to quickly select the output with `C-x C-x`.
- Fix parsing messages that contain a `:`.
- Add `org-ai-default-chat-system-prompt`, `org-ai-default-inject-sys-prompt-for-all-messages`, the `:sys-everywhere` option, and allow specifying custom per-block `[SYS]:` system prompts (see the sketch below).
- Correctly assign system, user, assistant roles (see https://platform.openai.com/docs/guides/chat/introduction). Before, I mixed up system/assistant roles.
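
A per-block system prompt might look like this (a sketch; the `[ME]:` marker is assumed from the README's chat syntax):

```
#+begin_ai
[SYS]: You are a terse assistant that answers in one sentence.

[ME]: What does org-ai-on-region do?
#+end_ai
```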
- `#+begin_ai...#+end_ai` blocks that can be used for:
  - chatgpt input / output
  - "normal" text completion with older gpt models
  - image generation (text -> image)
- `org-ai-image-variation` command to generate variations of an image.