[Feature request] AI #56
I tried something like this in Xcode (Apple's IDE) and didn't like the result at all, so I don't want to spend much time trying to set it up. In any case, let me know what you learn, and if there's some problem with the plugin, I'll try to fix it.
Hi @techee
Main configuration: LSP-AI in Geany
I also wanted to ask you some questions. In Helix, for chat in the editor, you must send a request to the LSP server with
Also, what do you think about supporting
Thanks again for such a great project. Regards.
P.S. An example of AI in Geany: oatmeal chat in the terminal with the ollama backend and the qwen2.5:0.5b LLM.
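For reference, LSP-AI is configured through its LSP initialization options. The sketch below shows the general shape based on LSP-AI's README at the time of writing, pointed at the ollama backend and the qwen2.5:0.5b model mentioned above; the exact key names and supported parameters may differ between LSP-AI versions, so treat this as illustrative, not authoritative:

```json
{
  "memory": {
    "file_store": {}
  },
  "models": {
    "model1": {
      "type": "ollama",
      "model": "qwen2.5:0.5b"
    }
  },
  "completion": {
    "model": "model1",
    "parameters": {
      "max_context": 2000
    }
  }
}
```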
@Johnmcenroyy Thanks for testing this (I haven't tried it myself, though).
The key setting is command_1_regex=Chat; then, in Geany's Edit->Preferences->Keybindings, you assign the keybinding you wish for this action. After that, this keybinding will always start the chat session for you. I've just added another keybinding to directly show the code lens menu, which might be more convenient than right-clicking and navigating to the Commands submenu.
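Putting that together, a plugin configuration excerpt might look like the following. The section name and the cmd key are illustrative assumptions about the plugin's per-filetype config layout; only command_1_regex=Chat comes from the discussion above:

```ini
# Hypothetical excerpt from the Geany LSP plugin configuration.
# The [Markdown] section name and cmd key are illustrative.
[Markdown]
cmd=lsp-ai
# Expose the server's "Chat" code action as command 1 so it can be
# bound to a key in Edit->Preferences->Keybindings.
command_1_regex=Chat
```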
Alright, I haven't studied the 3.18 draft specification yet. This wouldn't be such a big problem on the LSP side; the bigger problem is how to display it in the Scintilla editor, as I don't think it supports anything like this grayed-out text over which you can type and which becomes "official" and colorized only after pressing some keybinding. As far as I know, when you insert something in Scintilla, it gets fully colorized immediately and behaves like the rest of the code. For the same reason I don't support the
Possibly something like this (with some limitations) could be implemented using
By the way, there's also https://github.com/TabbyML/tabby, which seems to be more popular than LSP-AI.
Ah, yes, my fault, I must read the docs more carefully. I had always opened the default conf file and the docs but didn't notice that.
The problem is that there are no commands in the Commands submenu. I'd be grateful if you could look into it.
Super !)
So let's wait for Scintilla support )
Interesting, but it seems that for now there is no LSP server that supports
Oh, a really interesting project. It seems that at first they didn't have an LSP server, but now there is one. Thanks for the info and the additions to the plugin.
Thanks for the logs - those were really helpful. I believe I've fixed the problem - I've just added
Not sure if this ever happens, but https://scintilla.org/ScintillaDoc.html#Annotations might work at least somewhat. In any case, I'll probably just wait until
Just curious - do these AI tools provide some useful stuff for normal coding apart from those typical demo things like min/max/factorial/Fibonacci numbers/quicksort/etc.?
Thank you very much, yes, it works now.
Really don't know what to say, I was curious about this too ) Let's say completion really works (ideally), adding whole methods etc. I'm not sure that's a good idea, because you can't keep the full image of the code in your brain, but maybe it depends on the psychology of certain people.

I was really interested in testing local AI so as not to depend on services and so on, and the quality really depends on the LLM. For me the best seemed to be Gemma2 with the 9b base, but it needs to run on Vulkan or CUDA/ROCm to be really usable; only Gemma2 9b wrote a fully functional calculator ) There is also Gemma2 with the 27b base, but I can't run it on my hardware.

For now, on my setup, I think chat can be useful in some situations, but I'm not sure about completions; maybe with inline completions and a good LLM it would be worth it. It needs more testing with various LLMs and configurations. Overall this demo https://www.tabnine.com/blog/introducing-inline-code-completions/ looks nice. There is an interesting new study about AI and its help for coding

So for now lsp-ai works - chat and completions - but it needs inline completions (lsp-ai doesn't support them for now). As for TabbyML, it seems much more functional than lsp-ai, but for now I cannot run it through Geany; I will test it more.
Thanks for your insight. Based on my experience with Xcode, which added something like that using some cloud implementation, it seemed to kind of work sometimes, but I found it extremely distracting - it forces you to constantly switch between writing the code you want to write and reviewing whether the LLM's code suggestions make sense, and at least for me, this isn't the workflow I like. So at least for now I'm not planning to spend much time in this area.
Hi @techee. I found an interesting project, LSP-AI (an open-source language server bringing Copilot power to all editors, designed to assist and empower software engineers, not replace them): https://github.com/SilasMarvin/lsp-ai. It doesn't seem to work by default, but I think it just needs more configuration. If you have time/interest in it, please take a look. Thanks.
P.S. I will try to run it and post logs here and all info that I can find.